Australia, New Zealand, Poland, Indonesia, Thailand, Taiwan, and Hong Kong
Technology & Development
Binance is a leading global blockchain ecosystem behind the world’s largest cryptocurrency exchange by trading volume and registered users. We are trusted by over 280 million people in 100+ countries for our industry-leading security, user fund transparency, trading engine speed, deep liquidity, and an unmatched portfolio of digital-asset products. Binance’s offerings range from trading and finance to education, research, payments, institutional services, Web3 features, and more. We leverage the power of digital assets and blockchain to build an inclusive financial ecosystem that advances the freedom of money and improves financial access for people around the world.
About the Role
We are seeking an LLM Algorithm Engineer (Safety First) to join our AI/ML team, with a focus on building robust AI guardrails and safety frameworks for large language models (LLMs) and intelligent agents. This role is pivotal in ensuring trust, compliance, and reliability in Binance’s AI-powered products such as Customer Support Chatbots, Compliance Systems, Search, and Token Reports.
Responsibilities:
Design and build an AI Guardrails framework as a safety layer for LLMs and agent workflows
Define and enforce safety, security, and compliance policies across applications
Detect and mitigate prompt injection, jailbreaks, hallucinations, and unsafe outputs (see the first sketch after this list)
Implement privacy and PII protection: redaction, obfuscation, minimisation, and data residency controls (see the second sketch after this list)
Build red-teaming pipelines, automated safety tests, and risk monitoring tools
Continuously improve guardrails to address new attack vectors, policies, and regulations
Fine-tune or optimise LLMs for trading, compliance, and Web3 tasks
Collaborate with Product, Compliance, Security, Data, and Support to ship safe features
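To make the guardrails work above concrete, here is a minimal sketch of what a safety layer wrapped around an LLM call could look like. All names here (Verdict, check_prompt_injection, guarded_completion) and the keyword heuristic are hypothetical illustrations, not Binance's actual framework; a production system would rely on trained classifiers, richer policies, and logging rather than a keyword list.

    import re
    from dataclasses import dataclass

    # Hypothetical verdict object returned by each guardrail check.
    @dataclass
    class Verdict:
        allowed: bool
        reason: str = ""

    # Naive keyword heuristic for prompt injection; a real system would
    # combine trained classifiers with many more signals.
    INJECTION_PATTERNS = [
        r"ignore (all|previous) instructions",
        r"reveal your system prompt",
    ]

    def check_prompt_injection(user_input: str) -> Verdict:
        for pattern in INJECTION_PATTERNS:
            if re.search(pattern, user_input, re.IGNORECASE):
                return Verdict(False, f"possible injection: {pattern!r}")
        return Verdict(True)

    def guarded_completion(user_input: str, llm_call) -> str:
        """Wrap an LLM call with pre- and post-checks (the safety layer)."""
        pre = check_prompt_injection(user_input)
        if not pre.allowed:
            return "Request blocked by safety policy."
        output = llm_call(user_input)
        # Post-check: withhold outputs containing internal markers (illustrative).
        if "INTERNAL" in output:
            return "Response withheld by safety policy."
        return output

The design point is that both input and output pass through policy checks, so new attack vectors can be addressed by adding or updating checks without touching application code.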
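Similarly, the PII-protection item could start from simple typed redaction, as in the second sketch below. The patterns and the redact_pii helper are assumptions for illustration only; production PII detection typically combines pattern matching with NER models and locale-aware rules.

    import re

    # Illustrative patterns only; real detectors cover many more PII types.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def redact_pii(text: str) -> str:
        """Replace detected PII spans with typed placeholders."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    # redact_pii("Reach me at jane.doe@example.com or +44 20 7946 0958")
    # -> "Reach me at [EMAIL] or [PHONE]"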
Requirements:
Master’s/PhD in Machine Learning, AI, Computer Science, or related field
Research track record at top venues (e.g., ICLR, NeurIPS, ACL, ICML) is a plus