We are seeking a software engineer to help build out our trust and safety capabilities by designing and implementing systems that detect and prevent abuse, promote user safety, and reduce risk across our platform.
Responsibilities
- Architect, build, and maintain anti-abuse and content moderation infrastructure that protects our platform and end users from unwanted behavior.
- Work closely with our engineers and researchers to apply both industry-standard and novel AI techniques to measure, monitor, and improve AI models' alignment with human values.
- Diagnose and remediate active incidents on the platform, and build new tooling and infrastructure that addresses the root causes of system failures.