Join the Safety Systems team at OpenAI as a Senior Researcher to design and execute cutting-edge attacks, build adversarial evaluations, and advance our understanding of how safety measures can fail—and how to fix them.
Requirements
- Ph.D., master's degree, or equivalent experience in computer science, machine learning, security, or a related discipline
- 4+ years of experience in AI red-teaming, security research, adversarial ML, or related safety fields
- Fluency in modern ML/AI techniques and comfort hacking on large-scale codebases and evaluation infrastructure
Benefits
- Competitive compensation, equity, and benefits
- Access to cutting-edge models, tooling, and compute resources
- A highly collaborative, mission-driven environment with world-class colleagues