We are looking for exceptional research engineers to help shape our empirical understanding of AI safety concerns and to own individual research threads end-to-end. The role involves identifying emerging AI safety risks, building evaluations of frontier AI models, designing scalable systems, and contributing to risk management.
Requirements
- Passionate and knowledgeable about short-term and long-term AI safety risks
- Able to think outside the box, with a robust ‘red-teaming mindset’
- Experienced in ML research engineering, ML observability and monitoring, building large language model-enabled applications, and/or another technical domain applicable to AI risk
Benefits
- Reasonable accommodations provided to applicants with disabilities
- Equal opportunity employer
- A team that believes artificial intelligence has the potential to help people solve immense global challenges