Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a rapidly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Requirements
- Motivated by reducing catastrophic risks from advanced AI systems
- Excited to transition into empirical AI safety research full-time, and interested in a full-time role at Anthropic afterward
- Have a strong technical background in computer science, mathematics, physics, cybersecurity, or related fields
- Thrive in fast-paced, collaborative environments
- Can implement ideas quickly and communicate clearly
- Fluent in Python programming
- Available to work full-time on the Fellows program for 4 months
Benefits
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD
- Access to a shared workspace (in either Berkeley, California, or London, UK)
- Connection to the broader AI safety research community
- Funding for compute (~$15k/month) and other research expenses
- Optional equity donation matching
- Generous vacation and parental leave
- Flexible working hours