We are seeking a senior researcher with a passion for AI safety and hands-on safety research experience to work on projects that make our AI systems safer, more aligned, and more robust to adversarial or malicious use.
Requirements
- Ph.D. or equivalent degree in computer science, machine learning, or a related field
- 4+ years of experience in AI safety, especially in areas such as RLHF, adversarial training, robustness, and fairness & bias
- Experience with safety work for AI model deployments
- In-depth understanding of deep learning research and/or strong engineering skills
Equal Opportunity
- We are committed to being an equal opportunity employer
- We provide reasonable accommodations to applicants with disabilities