
OpenAI's mission is to ensure that AI benefits all of humanity, with safety, diversity, and broadly shared benefits as core priorities.
The Alignment team at OpenAI works to ensure that our AI systems remain safe, trustworthy, and aligned with human values, even as they scale in complexity and capability. As a Research Engineer / Research Scientist on the team, you will be at the forefront of ensuring that our AI systems follow human intent, even in complex and unpredictable scenarios.