
OpenAI's mission is to ensure that artificial intelligence benefits all of humanity, with a commitment to safety, diversity, and broadly shared benefits.
The Safety Reasoning Research team at OpenAI is seeking a Research Engineer/Scientist to develop innovative machine learning techniques that improve the safety understanding and capabilities of our foundation models. The role involves conducting applied research, developing AI moderation models, and contributing to research on multimodal content analysis.