The Artificial Intelligence (AI) Safety Protections team within Trust and Safety develops and implements AI/Large Language Model (LLM)-powered solutions to ensure the safety of Generative AI foundation models. This role involves mitigating risks associated with Generative AI and addressing safety challenges in LLM/AI technology.
Requirements
- Bachelor's degree or equivalent practical experience
- 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data
- 2 years of experience in managing projects and defining project scope, goals, and deliverables
- Experience in abuse and fraud environments, including web security, content moderation, and threat analysis
- Experience with programming languages (e.g., Python, R, Julia, Java, C or C++)
- Experience in applying machine learning techniques to datasets
- Excellent problem-solving and critical thinking skills, with attention to detail in a changing environment