The Public Sector ML team at Scale deploys advanced AI systems, including LLMs, agentic models, and multimodal pipelines, into mission-critical government environments. We build evaluation frameworks that ensure these models operate reliably, safely, and effectively under real-world constraints. As an ML Engineer, you will design, implement, and scale automated evaluation pipelines that help customers trust and operationalize advanced AI systems across defense, intelligence, and federal missions.
Responsibilities
- Develop and maintain automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge–based evaluations.
- Design test datasets and benchmarks to measure generalization, bias, explainability, and failure modes.
- Build evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing.
- Conduct comparative analyses of model architectures, training procedures, and evaluation outcomes.
- Implement tools for continuous monitoring, regression testing, and quality assurance for ML systems.
- Design and execute stress tests and red-teaming workflows to uncover vulnerabilities and edge cases.
- Collaborate with operations teams and subject matter experts to produce high-quality evaluation datasets.
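To give a concrete sense of the kind of work described above, here is a minimal sketch of an automated evaluation pipeline with a pluggable judge. The `EvalCase` structure, the `keyword_judge` stand-in (a simple heuristic used here in place of a real LLM judge), and the `run_eval` report format are all illustrative assumptions, not an actual Scale system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    response: str
    reference: str

def keyword_judge(case: EvalCase) -> float:
    # Stand-in for an LLM judge: in a real pipeline this would call
    # a judge model with a grading rubric. Here we score 1.0 if the
    # reference answer appears in the response, else 0.0.
    return 1.0 if case.reference.lower() in case.response.lower() else 0.0

def run_eval(cases: list[EvalCase],
             judge: Callable[[EvalCase], float]) -> dict:
    # Score every case, then aggregate into a report suitable for
    # regression tracking (mean score plus the failing prompts).
    scores = [judge(c) for c in cases]
    return {
        "n": len(scores),
        "mean_score": sum(scores) / len(scores) if scores else 0.0,
        "failures": [c.prompt for c, s in zip(cases, scores) if s < 1.0],
    }

cases = [
    EvalCase("Capital of France?", "The capital of France is Paris.", "Paris"),
    EvalCase("2 + 2?", "The answer is 5.", "4"),
]
report = run_eval(cases, keyword_judge)
print(report["mean_score"])  # 0.5
```

Because the judge is just a callable, the same harness can run heuristic checks, model-graded rubrics, or scenario-based agent tests by swapping in a different scoring function.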
Benefits
- Comprehensive health, dental, and vision coverage
- Retirement benefits
- Learning and development stipend
- Generous Paid Time Off
- Commuter stipend