Deepgram is seeking a Research Scientist to pioneer the development of Latent Space Models (LSMs) for voice AI. The role involves solving fundamental data, scale, and cost challenges associated with building robust, contextualized voice AI. The ideal candidate will have a strong mathematical foundation in statistical learning theory, deep expertise in foundation model architectures, and a proven ability to bridge theory and practice.
Requirements
- Strong mathematical foundation in statistical learning theory, particularly in areas relevant to self-supervised and multimodal learning
- Deep expertise in foundation model architectures, with an understanding of how to scale training across multiple modalities
- Proven ability to bridge theory and practice: someone who can both derive novel mathematical formulations and implement them efficiently
- Demonstrated ability to build data pipelines that can process and curate massive datasets while maintaining quality and diversity
- Track record of designing controlled experiments that isolate the impact of architectural innovations and validate theoretical insights
- Experience optimizing models for real-world deployment, including knowledge of hardware constraints and efficiency techniques
- History of open-source contributions or research publications that have advanced the state of the art in speech/language AI
Benefits
- Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA
- Deepgram is an equal opportunity employer
- Accommodations are available for applicants who need them