We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. Join us on our mission and shape the future!
Requirements
- 5+ years of engineering experience running production infrastructure at a large scale
- Experience designing large, highly available distributed systems on Kubernetes, including running GPU workloads on those clusters
- Experience developing, operating, and supporting Kubernetes in production
- Experience with GCP, Azure, AWS, or OCI, including multi-cloud, on-prem, and hybrid serving environments
- Experience in designing, deploying, supporting, and troubleshooting in complex Linux-based computing environments
- Experience in compute/storage/network resource and cost management
- Excellent collaboration and troubleshooting skills for building mission-critical systems and keeping operations and teamwork running smoothly
- The grit and adaptability to solve complex technical challenges that evolve day to day
- Familiarity with the computational characteristics of accelerators (GPUs, TPUs, and/or custom accelerators), especially how they influence the latency and throughput of inference
- Strong understanding of, or working experience with, distributed systems
- Experience with Go, C++, or other languages designed for high-performance, scalable servers
Benefits
- Weekly lunch stipend, in-office lunches & snacks
- Full health and dental benefits, including a separate budget to take care of your mental health
- 100% parental leave top-up for up to 6 months
- Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
- Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
- 6 weeks of vacation (30 working days!)