Join our mission-driven team as we build out current and future generations of robots, developing and deploying predictive perception systems that fuse multi-sensor robot data into a unified representation of the near future.
Responsibilities
- Develop multimodal world-model architectures that fuse camera, LiDAR/depth, and robot-state inputs to produce short-horizon predictions.
- Build and maintain training pipelines: dataset construction, tokenization/backbones, distributed training, and ablation frameworks.
- Define model evaluation metrics and regression suites that reflect real robot outcomes.
- Create visualization/debug tooling for temporal predictions (rollouts, replays, overlays, failure case inspection).
- Optimize and distill models for edge deployment; benchmark latency, memory usage, and stability on target hardware.
- Collaborate with the AI Platform team to integrate the world model into autonomy stacks and validate behavior.
- Work with Operations to identify failure modes in the field and drive data curation and model iteration.