Join our growing community of top-tier AI & ML professionals to help solve real-world problems and shape the future of intelligent systems.
Requirements
- 6+ years of experience as a Data Engineer
- Proficient in PySpark and SQL for large-scale data processing
- Deep understanding of Delta Lake features such as ACID transactions, time travel, and schema enforcement and evolution
- Experience working with cloud platforms (e.g., AWS, Azure, or GCP)
- Hands-on experience with Databricks Auto Loader, Structured Streaming, and job scheduling
- Familiarity with Unity Catalog for multi-workspace governance and fine-grained data access
- Experience integrating with orchestration tools and using infrastructure-as-code for deployment
- Comfortable with version control and automation using Git, Databricks Repos, dbx, or Terraform
- Experience with performance tuning, Z-Ordering, caching strategies, and partitioning best practices
Benefits
- Structured Onboarding Process
- Ecosystem of Opportunity
- Collaborative Environment
- Flexible & Impact-Driven Work
- Talent-Led Innovation