We are looking for a Principal Data Engineer to own and build the production-grade data layer that powers a Claims AI / Intelligent Suite running on Azure Databricks.
Requirements
- 7+ years of experience in Data Engineering with strong hands-on experience building production pipelines on Databricks (PySpark, Delta Lake, Delta Live Tables) – Databricks is a must.
- Deep knowledge of Delta Lake optimization, streaming and batch patterns, schema evolution, and performance tuning.
- Solid experience with Unity Catalog, data governance, and secure data access patterns.
- Strong SQL and PySpark skills, with a production engineering mindset.
- Experience building CI/CD for data pipelines and operating data systems in production.
- Strong Azure fundamentals (ADLS Gen2, identities, Key Vault, security and networking concepts).
- Data quality and reliability mindset with experience implementing observability and quality frameworks.
- Ability to work independently and take ownership after high-level direction is provided (“run with the ball”).
- Insurance domain experience (P&C / Commercial Liability) and experience with AI/ML data platforms are strong pluses.
- Databricks Certified Data Engineer Professional or Microsoft Azure Data Engineer Associate certification is a plus.
Benefits
- Learning opportunities
- Travel opportunities to attend industry conferences and meet clients
- Mentoring and development
- Flexible working options to help you strike the right balance
- Special day rewards to celebrate birthdays, work anniversaries, and other personal milestones
- Company-provided equipment