We are looking for a Senior Data Engineer to join a team supporting global claims reporting for a leading reinsurance organisation. You will build reliable data pipelines and scalable data solutions using Azure and Databricks.
Requirements
- Strong experience with Azure Databricks (PySpark / Spark SQL)
- Solid Python and SQL skills
- Experience building data pipelines and data models
- Good understanding of Delta Lake / Lakehouse architecture
- Experience working with Azure DevOps and CI/CD
- Comfortable working with large, complex datasets
- Able to collaborate effectively with cross-functional teams and stakeholders
Benefits
- Job security
- Potential for professional growth and development