We are seeking a highly skilled Senior Data Engineer with expertise in Databricks, PySpark, and data modeling to design and lead scalable data solutions on Azure.
Requirements
- 2–3 years of hands-on experience with Azure Databricks in enterprise-scale projects.
- 2+ years of hands-on experience with PySpark for data transformation and processing.
- 2–3 years of experience with Azure Data Factory (ADF) for pipeline orchestration and integration.
- Strong proficiency in SQL for data querying and manipulation.
- Proven experience in data modeling (dimensional modeling, normalization, star/snowflake schema).
- Solid understanding of Delta Lake architecture, data partitioning, and performance tuning.
- Experience working with structured and semi-structured data (e.g., JSON, Parquet, Avro).
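To give candidates a concrete sense of the dimensional-modeling requirement above, here is a minimal star-schema sketch: one fact table joined to a dimension table and aggregated by a dimension attribute. It uses SQLite purely for illustration (the role itself targets Databricks SQL/Delta Lake), and all table and column names are hypothetical.

```python
import sqlite3

# Minimal star schema: a fact table (measures) keyed to a dimension table (attributes).
# Names like dim_product / fact_sales are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    amount      REAL
);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO fact_sales  VALUES (10, 1, 3, 30.0), (11, 2, 1, 15.0), (12, 1, 2, 20.0);
""")

# The core dimensional query shape: join fact to dimension, group by an attribute,
# and aggregate the measures.
rows = cur.execute("""
    SELECT p.product_name, SUM(f.quantity) AS units, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.product_name
    ORDER BY p.product_name
""").fetchall()
print(rows)  # [('Gadget', 1, 15.0), ('Widget', 5, 50.0)]
conn.close()
```

In a snowflake schema, `dim_product` would itself be normalized further (e.g. `category` moved to its own table); in day-to-day work the same query shape is expressed over Delta tables via PySpark or Databricks SQL.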