Senior Data Engineer - Databricks
This role requires eight years of data engineering experience, two to three years of hands-on experience with the Databricks platform, and a proven track record of refactoring legacy code to modern frameworks.
Requirements
- Strong foundation in data engineering principles, ETL/ELT processes, and data pipeline design patterns
- Proven hands-on experience developing data pipelines using PySpark
- Practical experience with Databricks workspace, cluster management, notebooks, and job orchestration
- Knowledge of Databricks Workspace AI Agent capabilities and how to integrate them into data workflows
- Experience implementing data models including dimensional modeling, data vault, or lakehouse architectures
- Understanding of Delta Lake features including ACID transactions, schema evolution, and optimization techniques
- SQL proficiency for data querying and transformation
- Experience with cloud platforms (Azure, AWS, or GCP)
- Understanding of data governance and security best practices
- Knowledge of streaming data processing (Structured Streaming)
- Familiarity with DevOps practices and CI/CD pipelines
- Experience with version control systems (Git)
- Understanding of data quality frameworks and testing methodologies
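As an illustrative sketch of the Delta Lake skills listed above (ACID transactions, schema evolution, and optimization), the following Spark SQL fragment shows the kind of operations this role involves. The table names (`silver.orders`, `bronze.orders_updates`) and columns (`order_id`, `customer_id`) are hypothetical:

```sql
-- Upsert change records atomically; the MERGE runs as a single ACID transaction
MERGE INTO silver.orders AS target
USING bronze.orders_updates AS source
  ON target.order_id = source.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Schema evolution: let new columns arriving in the source be added automatically
SET spark.databricks.delta.schema.autoMerge.enabled = true;

-- Optimization: compact small files and co-locate a frequently filtered column
OPTIMIZE silver.orders ZORDER BY (customer_id);
```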
Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Generous paid time off
- Professional development opportunities