We are looking for experienced Data Engineers to build and optimise scalable data pipelines and data models using modern data engineering practices.
Requirements
- Strong hands-on experience with PySpark and DataFrames
- Advanced proficiency in SQL
- Proven experience working with Google Cloud Platform (GCP)
- Strong background in large-scale data engineering and data modelling
- Experience designing and delivering enterprise-scale data platforms
- Hands-on experience processing and analysing large datasets
- Demonstrated experience delivering data engineering and business intelligence solutions
Benefits
- Competitive salary and benefits
- Hybrid work model
- Opportunity to work with cutting-edge technologies
- Collaborative team environment
- Strong focus on professional growth and career development