Responsibilities
- Design, implement, and maintain scalable data pipelines and ETL processes
- Create and manage APIs; implement backend APIs and work with databases to support applications
- Collaborate with cross-functional teams to build scalable data-driven products
- Lead data modelling efforts and implement best practices for data governance
- Work in an Agile environment that practises Continuous Integration and Delivery
Requirements
- At least 3 years of experience as a Data Engineer
- Strong proficiency in programming languages, particularly Python, PySpark, and SQL dialects
- Experience working with structured, semi-structured, and unstructured data
- Experience with AWS ETL and orchestration tools
- Extensive knowledge of data modelling, data access, and data storage infrastructure
- Knowledge of open table formats
- Familiarity with data mesh principles, domain ownership, data product thinking and federated governance
- Familiarity with Big Data technologies and cloud services
- Solid understanding of data architecture concepts, data lakes, and data marts
- Exceptional analytical and problem-solving skills
- Excellent communication skills
- Ability to mentor and guide team members
Benefits
- Learning culture
- Annual Leave Benefits with additional perks such as Family Care and Birthday Leave