As an AWS Data Engineer, you will design, develop, and maintain scalable data pipelines on AWS, working with technical analysts, client stakeholders, data scientists, and team members to ensure data quality and integrity.
Requirements
- 10+ years’ experience with a core data engineering skillset spanning AWS-native services (AWS Glue, S3, Redshift) along with Python and Snowflake (a minimal example of this kind of work appears after this list).
- Experience designing and developing robust, scalable data pipelines using AWS-native services.
- Proficiency with Snowflake for data transformations, ETL pipeline optimization, and scalable data processing.
- Experience with both streaming and batch data pipeline architectures.
- Familiarity with DataOps concepts and tooling, including source control and setting up CI/CD pipelines on AWS.
- Hands-on experience with Databricks and a willingness to grow that capability.
- Experience with data engineering and storage solutions (AWS Glue, EMR, Lambda, Redshift, S3).
- Strong problem-solving and analytical skills.
- Knowledge of Dataiku is required.
- Graduate/Post-Graduate degree in Computer Science or a related field.
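To give a sense of the pipeline work described above, here is a minimal sketch of an AWS Glue PySpark job that reads a Data Catalog table backed by S3, applies a light transformation, and writes partitioned Parquet back to S3. The database, table, column, and bucket names are hypothetical placeholders, not project specifics.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup: resolve job arguments and create contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events registered in the Glue Data Catalog (backed by S3).
# "raw_db" and "events" are placeholder names.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",
    table_name="events",
)

# Light cleanup with the Spark DataFrame API: drop rows missing the key
# column and standardize a column name.
cleaned = (
    source.toDF()
    .dropna(subset=["event_id"])
    .withColumnRenamed("ts", "event_timestamp")
)

# Write curated, partitioned Parquet back to S3
# ("curated-bucket" is a placeholder).
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://curated-bucket/events/")
)

job.commit()
```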