We are seeking an AWS Data Engineer to design, develop, and maintain scalable data pipelines on AWS. You will work closely with technical analysts, client stakeholders, data scientists, and other team members to ensure data quality and integrity while optimizing data storage solutions for performance and cost-efficiency.
Requirements
- 10+ years' experience with a core data engineering skillset built on AWS-native technologies
- Proficiency with Snowflake for data transformations, ETL pipeline optimization, and scalable data processing
- Experience with streaming and batch data pipeline/engineering architectures
- Familiarity with DataOps concepts and tooling for source control and setting up CI/CD pipelines on AWS
- Hands-on experience with Databricks and a willingness to grow capabilities
- Experience with data engineering and storage solutions (AWS Glue, EMR, Lambda, Redshift, S3)
- Strong problem-solving and analytical skills
- Knowledge of Dataiku
- Graduate/Post-Graduate degree in Computer Science or a related field