We are looking for an experienced AWS Data Engineer to design, build, and optimize scalable data solutions on AWS.
Responsibilities
- Design, develop, and maintain scalable ETL pipelines using AWS services such as Glue, Lambda, Kinesis, Step Functions, and EMR
- Create and manage AWS Glue crawlers and jobs for automated data ingestion and cataloging across structured and unstructured data sources
- Build and optimize data workflows using Apache Airflow and PySpark
- Design and manage data warehouse solutions using Amazon Redshift, including performance tuning and query optimization
- Develop and maintain data models, data design frameworks, and source-to-target mappings (STTM)
- Enable data consumption for analytics and reporting tools such as Amazon QuickSight, SageMaker, and other BI platforms
- Work with Amazon S3, RDS, and other AWS storage services to manage large-scale data efficiently
- Ensure data quality through automated testing, code coverage, and validation processes
- Support UAT, deployment, and go-live activities for data solutions
- Monitor and optimize cluster performance and resource utilization
- Implement security and governance practices using AWS IAM and CloudTrail
- Collaborate with teams using version control systems like Git or SVN for code management