Responsibilities
- Design and manage scalable data pipelines
- Develop and enhance ETL/ELT workflows
- Conduct data exploration and analysis
- Collaborate with teams to shape data architecture
Requirements
- 4 years of experience in Data Analytics or Data Engineering roles
- Expertise in SQL and NoSQL databases
- Advanced proficiency in Python and PySpark
- Experience building and managing ETL pipelines with orchestration tools such as Airflow
- Strong knowledge of data modeling concepts and techniques for performance optimization
- Hands-on experience with AWS data services such as S3, EMR, Redshift, Athena, and RDS