The Data Engineer will develop and maintain ETL/ELT pipelines, optimize data storage and retrieval, collaborate with data teams, and ensure data quality and security. The role requires expertise in Databricks, AWS, and the broader data engineering ecosystem, along with 3-5 years of experience in data engineering or software development.
Requirements
- 3-5 years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment
- Expertise in Databricks
- Expertise in designing and building real-time data ingestion pipelines in AWS
- Expertise in CloudFormation
- Expertise in developing GitHub Actions workflows and integrating them with AWS
- Expertise using boto3 APIs for Lambda, S3, Glue, Glue Crawlers, and DataZone (preferred)
- Expertise in building APIs using AWS services
- In-depth knowledge of and hands-on experience with AWS Glue services and the AWS data engineering ecosystem
- Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.), Cloudera Data Platform, or Tableau is a plus
Benefits
- Competitive benefits
- Services and programs that provide employees with the resources to pursue their goals, both at work and in their personal lives