Our client provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques, and a consultative approach.
Requirements
- Develop and optimize ETL pipelines from various data sources using Databricks on cloud platforms (AWS, Azure, etc.)
- Experience implementing standardized pipelines with automated testing, Airflow scheduling, Azure DevOps for CI/CD, Terraform for infrastructure as code, and Splunk for monitoring
- Continuously improve systems through performance enhancements and cost reductions in computing and storage
- Use Spark Structured Streaming for real-time data processing and integrate data outputs with REST APIs
- Lead data engineering projects to implement and manage data-driven communication systems
- Experience with Scrum and Agile methodologies: coordinating global delivery teams, running scrum ceremonies, managing backlog items, and handling escalations
- Integrate data across different systems and platforms
- Strong verbal and written communication skills for managing client discussions
Benefits
- Generous Paid Time Off
- 401(k) Matching
- Retirement Plan
- Four Day Work Week
- Generous Parental Leave
- Tuition Reimbursement
- Relocation Assistance