Acuity Inc. is seeking a highly skilled Data Engineer to join our Engineering Team, helping drive the design and delivery of AWS cloud-scale data platforms for federal clients.
Responsibilities
- Build and maintain scalable PySpark-based data pipelines in Databricks notebooks
- Design and implement Delta Lake tables
- Develop ETL and ELT workflows that integrate multiple source systems into a centralized, query-optimized data warehouse architecture
- Collaborate with data architects and engineers to implement cloud-native data solutions on AWS
- Optimize pipeline performance through intelligent partitioning, caching, broadcast joins, and adaptive query execution (AQE) tuning
- Deploy and version data engineering assets using Git-integrated development workflows
- Monitor pipeline health, job execution, and cluster utilization using native Databricks tools and AWS CloudWatch
- Conduct technical discovery and mapping of legacy source systems
- Implement governance practices including metadata tagging, data quality validation, audit logging, and lineage tracking
- Support ad hoc data access requests, develop reusable data assets, and maintain shared notebooks
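For candidates less familiar with the governance practices listed above, the following is a minimal, library-free Python sketch of what data quality validation combined with audit logging can look like inside a pipeline. The field names and validation rules are hypothetical, chosen only for illustration; in practice these checks would run against Spark DataFrames rather than plain dictionaries.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical required schema for an ingested record (illustration only).
REQUIRED_FIELDS = {"record_id", "source_system", "ingested_at"}


def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality violations found in one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "record_id" in record and not str(record["record_id"]).strip():
        errors.append("record_id is empty")
    return errors


def run_quality_gate(records: list[dict]) -> dict:
    """Validate a batch and emit a structured audit-log entry for lineage."""
    failures = {}
    for index, record in enumerate(records):
        violations = validate_record(record)
        if violations:
            failures[index] = violations
    audit_entry = {
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "records_checked": len(records),
        "records_failed": len(failures),
        "failures": failures,
    }
    # Audit entries are logged as JSON so downstream tooling can parse them.
    logging.getLogger("pipeline.audit").info(json.dumps(audit_entry))
    return audit_entry
```

The same pattern, expressed as Delta Lake constraints or expectations in a Databricks job, gives each pipeline run a queryable record of what was checked and what failed.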
Benefits
- Competitive Compensation
- Personal Growth
- Recognition and Visibility
- Collaborative Culture
- Diversity and Inclusion