The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across various platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments.
Requirements
- 5+ years of Dynamics 365 ecosystem experience
- Strong PySpark development background
- Extensive experience with SQL
- Strong understanding of, and hands-on experience implementing and supporting, ETL processes, Data Lakes, and data engineering solutions
- Proficiency in using Azure Synapse Analytics
- Hands-on experience with PySpark for data processing and automation
- Ability to use VPNs, MFA, RDP, and jump boxes/jump hosts to operate within customers' secure environments
- Some experience with Azure DevOps CI/CD, IaC, and release pipelines
- Strong verbal and written communication, problem-solving, and analytical skills
- Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement
- Experience with Data Engineering in Microsoft Fabric
- Experience with Delta Lake and Azure data engineering services (e.g., ADLS, ADF, Synapse, AAD, Databricks)
Benefits
- Competitive salary
- Comprehensive benefits package
- Flexible work from anywhere
- Work-life balance
- Career growth and professional development opportunities
- Collaborative and inclusive work culture