We are seeking a Senior Data Engineer to help evolve and enhance our data platform and capabilities. The role involves designing and building scalable, reliable, and secure data pipelines; partnering with the Data Science teams to build PySpark workloads that run at scale; and collaborating cross-functionally with engineers, analysts, and data scientists to deliver impactful data solutions.
Requirements
- Significant experience delivering Python-based projects for data engineering.
- Experience building and tuning Spark pipelines that run at scale across large volumes of data.
- Strong hands-on experience with SQL and NoSQL databases (e.g. SQL Server, MongoDB, Cassandra).
- Proven experience with modern data warehousing and large-scale processing (e.g. Snowflake, dbt, BigQuery, ClickHouse).
- Proficient with data orchestration tools such as Airflow, Dagster, or Prefect.
- Experience with cloud platforms (Azure, AWS, or GCP) for data processing and storage.
- Practical experience with Kafka or equivalent event-driven architectures (e.g. AWS SQS, Azure Event Hubs, AWS Kinesis).
- Good understanding of data modelling for OLAP and OLTP workloads.
- Familiar with agile methodologies and CI/CD processes in the context of data solutions.
- Experienced as a senior team member on complex data engineering projects.
- Able to design and optimise data structures for high-volume systems.
- Experienced in assisting with data platform modernisation or migration to the cloud.
- Takes initiative to solve challenging data issues and drive projects forward.
Benefits
- Generous Paid Time Off
- Medical benefits
- Paid sick leave
- Dotdigital day
- Share reward
- Wellbeing reward
- Wellbeing Days
- Loyalty reward
- DEI commitment