We are looking to hire a Senior Data Engineer to join our Data team in London. This is an office-based role. As a Senior Data Engineer, you will play a key role in building, optimising, and maintaining the data pipelines, models, and infrastructure that power our classification systems, AI workflows, forecasting models, TikTok insights, and consumer intelligence products.
Responsibilities
- Design, develop, and maintain scalable data architectures across Snowflake, Databricks, and cloud environments.
- Lead schema design, dimensional modelling, and query optimisation to support high-performance analytics and AI workloads.
- Collaborate with senior data scientists to structure data for classification, forecasting, embedding generation, and multimodal workflows.
- Build robust ETL/ELT pipelines for ingestion, transformation, validation, and delivery.
- Develop resilient ingestion workflows for external APIs, including rate limiting, retries, schema drift handling, and monitoring.
- Implement pipelines using Snowpark, PySpark, and distributed compute environments.
- Apply Snowflake performance optimisation, cost governance, RBAC, and platform best practices.
- Support compute scaling across cloud platforms (AWS, GCP) and distributed cluster environments.
- Implement data validation frameworks (e.g., Great Expectations) and enforce data contracts.
- Build monitoring, alerting, and lineage visibility for pipelines (e.g., dbt tests, metadata tracking).
- Ensure high standards of data accuracy, completeness, and reliability.
- Build automated CI/CD workflows for data using GitHub Actions, CircleCI, or similar.
- Develop automated unit tests, integration tests, and quality gates for data pipelines.
- Partner with DataOps & Platform Engineering to improve observability, documentation, and deployment workflows.
- Build and maintain orchestration workflows using Airflow, Prefect, Dagster, or equivalent.
- Optimise DAGs for performance, reliability, and clarity, while ensuring operational excellence.
- Run, log, monitor, and debug workloads across VMs, Docker containers, and cloud compute environments.
- Improve reliability and maintainability of containerised workloads powering AI and data pipelines.
- Translate analytical and AI requirements into scalable engineering solutions.
- Document pipelines, decisions, runbooks, and architecture clearly and consistently.
- Mentor junior engineers and contribute to building team-wide engineering maturity.
Benefits
- 25 days of holiday per year - with an option to buy/sell up to 5 days
- Pension, Life Assurance and Income Protection
- Flexible benefits platform with options including Private Medical, Dental Insurance & Critical Illness
- Employee assistance programme, season ticket loans and cycle to work scheme
- Volunteering opportunities and charitable giving options
- Great learning and development opportunities