We are seeking a Data Engineer to join our team at OSBI. The successful candidate will be responsible for translating business requirements into Azure Cloud-based big data solutions for the data modernisation programme. They will work closely with other members of the Data Engineering team and with product owners to ensure the data strategy and platforms fulfil current needs and are aligned with the overall vision of the organisation.
Requirements
- 4-5 years of IT experience, preferably in Data and Analytics implementation
- Minimum 3 years of Azure data engineering delivery experience
- 2 years of implementation experience on Azure data engineering projects
- Experience in an MI / BI / Analytics environment (Kimball, lakehouse, data lake)
- Proficiency in Python/SQL/SparkSQL/PySpark
- Expert-level understanding of Azure Data Factory, Azure Synapse Analytics, Azure SQL, Azure Data Lake, Databricks, Purview and Azure App Service
- Strong technical skills, including Transact-SQL and relational database design
- Understanding of data governance and security best practices
- Experience with streaming and batch architectures (e.g., Kafka, Kafka Streams, Spark, Flink)
- Exposure to at least one distributed data processing framework, such as Spark (Core, Streaming, SQL), Storm or Flink
- Hands-on experience with a NoSQL solution such as MongoDB, Cassandra, HBase or Cosmos DB, or a cloud-based NoSQL offering such as DynamoDB or Bigtable
- Solid experience developing with CI/CD pipelines
- Knowledge of containerisation, orchestration and Kubernetes
- Experience developing data pipelines using API and streaming ingestion methods
- Experience working in a fast-paced agile environment