
Head of Data Engineering

Employer
Stott and May
Location
London
Salary
120000.00 - 125000.00 GBP Annual
Closing date
31 May 2022

Employer Sector
Technology, ICT & Telecoms
Contract Type
Permanent
Hours
Full Time
Travel
None
Job Type
Data Engineering

Head of Data Engineering
Salary - GBP125,000 per annum + Benefits
Location - London (Hybrid Working)

Role Purpose
This position needs someone with the energy and enthusiasm to manage our data development team, maintaining the existing data platform while delivering strategic new projects. The role demands a commitment to high-quality output within challenging timelines.

The Head of Data Engineering is responsible for managing our Data Engineering team, which covers data ingestion, transformation and publishing through the various layers of the Data Lake using open-source Big Data technologies such as Kafka, Spark and Scala, together with the associated AWS tooling. The team processes 0.5TB of data from internal and external data sources, and also prototypes new data sources, both internal and external.
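The ingest-transform-publish flow described above can be sketched in a few lines of Scala. This is an illustrative sketch only: plain Scala collections stand in for Spark DataFrames, and the record fields and layer names are assumptions, not part of the actual platform.

```scala
// Illustrative only: plain Scala collections stand in for Spark DataFrames;
// all field and layer names are hypothetical.
final case class RawEvent(id: String, payload: String)   // landing layer
final case class CleanEvent(id: String, payload: String) // published layer

// Ingest: drop malformed records (here, records with a blank id).
def ingest(raw: Seq[RawEvent]): Seq[RawEvent] =
  raw.filter(_.id.nonEmpty)

// Transform: normalise the payload before publishing to the next layer.
def transform(raw: Seq[RawEvent]): Seq[CleanEvent] =
  raw.map(e => CleanEvent(e.id, e.payload.trim.toLowerCase))

val landing   = Seq(RawEvent("a1", "  Click "), RawEvent("", "junk"))
val published = transform(ingest(landing))
// published == Seq(CleanEvent("a1", "click"))
```

In a real Spark job each function would be a DataFrame transformation, but the layered shape (ingest, transform, publish) is the same.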

You will also need to deliver a high-volume, performant environment, structured so that Data Science roles can run modelling and insight activity and core Data Warehouse users can build insight.

In addition to the data engineering team, the role holder will also manage a data quality and testing team.

Key Accountabilities
*Manage a team of 7 permanent data engineers, plus contractors, and a testing and quality team of 5
*Oversee the design and coding of data transformation and load processes through the various layers of the Data Lake, using open-source Big Data technologies such as Kafka, Spark, Scala and associated AWS ETL tooling
*Own all of the Data Content, manipulation, business rules and associated processes within the Data Lake
*Responsible for the overall code quality and simplicity in the system and leading the enforcement of quality within the data landscape
*Ensure the team works within an Agile framework, using the agile methodology tooling that controls our development and CI/CD release processes
*Build knowledge of all data resources within ND and prototype new data sources, both internal and external
*Build knowledge of all existing data manipulation within both data warehouse systems and the business marts.
*Monitor and profile performance and oversee the improvement work where appropriate.
*Championing the new Data Lake technology across the organisation to address a broad set of use cases across data science and data warehousing
*Researching, testing and recommending new layers or products in the Data Lake stack as these fast-moving technologies develop, keeping our business at the forefront and able to attract the best talent

Additional responsibilities
*Working as a peer of Head of Platform Engineering to ensure the data solution is designed and built in a consistent and efficient way
*Act in a professional and customer-focused manner at all times with both internal and external customers, facilitating continued and improved company reputation and success.
*Identify new risks to the business and apply appropriate mitigation actions, regularly reviewing the controls, processes and activities within your span of control.

Skills and Experience
Essential
*Proficiency with traditional database SQL technologies (Oracle, SQL Server, DB2)
*Experience with data solution BAU processes (ETL, table refresh etc.)
*Experience with integration of data from multiple data sources
*Proficiency with Big Data integration technologies such as Spark, Scala and Kafka
*Excellent Scala/Java engineer with knowledge of and experience with container frameworks
*Experience of CI/CD
*Good aptitude in multi-threading and concurrency concepts
*Familiarity with Linux shell scripting fundamentals
*Team leading experience and technical mentorship
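The multi-threading and concurrency aptitude listed above can be illustrated with the standard library's Futures. A minimal sketch, assuming nothing about the actual systems involved; the source names are hypothetical and the "fetch" merely simulates work:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Simulate fetching a data source; here the "result" is just the name length.
def fetch(source: String): Future[Int] = Future { source.length }

// Start both fetches before composing, so they run concurrently.
val fa = fetch("internal")
val fb = fetch("external")

val combined: Future[Int] =
  for {
    a <- fa
    b <- fb
  } yield a + b

val total = Await.result(combined, 5.seconds)
// total == 16
```

Note that starting both Futures before the for-comprehension is what makes them concurrent; composing `fetch` calls inside the comprehension would run them sequentially.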

Desirable
*Big Data Cloud technology exposure (AWS)
*Experience of relational database technologies (MS SQL Server, Redshift, Aurora)
*Experience of ETL technologies (eg AWS Glue, Informatica, Ab Initio)
*Excellent API and library design skills
*Writing high-performance, reliable and maintainable code.
*Good knowledge of NoSQL database structures, theories, principles and practices.
*Analytical and problem-solving skills applied to the Big Data domain
*Experience with cloud deployment
