This data engineering role within Marketing Data and Analytics is an opportunity to help shape the future of data engineering as you develop cloud-first solutions for streaming large datasets of marketing-related data.
You'll be simplifying the bank by developing innovative, data-driven solutions, helping it to be commercially successful through insight, and keeping our customers and the bank safe and secure.
Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank.
What you'll do
We'll look to you to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering. You'll also be responsible for sourcing, transforming, and storing data in an optimal fashion in Snowflake for use by analysts and the data science community.
We'll also expect you to be:
Automating data engineering pipelines by removing manual stages
Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
Delivering data engineering strategies to build a scalable data architecture and feature-rich customer datasets for data scientists
Developing solutions for streaming data ingestion and transformation in line with the streaming strategy
Creating services that consume and generate new datasets using Kafka, as sketched below
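To give a flavour of the streaming work this involves, here is a minimal, purely illustrative sketch of a consume-transform-produce service written in Python with the confluent-kafka client. The broker address, topic names, consumer group and transformation are hypothetical placeholders, not part of the role description.

```python
# Illustrative sketch only: read events from one Kafka topic, apply a simple
# transformation, and publish the result to another topic.
import json

from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "group.id": "marketing-enrichment",     # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})

consumer.subscribe(["raw-marketing-events"])  # hypothetical source topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Hypothetical transformation: keep only the fields downstream
        # consumers need and tag the record with its source.
        enriched = {
            "customer_id": event.get("customer_id"),
            "channel": event.get("channel"),
            "source": "raw-marketing-events",
        }
        producer.produce(
            "enriched-marketing-events",  # hypothetical target topic
            json.dumps(enriched).encode("utf-8"),
        )
        producer.poll(0)  # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```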
The skills you'll need
To be successful in this role, you'll need to be an intermediate-level programmer and data engineer with a qualification in Computer Science or Software Engineering. Alongside an engineering background, you'll be comfortable coding ingestion pipelines in Python, Scala or Java, have experience of ETL frameworks and orchestrators such as Airflow, and have demonstrable experience of working with data persistence and warehousing technologies in AWS or GCP.
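As an illustration of the kind of pipeline this experience covers, here is a minimal Airflow DAG sketch in Python. The DAG name, schedule, task bodies and the Snowflake destination are hypothetical placeholders rather than a description of the team's actual pipelines.

```python
# Illustrative sketch only: a daily extract-transform-load pipeline expressed
# as an Airflow DAG (Airflow 2.4+ style "schedule" argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Hypothetical extract step: pull the latest marketing events from a source system.
    ...

def transform(**context):
    # Hypothetical transform step: clean and conform the extracted records.
    ...

def load(**context):
    # Hypothetical load step: write the conformed records to the warehouse (e.g. Snowflake).
    ...

with DAG(
    dag_id="marketing_events_daily",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```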
You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as a proven track record in extracting value and features from large-scale data.
You'll also demonstrate:
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
The ability to code in one or more languages including Python, Scala or Java
Experience using cloud technologies, including Amazon Web Services and Google Cloud Platform
Experience using modern data warehouses, such as Snowflake, Redshift or BigQuery
A good understanding of modern code development practices
Good critical thinking and proven problem-solving abilities
It would also be ideal if you have experience of using Kafka, Airflow, Docker, Terraform, APIs and Unix scripting.