We are looking for data engineers who are excited to take responsibility, work as a team, and are not afraid to collaborate directly with colleagues in other disciplines such as Product, Design, Back-end, Front-end, Marketing and Sales.
Our data technology stack currently consists of Kafka, PostgreSQL, BigQuery, Spark, Airflow, Airbyte and GCP. We prefer to work with people who have seen (or want to learn) different technologies, as we believe that all tools are a means to an end and it's good to have some perspective.
In a nutshell, you will:
Build and maintain robust data pipelines in a distributed platform.
Launch fast and iterate often: within your first few weeks, you will bring code live that impacts many customers.
Go beyond and take end-to-end responsibility to deliver high-impact data projects.
Introduce new technologies that improve the scalability and stability of the platform.
Deploy and manage data infrastructure for both batch and real-time data pipelines.
Collaborate with colleagues in delivering projects, code reviews, and knowledge-sharing sessions.
Stay up-to-date with the latest developments in the data engineering world.
Here are a few practical examples of projects that you will work on:
Transforming our batch data pipelines into real-time data pipelines, leveraging technologies like Flink and Spark Streaming.
Building out the foundation that powers our (smart) data products, e.g. analytics, forecasting, recommendations.
Working together with our customers to build and improve our data products, e.g. when it comes to data ingestion and data extraction.
Building embedded data analytics products by working together with Design and Front-end.
What you'll bring
We are looking for data engineers of various seniorities who are excited to work on a state-of-the-art pricing and billing platform.
A strong technical background, e.g. in computer science or data engineering.
You have 1-3 years of relevant work experience, or more.
Extensive experience writing Python and SQL, preferably in production.
You have experience building distributed (batch) data pipelines, e.g. with BigQuery, Spark, Kafka and Snowflake.
You have worked with data orchestration tools like Airflow, Dagster, Prefect, etc.
Experience with data integration tools such as Airbyte.
Familiarity with DevOps tools like CI/CD in GitLab, Kubernetes and Terraform.
You are not afraid to learn about and dive into data analytics or data science related parts of a project.
You thrive in a collaborative environment and enjoy working with a diverse group of people with different areas of expertise.
You are willing to come work at our amazing Utrecht office at least 4 days a week (but preferably 5).
You live in the Netherlands.
You have a valid work permit for the Netherlands.