Data Engineer

Job description

Location: Copenhagen


Position overview


We are a diverse and inclusive team with roots in our Copenhagen office, though we have transitioned to a remote-first setup, with current team members living and working in Denmark, Sweden, and Lithuania. We recognize and value people as individuals while keeping in mind that we succeed as a team. We know that everyone has their strengths, weaknesses, and quirks, and that we all mess up from time to time, so it is important to be part of a highly trusting team where we can communicate honestly and openly. In our day-to-day operations, people are free to work fully remotely or in a hybrid fashion out of one of our office locations around Europe (Berlin, Copenhagen, and Vilnius). We operate with core working hours from 9:00 to 13:00 UTC and aim to bring the whole team together for a physical meetup every 3-4 months, typically in Copenhagen or Vilnius, so it is crucial that you are located in or close to Europe and can travel for up to 10 days per year.

The team is focused on the data delivery end of our business. Our responsibility starts after the data has been validated and ends when it has been made available at the API layer of our BI solutions or transformed into data files that our customers can consume in their various pricing and analytical systems. Currently this workload is split between our physical data center in Copenhagen and AWS. Part of our current and future journey is migrating all our solutions to AWS and reworking some of the original services to make sure they leverage the best offerings in the cloud.

Who we are looking for:

On the technical side, we are looking for a data engineer who is used to working with big data and knows how to build robust solutions that operate 24/7 in the cloud. A suitable candidate understands that there is no perfect solution and can make a conscious trade-off between performance, maintainability, and time to market when needed. Overall, though, we are looking for someone who is primarily a good cultural fit: given the choice between pure technical excellence and a great cultural fit, we will prioritize the fit. From that angle, we are looking for someone who can own their mistakes, admit the limits of their current knowledge, and is willing to learn and grow with the rest of us.



The team’s digital footprint is split between a data pipeline built on Kafka, Spark, and Scala, and endpoint solutions based primarily on C# and MSSQL.

For this position we are looking for someone who is competent in:

  • Scala
  • Kafka
  • Apache Spark


It would be an advantage if you have knowledge of:

  • Amazon Web Services (AWS)
  • Databricks
  • Azure DevOps
  • Terraform
  • The Elastic stack
  • C# / .NET
  • Kubernetes
  • SQL (MSSQL/Snowflake/Postgres)


What we offer:

  • The possibility to work fully remotely, hybrid, or on-site
  • Catered lunch if/when you do come to work at the office
  • Pension: 5% contribution from Infare and 2.5% own contribution
  • Health insurance through pension contributions
  • 5 extra days off
  • Opportunity to work with the latest tech and the greatest players in the industry
  • The chance to join our team and take part in an exciting journey, creating new products together with our customers


About the company

Infare is the leading provider of competitor air travel data, empowering airlines to make effective pricing decisions. For more than two decades, Infare has been providing competitor travel data to boost the pricing strategy effectiveness of, primarily, airline Revenue Management teams. Our mission is to help airlines deliver the right price, to the right person, at the right time, and in the right channel. Our very specific expertise and experience are unmatched in the industry, as is our ongoing investment in technology and innovation. For airlines seeking to use competitor travel data as a critical tool on the road to recovery and future success, Infare is the partner of choice.