Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion. In our first decade, Kraken has risen to become one of the largest, most successful and respected crypto exchanges on the planet.
We are changing the way the world thinks about finance, and our range of successful products is playing a critical role in the mainstream adoption of crypto assets. We continue to blaze a trail into new territory with the introduction of Kraken Bank, providing a more seamless integration between crypto and the traditional financial system. This makes us the first crypto company (ever) to be awarded a U.S. state banking charter.
Our diverse group of 2,000+ Krakenites are distributed all over the world as part of our 'remote first' culture, united by a shared passion for delighting customers, upholding crypto values and achieving our meaningful mission. We attract people who push themselves to improve, are radically transparent and think differently in order to unlock their potential.
Crypto is a rapidly evolving industry and we’re just getting started. We’re growing fast and you're invited to join the revolution!
About the Role
The data engineering team is responsible for designing and implementing scalable solutions that allow the company to make data-driven decisions quickly and accurately across several terabytes of data. The team maintains the company’s data warehouse and data lake, and you will be responsible for creating pipelines that move and process vast amounts of data into our different data products. The team deals with both batch and streamed data, and is split into responsibilities and areas matching both the engineer’s and Kraken’s interests.
- Build scalable and reliable data pipelines that collect, transform, load and curate data from internal systems
- Augment the data platform with data pipelines from select external systems
- Ensure high data quality for the pipelines you build and make them auditable
- Drive data systems to be as near real-time as possible
- Support the design and deployment of a distributed data store that will be the central source of truth across the organization
- Build data connections to the company's internal IT systems
- Develop, customize and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
- Evaluate new technologies and build prototypes for continuous improvement in data engineering
Requirements
- 5+ years of work experience in a relevant field (Data Engineer, DWH Engineer, Software Engineer, etc.)
- Experience with data warehouse technologies and relevant data modeling best practices (Presto, Druid, etc.)
- Experience building data pipelines/ETL and familiarity with design principles (Apache Airflow is a big plus!)
- Excellent SQL and data manipulation skills using common frameworks like Spark/PySpark, Pandas, or similar
- Proficiency in a major programming language (e.g. Scala, Python, Go)
- Experience gathering business requirements for data sourcing
Nice to have
- Experience working with cloud services (e.g. AWS, GCP) and/or Kubernetes
- Experience building and contributing to data lakes in the cloud
- Designing and writing CI/CD pipelines
- Working with petabytes of data
- Enjoying Dockerizing services
Location Tagging: #US #EU
We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.
Check out all our open roles at https://www.kraken.com/careers. We’re excited to see what you’re made of.
Learn more about us
Watch "Top 10 Qualities of Kraken - How to Grow a Decacorn Remixed"
Follow us on Twitter
Catch up on our blog
Follow us on LinkedIn
Deadline for applications: November 14, 2021.