Team Sava specializes in building outstanding teams of passionate, world-class professionals and top-tier software developers for growing global hi-tech companies.
Our partner is 365Scores, a global sports technology hub offering multiple solutions to millions of users worldwide. The company provides real-time results, updated stats, original content, aggregated information, customized news feeds, and more.
Job Summary
We are looking for a talented and experienced Data Engineer to join our team. In this role, you will be responsible for designing, implementing, and maintaining data pipelines and infrastructure. You will collaborate closely with cross-functional teams, including data scientists, analysts, and software engineers, to ensure the availability, accessibility, and quality of our partner's data assets. Your expertise in data processing, database technologies, and ETL processes will be essential in enabling effective data utilization and analysis.
We offer:
- Opportunity to work on a great product
- Remote / Pet-friendly office
- Paid team lunch (twice per month when the team is in the office)
- Equipment and technology provided to support remote work
- Flexible working hours, adjusted to your lifestyle
- Private healthcare insurance for you and your family
- Minimum 23 vacation days
- Team building events
- Internal referral fees between €300 and €1,000
- Know someone who could be a good fit? Let us know, and once the new hire starts working, collect a referral fee of €300
Team Sava takes an individual approach to each team member to make sure you feel as comfortable and supported as possible. Your opinion matters to us, and we make it our business to hear your voice and create an environment where you can do your best work.
Check out our Careers page for more information on how we work!
Responsibilities:
- Design, develop, and maintain efficient and scalable data pipelines that extract, transform, and load data from various sources into target systems.
- Monitor and optimize data pipelines for performance, reliability, and data quality.
- Implement and manage data storage solutions, including relational databases, data warehouses, and NoSQL databases.
- Implement data partitioning, indexing, and compression strategies for improved performance and scalability.
- Identify and resolve data quality issues, including data cleansing, standardization, and validation.
- Identify and implement performance optimization techniques to enhance data processing speed, query performance, and system scalability.
- Monitor and analyze data infrastructure and pipeline performance, identifying bottlenecks and implementing optimizations.
Requirements:
- 5+ years of experience as a Data Engineer
- Strong SQL skills and hands-on experience with both SQL and NoSQL databases
- Hands-on experience with Python or an equivalent programming language
- Experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake)
- Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance
- Experience with AWS/GCP cloud services
Bonus points for:
- Experience with Airflow and dbt
- Experience with MLOps
- Experience with development practices: Agile, CI/CD, TDD
Does this sound like an exciting challenge? Then these are the next steps:
Hiring process:
1. You let us know that you find this role interesting by sending us your CV
2. Our friendly HR team contacts you promptly to schedule a short call and exchange additional information
3. We organize a technical assessment and an interview where you get the opportunity to meet the team you would be working with
4. One final interview to talk about your experience, expectations, and plans
5. We agree that we are a great match, send you a job offer, and you accept
Looking forward to hearing from you!