Responsibilities:
► Data Pipeline Development
- Design, implement, and optimize end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Develop robust ETL (Extract, Transform, Load) processes to integrate data from diverse sources into our data ecosystem.
► Data Governance
- Implement data validation and quality checks to ensure accuracy and consistency.
- Catalog data products and source systems.
- Provide technical guidance on data governance to domain experts.
► Data Modeling and Architecture
- Design and maintain data models, database schemas, and database structures to support analytical and operational use cases.
- Optimize data storage and retrieval mechanisms for performance and scalability.
- Evaluate and implement data storage solutions, including relational databases, NoSQL databases, data lakes, and cloud storage services.
► Collaboration and Documentation
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver tailored solutions.
- Document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation.
Requirements:
► Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
► 3+ years of experience in data engineering, back-end software development, or related roles.
► Strong knowledge of database systems and data modeling techniques.
► High proficiency in SQL.
► Proficiency in programming languages commonly used in data engineering; Python is preferred.
► Experience with data orchestration tools (Airflow preferred) and familiarity with DataOps practices.
► Strong knowledge of software containerization technologies, such as Docker and Docker Compose.
► Excellent problem-solving skills and attention to detail.
► Strong communication and collaboration skills within a team-oriented environment.
► Ability to adapt to evolving technologies and business requirements.
► Strong proficiency in the English language.
Nice to have:
► Master’s degree with a specialization in Data Engineering.
► Familiarity with BI visualization solutions (e.g., Power BI, Superset, Metabase, Tableau).
► Familiarity with cloud platforms and services (e.g., Azure, AWS, Google Cloud Platform).
- Azure is preferred, particularly familiarity with services like Microsoft Fabric and Purview.
Benefits offered by KEBA:
► Great team spirit, nurtured as a core aspect of our company’s culture
► Flexible work time
► Compensatory time (no lost overtime hours/minutes)
► Hybrid work mode: combining home office and on-site work
► Ability to reduce workload for a defined time
► 22 vacation days (+additional days for loyal employees)
► Self-learning time (10% of total working time)
► Internal library, trainings, online courses, conferences participation
► Private health insurance
► FitPass membership available
► Covered parking fees
► Christmas gifts (vouchers) for KEBA employees’ children
► Fully equipped kitchen and dining/chill-out area in our new premises