We are looking for a skilled Data Engineer to join our team and help design, build, and scale robust data infrastructure that supports advanced analytics and AI-driven features. In this role, you will be responsible for developing reliable data pipelines, managing structured and unstructured data, and ensuring high data quality and availability across the platform.
This is a full-time, hybrid position based in Novi Sad, with flexibility for partial remote work.
Key Responsibilities
- Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming data from multiple sources
- Develop and optimize ETL/ELT workflows to support analytics, reporting, and downstream AI/ML use cases
- Model and structure data for efficient querying, analytics, and long-term storage
- Implement and maintain data warehousing and/or data lake solutions
- Ensure data quality and consistency through validation and monitoring across pipelines
- Optimize data performance, reliability, and cost-efficiency
- Collaborate closely with ML/AI engineers, backend developers, and product teams to support data-driven features
- Document data flows, schemas, and operational processes
- Troubleshoot and resolve data pipeline and performance issues in production environments
Required Qualifications
- Experience in data engineering and data modeling
- Experience with cloud platforms (AWS or Azure)
- Hands-on experience with ETL/ELT processes and workflow orchestration
- Proficiency in SQL and strong understanding of relational and analytical databases
- Experience with Python for data processing
- Knowledge of data warehousing concepts (e.g., star/snowflake schemas, partitioning, indexing)
- Familiarity with handling large-scale, heterogeneous datasets
- Strong analytical thinking and problem-solving skills
- Ability to work effectively in a team-oriented environment
- Bachelor’s degree in Computer Science, Engineering, or a related field
- English level: B2 or higher
Nice to Have / Preferred Experience
- Familiarity with tools such as Airflow, dbt, Spark, Kafka, or similar
- Experience working with data lakes and/or lakehouse architectures
- Understanding of how data pipelines support AI/ML workflows
- Exposure to unstructured or semi-structured data (documents, files, logs, etc.)
- Experience with data versioning, lineage, and monitoring
What We Offer
- Opportunity to work on complex, real-world data engineering challenges with direct impact on the product
- Modern and evolving tech stack with room to influence architectural and technical decisions
- Continuous education and professional development, including access to courses, training programs, conferences, and learning resources
- Support for developing both technical and soft skills, with mentorship and knowledge-sharing within the team
- Collaborative and engineering-driven work culture
- Hybrid work model with flexible work-from-home arrangements
- Compensation based on experience and expertise
Please send us your CV, including descriptions of relevant projects, the technologies you are proficient in, and the industries you have experience in.