Are you all about designing and developing efficient data solutions for Big Data projects? Look no further, as we are currently looking for a Data Engineer to work on one of our bigger FinTech projects, covering deployment, data warehousing, and real-time analytics solutions. Let’s dive into the details.
Your role and responsibilities
You will be working on an enterprise-grade FinTech project where you will be responsible for the design, development, and implementation of everything data related. This spans everything from initial data model creation to ETL processes, developing and optimizing Data Lakes, and real-time processing solutions. You will also get to develop complete analytical solutions, mine extensive data sets for insights, build scalable data products, and enable the overall Data Engineering and Analytics capability. In addition to this, you would be responsible for:
- Leveraging large volumes of data by employing a variety of languages (Python, Java, C#, Scala...) and tools (Azure SQL, ADF, Databricks, Spark, Snowflake) to tie systems together
- Building scalable data systems, ETL packages, algorithms and pipelines with high quality, efficiency and performance for large-scale processing systems
- Managing, optimizing, overseeing and monitoring data retrieval, storage, distribution and migration across different databases and servers
- Designing, developing and implementing data engineering and warehousing capabilities across multiple databases to support the business, consumer and analytical needs of the project
- Aligning Data architecture with business goals and requirements
- Recommending and, where appropriate, implementing ways to improve data reliability, efficiency, and quality
About you
You are someone with technical qualifications and a desire to learn and play with new data technologies. You know your way around the complexity of distributed Big Data systems, cloud technologies (Azure, AWS, GCP...) and serverless architecture. Using various programming languages to solve data processing problems is nothing new to you (Python, Java, C#, Scala, Shell...). You possess in-depth knowledge of distributed systems, SQL (and NoSQL) database design, data modeling and query performance tuning on SQL Server, MySQL, Azure SQL/Redshift, Postgres or similar platforms. You are comfortable working with ambiguity (e.g. imperfect data, loosely defined concepts, ideas, or goals) and translating it into more understandable outputs.
Bonus points for:
Proficiency in one or more of the following: Azure SQL, Snowflake, Azure Databricks, Apache Spark, SQL Data Warehouse, Scala, Azure ETL, Terraform, Azure Resource Manager Templates
What next?
If you’re ready to be a part of a team that works together to achieve both technical and personal greatness, be sure to hit apply. We will carefully select all the candidates for the next steps. For detailed info on our hiring process and what to expect, be sure to check out our Careers page on Klika.ba.
Questions?
Not sure if you’re the right person for this? Need more info about the project or about us? Don’t worry, I’m here for you :) Be sure to drop me a message whichever way you like: