Key responsibilities:
- Evaluate business needs and objectives
- Identify and analyze data sources and analytical requirements
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Cloudera tools as well as open source technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Collaborate closely with data scientists and the rest of IT
- Create and continuously update documentation for the solutions and IT systems you are in charge of
- Participate in developing roadmaps and implementation strategy for data science initiatives, including recommendation engines, predictive modeling, and machine learning
- Participate in Agile/Scrum routines
- Support the Data Architect in implementing data governance procedures
- Support Solution Architecture Design for new solutions and for changes of existing solutions
- Support functional and acceptance tests of new deliveries
- Identify and act on performance tuning opportunities
- Analyze data quality issues and participate in resolving them
For a good start, you will need:
- Excellent knowledge of programming and scripting languages such as Scala, Python, or Java
- Experience with Apache Spark and Spark Structured Streaming
- Experience with major big data technologies and frameworks, including but not limited to Hadoop, Hive, HBase, Oozie, Flume, ZooKeeper, NoSQL databases, Flink, Kafka, NiFi, and Spark MLlib
- Experience with some of the major relational database engines, such as Oracle, Teradata, or SQL Server; Oracle SQL and PL/SQL; Oracle ETL technology and tools (OWB, ODI)
- Experience building and optimizing 'big data' pipelines, architectures, and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Experience in client-driven large-scale implementation projects
- Data Science and Analytics experience is a plus (Machine Learning, Recommendation Engines, Search Personalization)
- Strong experience in applications design, development and maintenance
- Practical expertise in performance tuning and optimization, bottleneck problems analysis
- Solid technical expertise & troubleshooting skills
- Expertise in object-oriented analysis and design
- Experience with Git, JIRA, Jenkins
- Experience with Agile
- Experience with DWH and knowledge of ETL technology and tools (OWB, ODI) is a plus
You will enjoy:
- Working in a dynamic and friendly environment which allows you to grow and develop.
- Flexible working hours with the possibility of working remotely.
- 25 days off per year.
- Attractive salary and participation in a bonus reward system.
- Private health insurance and a paid yearly medical check-up.
- A mobile phone with unlimited internet and a Family Telenor tariff package.
- Recreation or health program.
- In-house training and development programs.
- Career development opportunities.
Please include your references in your CV, as a reference check is an integral part of our selection process.
We hope to meet you soon!