Job description
Corsearch is the premier provider of clearance and protection solutions for trademark and brand professionals. Corsearch has over 400 employees serving more than 5,000 customers on 5 continents, and we’re growing and changing…FAST!
Why work as a Data Engineer at Corsearch?
Data is at the core of our business; we consume data from hundreds of data sources, and building a reliable, future-proof data platform is an absolute priority for the company. We’re currently going through a big transformation, implementing top-notch technologies, and you can make a huge difference in what the future of data looks like at Corsearch.
What will you do?
You will be building a unified, source-agnostic data platform consisting of a data lake, operational data store, data warehouse, data APIs, search engines, and analytical tools – integrated together with batch ETL and real-time streaming capabilities. You will work very closely with our architects and product people on the design of the platform, including development of the ETL data pipelines and physical and logical database design.
In short, your responsibilities will include:
- Data Pipelines development for online and batch processing from hundreds of data sources using the unified ETL framework;
- ETL framework design and development for online and batch processing;
- Optimization of ETL: unification, shorter development time to market (TTM), increased code readability;
- Performing code reviews;
- Physical and logical database design;
- System/components/modules documentation;
- Developing data feeds for external data consumers;
- Adoption of new technologies to improve the performance of the data platform, indexing engines, and APIs.
How will your day in the office be?
We’re a truly global company, and you will be working with colleagues from multiple countries, so you can expect to have some video calls during your day. We’re very flexible and can find the best way to organise your work at Corsearch, including flexible hours, remote work, and the best equipment to develop best-in-class solutions.
Tech teams work in Scrum, using Jira for planning. There will be standups, planning sessions, and retros. We like to experiment with the processes we follow, and you can influence them as well. You will also be part of a code review group that supports the quality of our code.
Besides work, there is a lot of fun in the office: BBQs, Pizza Fridays, sports activities, etc. – everybody can find something interesting.
Requirements
What do we ask of you?
- 5+ years in enterprise data platforms implementation projects;
- Strong experience with proprietary and open-source ETL engines (Ab Initio, Informatica, SAP Data Services, Pentaho, Talend, Airflow, etc.);
- Exceptional SQL knowledge and optimization skills;
- Strong experience working with cloud-based solutions (AWS preferably);
- Strong, proven experience with performance database/ETL optimization;
- Strong scripting-language experience, preferably Python;
- Web scraping technologies experience is a plus;
- Self-starter capability, ability to acquire new skills autonomously;
- Demonstrably solid written and verbal communication skills to drive projects to successful conclusion;
- Ability to troubleshoot and solve complex problems independently;
- Ability to manage conflicting priorities effectively.
What do you do next?
If this sounds interesting, click apply and introduce yourself! We’d love to have a chat to get to know each other. We will be sure to keep you posted about the recruitment process every step of the way, which may also include an assessment. We very much look forward to hearing from you!
Deadline for applications: 14.03.2021.