At Synechron, we collaborate with high-profile clients across the multibillion-dollar insurance and financial industries worldwide. By combining innovative ideas and high-level engineering expertise with the latest technologies, we deliver tomorrow's enterprise software, AI/data, and R&D solutions today.
Work on a Big Data project, building a data lake for a London-based UK client, one of Europe's largest asset managers and a major global investor.
- Create and maintain the optimal data pipeline architecture required for extracting, transforming, and loading data from a wide variety of sources using SQL and Hadoop 'big data' technologies.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements including automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Explore ways to use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, data quality, and reliability.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
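To give a flavor of the pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. It is purely illustrative: the file format, field names, and cleaning rules are invented for the example, and a production pipeline would use the Hadoop/Spark tooling listed below rather than SQLite.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: trade records with inconsistent currency codes
# and occasional malformed rows. All names here are invented.
RAW_CSV = """trade_id,amount,currency
T1,1000.50,gbp
T2,250.00,USD
T3,,eur
"""

def extract(text):
    """Parse the raw CSV feed into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop malformed rows and normalise types and currency codes."""
    clean = []
    for row in rows:
        if not row["amount"]:  # skip rows with a missing amount
            continue
        clean.append((row["trade_id"], float(row["amount"]), row["currency"].upper()))
    return clean

def load(rows, conn):
    """Load cleaned rows into a relational store."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trades "
        "(trade_id TEXT PRIMARY KEY, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM trades").fetchone())  # → (2, 1250.5)
```

The same extract/transform/load split scales up directly: in the actual role, the extract step would read from HDFS or Kafka, the transform would run in Spark, and the load target would be Hive or a warehouse table.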
Required skills and experience:
- Advanced working knowledge of SQL and experience with relational databases.
- Understanding of ETL, data engineering, and business intelligence best practices.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Strong analytical skills for working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
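As an example of the "advanced working SQL knowledge" expected, a common task is selecting the latest record per key with a window function. The table and column names below are invented for illustration, and the query is run through SQLite only so the example is self-contained.

```python
import sqlite3

# Hypothetical positions table: several valuations per account over time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE positions (account TEXT, as_of DATE, value REAL);
INSERT INTO positions VALUES
  ('A1', '2024-01-01', 100.0),
  ('A1', '2024-02-01', 120.0),
  ('A2', '2024-01-15', 300.0);
""")

# Keep only the most recent row per account via ROW_NUMBER().
LATEST = """
SELECT account, as_of, value FROM (
  SELECT account, as_of, value,
         ROW_NUMBER() OVER (PARTITION BY account ORDER BY as_of DESC) AS rn
  FROM positions
)
WHERE rn = 1
ORDER BY account
"""

for row in conn.execute(LATEST):
    print(row)  # → ('A1', '2024-02-01', 120.0) then ('A2', '2024-01-15', 300.0)
```

The same pattern translates directly to Hive or Impala, where windowed deduplication over large fact tables is routine.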
Experience using at least one of the following:
- Big data tools: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases and tools: HDFS, Hive, Impala, HBase, PostgreSQL, MongoDB, etc.
- Design and implementation of BI solutions (DWH, ETL, monitoring, etc.).
- Data pipeline, orchestration, validation and workflow management tools: Oozie, Airflow, NiFi, HUE, etc.
- Stream-processing systems: Storm, Flume, Spark Streaming, etc.
- Object-oriented/functional/scripting languages: Python, Java, Scala, Bash, etc.
- Experience with fintech or insurtech is a plus.