This is a key role that will be accountable for the development and operations of the Data Platform, driving maximum value from data for business users in line with company best practices. You will work as part of a cross-functional agile delivery team, including front-end and back-end engineers, data scientists, product managers, and infrastructure engineers.
You will have the opportunity to work on complex problems, implementing high-performance solutions that will run on top of our cloud-based big data platform.
- Work as part of the Data Engineering team to uphold and evolve common standards and best practices, collaborating to ensure that our data solutions are complementary rather than duplicative.
- Build and maintain a high-performance, fault-tolerant, secure, and scalable data platform to support multiple data-solution use cases.
- Interface with other technology teams to design and implement robust products, services, and capabilities for the data platform, making use of infrastructure as code and automation.
- Build and support platforms that enable our data engineers and data scientists to build on our cloud-based big data platform.
- Create patterns, common ways of working, and standardized guidelines to ensure consistency across the organisation.
- Help to engineer our platform ingestion, data warehouse/data lake, and API strategies for our data management ecosystem.
- Work with data scientists to ensure scalability, resilience, and operational efficiency of ML models in production.
- Strong experience with cloud architecture and administration in production environments.
- Experience with object-oriented and functional design, coding, and testing patterns, as well as experience engineering software platforms and large-scale data infrastructure.
- Experience writing production-quality code in languages such as Java, Python, Bash, PowerShell, or Go.
- Experience building and maintaining distributed platforms that handle high volumes of data.
- Strong platform-level design, architecture, implementation, and troubleshooting skills.
- Good understanding of enterprise patterns and best practices applied to data engineering and data science use cases at scale.
- Good understanding of cloud storage, orchestration, and computing platforms (especially document/blob stores, Kafka, Airflow, Elastic, Spark).
- Good understanding of DevOps/DataOps in an Agile environment; familiarity with Jira and Confluence.
- Experience with Docker/Kubernetes would be beneficial.
- Expertise in databases (Postgres, MySQL, etc.).
- Solid experience with networking and security in cloud-based environments, specifically cloud services such as VPCs, Security Groups, NACLs, and IAM roles.
- Deep understanding of CI/CD using tools such as Jenkins, CircleCI, or Azure Pipelines, along with deep experience with source control tools such as Git.
- Great problem-solving skills, and the ability and confidence to hack your way out of tight corners.
- Ability to prioritize and meet deadlines.
- Conscientious, self-motivated, and goal-oriented.
- Excellent attention to detail and solid written and spoken English communication skills.
- Willingness and an enthusiastic attitude to work within existing processes/methodologies.
Someone who wants to be part of a meaningful mission and contribute to a good cause. You are looking for a company where you have the opportunity to follow your interests, to learn and grow, building and strengthening an organization others look up to.
Sound like you? Come and join us in this interesting and meaningful role, with a great team and a holistic purpose.
Sounds interesting? Please send your cover letter and CV electronically.
Deadline for applications: 20.10.2021.