AVA. Deep data for a safe world. With offices in Berlin, Novi Sad, London, and Singapore, AVA combines big data, distributed computing, pattern recognition, and artificial intelligence to provide time-critical information to its clients and partners, improving the safety of individuals, organisations, and businesses. All over the world, companies involved in logistics, transport, and tourism (to name a few) rely on the AVA ecosystem to enhance their business offering and protect what matters most to them.
The AVA platform leverages the economics of big data, cloud elasticity, Machine Learning (ML)/Artificial Intelligence (AI) automation, and permissible data sharing to turn information into business insights and address business and operational challenges.
Purpose of Role
This is a key role, accountable for the development and operation of the Data Platform, driving maximum value from data for business users in line with company best practices. You will work as part of a cross-functional agile delivery team that includes front-end and back-end engineers, data scientists, product managers, and infrastructure engineers.
You will have the opportunity to work on complex problems, implementing high-performance solutions that will run on top of our cloud-based big data platform.
Key Responsibilities
- Work as part of the Data Engineering team to uphold and evolve common standards and best practices, collaborating to ensure that our data solutions are complementary rather than duplicative,
- Build and maintain a high-performance, fault-tolerant, secure, and scalable data platform that supports multiple data solution use cases,
- Interface with other technology teams to design and implement robust products, services, and capabilities for the data platform, making use of infrastructure as code and automation,
- Build and support the tooling that enables our data engineers and data scientists to work effectively on our cloud-based big data platform,
- Create patterns, common ways of working, and standardised guidelines to ensure consistency across the organisation,
- Help to engineer our platform ingestion, data warehouse/data lake, and API strategies for our data management ecosystem,
- Work with data scientists to ensure scalability, resilience, and operational efficiency of ML Models in production.
Skills and Experience
- Strong experience in cloud architecture and administration in production environments,
- Expertise in databases (Postgres, MySQL, etc.),
- Solid experience with networking and security in cloud-based environments, specifically cloud services such as VPCs, Security Groups, NACLs, and IAM roles,
- Deep understanding of CI/CD using tools such as Jenkins, CircleCI, or Azure Pipelines, along with strong experience with source control tools such as Git,
- Experience with object-oriented and functional design, coding, and testing patterns as well as experience in engineering software platforms and large-scale data infrastructures,
- Experience writing production-quality code in Python, Bash, PowerShell, Go, etc.,
- Experience in building and maintaining distributed platforms to handle a high volume of data,
- Strong platform-level design, architecture, implementation, and troubleshooting skills,
- Good understanding of Enterprise patterns and best practices applied to data engineering and data science use cases at scale,
- Good understanding of Azure cloud storage, orchestration, and compute platforms (especially Azure Blob Storage, Kafka, Airflow, Elastic, Spark),
- Good understanding of DevOps/DataOps in an Agile environment, familiarity with Jira and Confluence,
- Experience with Docker/Kubernetes would be beneficial.
Personal Attributes
- Great problem-solving skills, and the ability and confidence to hack your way out of tight corners,
- Ability to prioritise and meet deadlines,
- Conscientious, self-motivated, and goal-oriented,
- Excellent attention to detail and solid written and verbal English communication skills,
- Willingness and an enthusiastic attitude to work within existing processes/methodologies.
Sound like you? Come and join us for an interesting and meaningful role, a great team, and a shared purpose.