At Netlify, we’re building a platform to empower digital designers and developers to build better, more elaborate web projects than ever before. We’re aiming to change the landscape of modern web development. Netlify currently serves more than 1,000,000 developers worldwide.
Netlify is a diverse group of incredible talent from all over the world. We’re ~44% women or non-binary, and our team spans more than one nationality for every four team members.
We recently raised $63M in Series C funding to bring forward the next generation of tooling for a more accessible web. Among our investors are Andreessen Horowitz, Kleiner Perkins, and EQT Ventures, as well as the founders of GitHub, Slack, Figma, and Yelp. This latest round brings Netlify’s total funding to $108M to date.
About the role:
As a Senior Data Engineer working on our critical data pipelines at Netlify, your contributions will have a huge impact on our burgeoning data function. You’ll design and build pipelines that support key analytical and business intelligence functions, enable decision-making around user-facing features, and empower your fellow team members to experiment with and develop on top of our data.
Some of the things you'll do:
- Help to evolve and scale our data platform, with an eye towards growth
- Work closely with the analytics and business intelligence teams, as well as stakeholders in finance, sales, marketing, and product, to understand the business's data needs and build processes that enable a better product and support growth decision-making
- Make architecture recommendations and implement them
- Collaborate with stakeholders to reduce scope, breaking work into smaller issues and code changes that favor iteration
- Improve, uphold, and teach standards for code maintainability and performance in the code you submit and review
- Ship medium to large features independently
- Help evolve our CI/CD strategy for our ETL jobs and pipelines (see the testing sketch after this list)
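As a purely illustrative example of the kind of check that CI/CD for ETL jobs often runs, here is a minimal pytest-style test of a transformation step. The function, column names, and sample data are hypothetical, not a description of Netlify's actual pipelines.

```python
# A minimal sketch of a CI test for an ETL transformation, assuming pandas and
# pytest are available. All names and data here are hypothetical.
import pandas as pd


def normalize_signups(raw: pd.DataFrame) -> pd.DataFrame:
    """Standardize email formatting and dedupe signups case-insensitively."""
    out = raw.copy()
    out["email"] = out["email"].str.strip().str.lower()
    return out.drop_duplicates(subset="email").reset_index(drop=True)


def test_normalize_signups_dedupes_case_insensitively():
    raw = pd.DataFrame(
        {"email": ["A@example.com", "a@example.com ", "b@example.com"]}
    )
    result = normalize_signups(raw)
    assert list(result["email"]) == ["a@example.com", "b@example.com"]
```

Small, fast tests like this can gate every pipeline change in CI before it ships.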
We're looking for someone who has:
- Experience developing production-grade ETL pipelines in Python
- Experience with data extraction, cleaning, and mining
- Strong comfort implementing Kimball-style dimensional models in analytical data warehouses such as Snowflake, BigQuery, and Redshift
- Hands-on experience with data orchestrators such as Airflow, Dagster, Prefect, or Luigi (Airflow preferred; see the DAG sketch after this list)
- Experience planning and executing system expansion as needed to support the company's growth and analytical needs
- Belief in writing documentation as part of writing code
- Excellent written communication skills that enable async work
- The ability to gather technical requirements and explain technical intricacies clearly
- A desire to keep up with advancements in data engineering practices
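To give a flavor of the day-to-day tooling, here is a minimal Airflow (2.x) DAG sketching a daily extract-transform-load pipeline. Every identifier in it (the DAG id, tasks, and callables) is a hypothetical placeholder rather than a description of Netlify's actual pipelines.

```python
# An illustrative Airflow 2.x DAG: a daily extract -> transform -> load
# pipeline. All identifiers are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from a source system")


def transform():
    print("clean and model the extracted records")


def load():
    print("write the modeled records to the warehouse")


with DAG(
    dag_id="example_events_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```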
Nice to haves:
- Experience implementing a DataOps framework
- Experience implementing Python best practices
- Familiarity with CI/CD in a data engineering/ops setting
- RESTful API development experience
- Some experience with R and/or Scala in production
- Experience with the Google Cloud data ecosystem, especially Cloud Dataproc or Apache Beam
Within 1 month, you’ll…
- Learn about our Dev and DataOps process and supporting tools.
- Have pairing sessions with some of the people you'll be working most closely with.
- Identify opportunities for improvements on existing pipelines and how things are organized in our data stores.
- Have started committing small quality-of-life improvements to pipelines as part of learning the shape of our data and how it flows through systems and processes.
- Be helping perform code reviews for new changes.
Within 2 months, you’ll…
- Feel comfortable spelunking in our data stack to answer your own questions.
- Be contributing to internal conversations on data organization and structure.
Within 3 months, you’ll…
- Be in the on-call rotation with the other data engineers and feel confident in your ability to handle the most common issues (assuming they can't yet be automated away!) for your critical pipelines.
- Have a solid understanding of our data peers' needs and skill sets so that we support them with data sources and schemas that enable them to work efficiently.
- Have rolled out your first few pipelines, supplying your team members with new, clean data sources.
- Have identified opportunities to improve our DataOps strategy in ways that increase observability and reproducibility and support our iteration speed.
Of everything we've ever built at Netlify, we are most proud of our team.
We believe that empowered, engaged colleagues do their best work. We’ll give you the tools you need to succeed and look to you for suggestions to improve not just your daily job, but every aspect of building a company. Whether you work from our main office in San Francisco or you are a remote employee, we’ll be working together a lot: pairing, collaborating, debating, and learning. We want you to succeed! About 63% of the company is remote across the globe; the rest are in our HQ in San Francisco.