Greenhouse is hiring a Data Engineer to join our team!
As a member of our Data Engineering team, you will drive the value of our data by working with stakeholders across the business to architect and deliver highly valuable data sets. We’re searching for someone who can own data quality, enhance user experience, and drive operational efficiency through their work as a Data Engineer. If you are a data enthusiast, energized by working with cutting-edge tools to build valuable data assets, this is the role for you.
Who will love this job
- A standout colleague – you thrive off of developing and supporting your peers
- A doer – you get things done, you move quickly, and you love working in a dynamic environment
- A product-minded engineer – you not only build elegant solutions, but understand the business impact of your work
- An excellent communicator – you have a talent for explaining technical processes concisely (even to non-engineers), and work well with cross-functional internal teams
What you’ll do
- Partner with engineers across our product teams as well as data practitioners (analysts and scientists) around the company
- Develop datasets with the customer in mind
- Focus on data quality and testing so we find bugs before our stakeholders do, avoiding bugs that erode trust in the data
- Build reporting schemas and data models that are performant and easy to use
- Leverage AI to accelerate your work and the impact of the solutions you build
- Build data pipelines and tools using Snowflake, dbt, Kubernetes, and Airflow, and help lay the foundation for our growing machine learning capabilities
- Work across our data stack to evolve our data products, from data pipelines to analytical modeling to machine learning tools
- Additional projects and responsibilities as business needs require
You should have
- Experience in software development and comfort working in Python and SQL
- Experience architecting data pipelines
- Experience in data modeling to allow stakeholders to make data-driven decisions
- A track record of building data products that have real business impact
- Experience developing and debugging ETLs that run across multiple systems and tools (Airflow, Argo, etc.)
- Experience working cross-functionally to deliver clean and high-value data products
- Your own unique talents! If you don’t meet 100% of the qualifications outlined above, tell us why you’d be a great fit for this role in your cover letter