Join Snowflake's Open Source Analytics team to build and evolve an open, interoperable data lake ecosystem. Work on complex and exciting challenges in enterprise data lake analytics, collaborating with the best minds in the open-source community.
Requirements
- 5+ years of experience designing and building scalable, distributed systems.
- Strong programming skills in Java, Scala, or C++ with an emphasis on performance and reliability.
- Deep understanding of distributed transaction processing, concurrency control, and high-performance query engines.
- Experience with open-source data lake formats (e.g., Apache Iceberg, Apache Parquet, Delta Lake) and the challenges associated with multi-engine interoperability.
- Experience building cloud-native services and working with public cloud providers like AWS, Azure, or GCP.
- A passion for open-source software and community engagement, particularly in the data ecosystem.
- Familiarity with data governance, security, and access control models in distributed data systems.
Benefits
- Be part of a pioneering effort to build the most open and interoperable data lake ecosystem in the industry.
- Work on a high-impact open-source project that solves real-world data challenges for enterprise customers across all industry verticals.
- Collaborate with some of the brightest minds in the data ecosystem, including core contributors and PMC members of Apache Iceberg and Apache Polaris (incubating).
- Have the opportunity to innovate in one of the fastest-growing and evolving areas in enterprise data, where you can make a direct impact on Snowflake’s growth and the broader open-source community.