As a Security Research Engineer at Maze, you'll be at the forefront of defining what constitutes real security risk in the age of AI-powered vulnerability detection. This is a unique opportunity to join the growing research team at a well-funded startup building at the intersection of generative AI and cybersecurity, where your expertise directly shapes how our AI models understand and prioritize cloud security threats.
Requirements
- 5+ years of hands-on security experience with proven vulnerability research background
- Deep knowledge of AWS security, cloud infrastructure vulnerabilities, container security, and cloud-native attack vectors
- Strong coding and scripting abilities (Python, Go, or similar) for automating research tasks, building validation tools, and creating proof-of-concept exploits
- Proven ability to analyze complex security data, distinguish between critical threats and false positives, and communicate technical findings to both technical and business audiences
- Experience translating security insights into product requirements, with ability to identify patterns across vulnerabilities that inform strategic product decisions
- Experience working with vulnerability databases, security advisory feeds, and threat intelligence sources to contextualize and prioritize security findings
- Strong communication skills and the ability to work effectively with security research peers, AI/ML teams, and product stakeholders, translating security domain knowledge into actionable improvements
Responsibilities
- Scale expert data labeling operations
- Drive product development through research insights
- Collaborate with the security research team
- Conduct deep vulnerability research
- Enhance AI model accuracy
- Perform technical investigation and analysis
- Leverage external security intelligence
- Contribute to thought leadership