Lead cutting-edge research on offense-defense dynamics of advanced AI systems, examining how specific features of AI technologies influence their propensity to either enhance societal safety or amplify risks. Apply interdisciplinary methods to develop quantitative and qualitative frameworks for analyzing how AI capabilities proliferate through society as either protective or harmful applications.
Requirements
- An M.Sc. or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field
- Demonstrated experience with complex systems modeling, risk assessment methodologies, or security analysis
- Strong understanding of dual-use technologies and the factors that influence whether capabilities favor offensive or defensive applications
- Experience in any of the following: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, systems dynamics modeling, network theory, complexity science, adversarial analysis, or technical standards development
- Ability to develop both qualitative frameworks and quantitative models that capture sociotechnical interactions, including comfort building semi-quantitative, semi-empirical models grounded in sound reasoning
- Record of relevant publications or research contributions related to technology risk, governance, or security
- Exceptional analytical thinking, with the ability to identify non-obvious path dependencies and feedback loops in complex systems
Benefits