
OpenAI is hiring a Head of Preparedness for its San Francisco office, offering a compensation package of $555,000 along with equity, as the company strengthens safeguards around increasingly powerful AI systems. The role sits within OpenAI’s Safety Systems team, which focuses on building evaluations, safeguards, and frameworks to ensure that advanced AI models behave reliably and safely when deployed in real-world environments.
The position reflects OpenAI’s growing emphasis on preparedness as frontier AI capabilities continue to evolve rapidly. According to the company, it has already invested in this work across multiple generations of advanced models, developing capability evaluations, threat models, and mitigation strategies. Preparedness, OpenAI notes, has now become a “major priority”, underscoring the need for structured and scalable approaches to managing emerging risks.
The incoming Head of Preparedness will be responsible for shaping and leading the company’s preparedness framework. This includes translating technical capability evaluations, threat modeling, and mitigation strategies into a “coherent, rigorous, and operationally scalable safety pipeline” that can be applied consistently across model development and deployment. The role is designed to ensure that safety insights do not remain theoretical but actively inform decisions around model launches, usage policies, and governance.
In addition to strategic leadership, the role involves managing a small, high-impact team and guiding work across critical risk domains, including cybersecurity and biological risks. The Head of Preparedness will ensure that evaluation outcomes directly influence decision-making at senior levels, particularly as OpenAI balances innovation with responsible deployment.
As AI risks continue to evolve, the Head of Preparedness will be expected to adapt and expand the preparedness framework over time. This will require close collaboration with research, engineering, product, policy, and governance teams, as well as coordination with external partners. By embedding preparedness deeply into product development and deployment processes, OpenAI aims to anticipate potential harms before they materialize at scale.
The role highlights how leading AI companies are increasingly investing in safety leadership alongside technical innovation. With a competitive compensation package and broad organisational influence, the Head of Preparedness position signals OpenAI’s intent to make safety and risk readiness a core pillar of its AI strategy, rather than a downstream consideration.
