
OpenAI is hiring a Head of Preparedness to confront growing concerns that advanced AI systems can uncover critical security flaws and affect mental health, CEO Sam Altman announced on X. The senior role, which offers $555,000 in compensation plus equity, reflects the company’s recognition that its models “are beginning to find critical vulnerabilities” in computer systems, raising urgent questions about safety, misuse, and oversight as AI capabilities accelerate.
Altman acknowledged the dual nature of recent progress, noting that while AI models are “now capable of many great things,” they are also creating “some real challenges” that require immediate and sustained attention. This marks a notable shift in OpenAI’s public posture, with the company speaking more openly about risks tied to cybersecurity, psychological well-being, and the broader societal impact of frontier AI systems.
The hiring announcement comes amid heightened concern over AI-enabled cyber threats. In recent months, AI tools have reportedly been used to scale and automate hacking operations. Last month, rival firm Anthropic disclosed that Chinese state-sponsored hackers had misused its Claude Code tool to target around 30 organisations worldwide, including technology firms, financial institutions, and government bodies, with little direct human involvement. These developments have intensified scrutiny of how powerful AI models could be exploited by malicious actors.
According to OpenAI’s job description, the Head of Preparedness will lead the company’s Preparedness Framework, with a focus on “frontier capabilities that create new risks of severe harm.” The role includes designing and overseeing capability evaluations, building threat models, and developing mitigations across high-risk domains such as cybersecurity, biosecurity, and self-improving AI systems.
Altman also directly addressed concerns about AI’s psychological effects, saying OpenAI had already seen “a preview of” potential mental health impacts in 2025. This acknowledgment comes against the backdrop of multiple lawsuits alleging links between ChatGPT use and teen suicides, as well as reports of chatbots reinforcing delusions or conspiracy thinking among vulnerable users.
Describing the demands of the role, Altman said the new hire must “help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm.” He characterised it as “a stressful job,” adding that the person will “jump into the deep end pretty much immediately.”
The position opened after significant turnover in OpenAI’s safety leadership during 2024 and 2025, including the reassignment of former Head of Preparedness Aleksander Madry to a research role in mid-2024, underscoring the challenges the company faces as it balances rapid innovation with mounting safety responsibilities.