
OpenAI has introduced a new public bug bounty program aimed specifically at identifying abuse and safety risks in its artificial intelligence systems, marking a significant expansion of its existing security efforts. Announced in March 2026, the initiative focuses on uncovering vulnerabilities that could lead to harmful outcomes, even if they do not qualify as traditional cybersecurity flaws.
The new program complements OpenAI’s existing security bug bounty framework by broadening the scope to include AI-specific risks such as misuse, unintended behaviours, and harmful outputs. Researchers, ethical hackers, and the wider security community are invited to submit findings related to how AI systems can be exploited or manipulated in real-world scenarios.
Key areas of focus include emerging threats such as prompt injection attacks and data exfiltration, where malicious inputs can manipulate AI systems into leaking sensitive information or performing unintended actions. The program also targets vulnerabilities in “agentic” AI systems—tools that can act on behalf of users—where improper safeguards could result in large-scale harmful actions or unauthorized operations.
In addition to these risks, the initiative covers issues related to platform integrity and exposure of proprietary data. Researchers can report flaws that allow bypassing account protections, manipulating trust systems, or revealing confidential information generated by AI models. OpenAI noted that submissions demonstrating clear paths to user harm and actionable fixes would be eligible for rewards.
The program is hosted on Bugcrowd and follows similar operational guidelines as the company’s security bug bounty initiative, with dedicated teams reviewing and triaging submissions. High-severity reports that are reproducible and well-documented may receive rewards of up to $7,500, although final decisions remain at the company’s discretion.
OpenAI emphasized that the move reflects the evolving nature of risks in artificial intelligence, where threats are no longer limited to software vulnerabilities but also include behavioural and misuse-related challenges. As AI systems become more autonomous and widely deployed, the company aims to proactively identify and mitigate potential harms through collaboration with the global research community.
The launch of the safety-focused bug bounty program highlights a broader shift in the industry toward addressing AI-specific risks alongside traditional cybersecurity concerns. By formalizing a reporting channel for abuse and safety issues, OpenAI is signalling the increasing importance of responsible AI development and the need for continuous monitoring of emerging threats.




