
OpenAI has confirmed that hackers stole a limited amount of internal data in a recent supply-chain security incident involving malicious npm packages. The company stated that the breach affected only a small number of employee devices and did not impact ChatGPT user data, production systems, or core intellectual property.
According to reports, the incident stemmed from a supply-chain attack on TanStack, a widely used ecosystem of open-source npm libraries. Attackers reportedly compromised TanStack’s legitimate release pipeline and published dozens of malicious package versions within a short time window, exposing organizations that downloaded the affected packages.
OpenAI stated that the attackers successfully accessed two employee devices through the compromised packages. However, the company emphasized that its investigation found no evidence that customer data, production environments, or AI systems were accessed. OpenAI also said none of its proprietary AI models or intellectual property were stolen during the incident.
As part of its response, OpenAI reportedly rotated code-signing certificates and conducted a broader internal security review. The company also secured the affected systems and expanded monitoring around developer environments and software dependencies.
The incident has renewed concerns about software supply-chain attacks, where hackers compromise trusted third-party tools or open-source libraries to infiltrate larger organizations. Security experts warn that AI companies are increasingly attractive targets because of their access to sensitive infrastructure, proprietary models, enterprise integrations, and large developer ecosystems.
Researchers noted that supply-chain attacks have become more sophisticated in recent years, with hackers increasingly targeting package repositories such as npm and PyPI to distribute malicious code. Similar attacks have previously affected major technology companies, cloud platforms, and enterprise software providers.
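Attacks of this kind succeed because many projects use floating semver ranges that automatically pull in the newest published version of a dependency, including a freshly poisoned one. As a minimal sketch of one common mitigation, dependency pinning, the hypothetical helper below (not a tool mentioned in the incident reports) flags entries in a package.json dependency map whose version ranges allow silent upgrades:

```python
import json

def find_floating_ranges(package_json: str) -> dict:
    """Return dependencies whose version ranges allow automatic upgrades.

    Ranges starting with '^' or '~', or set to '*' / 'latest', can silently
    pull in a newly published (possibly malicious) package version, which is
    exactly how compromised releases spread in supply-chain attacks.
    """
    manifest = json.loads(package_json)
    deps = {**manifest.get("dependencies", {}),
            **manifest.get("devDependencies", {})}
    return {name: rng for name, rng in deps.items()
            if rng.startswith(("^", "~")) or rng in ("*", "latest")}

# Example manifest: one floating caret range, one exact pin.
sample = ('{"dependencies": {"@tanstack/react-query": "^5.0.0", '
          '"left-pad": "1.3.0"}}')
print(find_floating_ranges(sample))  # only the caret-ranged package is flagged
```

Pinning exact versions (or installing strictly from a lockfile, e.g. with `npm ci`) narrows the window in which a malicious release can be pulled in unnoticed.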
The incident also highlights broader cybersecurity concerns surrounding AI infrastructure and developer tooling. As AI companies expand coding agents, API integrations, and autonomous development systems, experts caution that vulnerabilities in software dependencies and developer environments could become major attack vectors for future cyber campaigns.
