OpenAI, the creator of ChatGPT, has significantly stepped up its internal security protocols amid concerns that competitors—particularly foreign firms—are trying to steal its technology. According to a report in the Financial Times, the company has implemented robust measures, including biometric fingerprint access in offices and data centers, stricter cybersecurity controls, and advanced access policies to safeguard its most sensitive AI work.
These measures were introduced following allegations that Chinese AI startup DeepSeek used “unauthorised model distillation” techniques to replicate OpenAI’s technology. OpenAI accuses DeepSeek of training a smaller model on the outputs of its larger, more expensive models, a process it likens to indirect copying. Although distillation is a standard machine-learning method, OpenAI’s terms of service prohibit using its outputs to build competing models.
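For readers unfamiliar with the underlying technique, here is a minimal knowledge-distillation sketch in PyTorch. The toy architectures, temperature, and random-data training loop are illustrative assumptions only, not anything specific to OpenAI’s or DeepSeek’s systems:

```python
# Minimal knowledge-distillation sketch (illustrative toy models only).
import torch
import torch.nn as nn
import torch.nn.functional as F

# A large "teacher" and a small "student"; sizes are arbitrary here.
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

for _ in range(100):  # toy loop over random inputs
    x = torch.randn(32, 64)
    with torch.no_grad():          # the teacher is only queried, never trained
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions: the student learns to
    # mimic the teacher's full output distribution, not just hard labels.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point is that the student never sees the teacher’s weights, only its outputs, which is why repeated queries against a model’s API can suffice to approximate it.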
New security updates at OpenAI include:
- Biometric access controls, requiring fingerprint scans for secure areas.
- A deny-by-default internet policy, which blocks connections between key systems and external networks unless each connection is explicitly approved (a minimal sketch of the idea follows this list).
- “Information tenting”, which compartmentalises access so only select teams can work on confidential projects like the o1 model (codenamed “Strawberry”).
- Enhanced physical and cyber protections at data centers, with hires from defence and cybersecurity backgrounds, including chief information security officer Dane Stuckey and board member, retired General Paul Nakasone.
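To make the deny-by-default posture concrete, here is a minimal Python sketch of an egress allowlist check. The hostnames and function name are hypothetical illustrations, not details from OpenAI’s actual infrastructure:

```python
# Minimal sketch of a deny-by-default egress policy: every outbound
# destination is refused unless it appears on an explicit allowlist.
# Hostnames below are hypothetical examples.
ALLOWED_HOSTS = {
    "internal-package-mirror.example",
    "auth.internal.example",
}

def egress_permitted(host: str) -> bool:
    """Return True only for explicitly approved hosts; deny everything else."""
    return host in ALLOWED_HOSTS

# An approved destination passes; anything unlisted is blocked by default.
print(egress_permitted("auth.internal.example"))   # True
print(egress_permitted("api.random-vendor.com"))   # False
```

The design choice is the inversion of the usual default: rather than blocking known-bad destinations, nothing leaves the network unless someone has affirmatively approved it.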
One OpenAI employee described the resulting culture of compartmentalisation: “You either had everything or nothing” when it came to access to confidential projects.
DeepSeek’s rapid emergence in January, offering a model comparable to ChatGPT and Google Gemini at significantly lower cost, prompted OpenAI to intensify these precautions. While distillation isn’t illegal, OpenAI argues that using its outputs in this way contravenes its API terms and amounts to intellectual property infringement.
This security hardening reflects a broader tech-world trend, as the global AI arms race intensifies and concerns over international IP theft grow. Notably, the U.S. government has warned of escalating espionage threats targeting sensitive AI technologies.