OpenAI Highlights Security and Commercial Risks Linked to DeepSeek

OpenAI has informed US lawmakers that Chinese AI firm DeepSeek has allegedly relied on distillation techniques to extract outputs from leading American AI systems in order to train its R1 chatbot. The concerns were outlined in a memo sent Thursday to the House Select Committee on China and reviewed by Bloomberg News.

In the document, OpenAI stated that DeepSeek had employed distillation methods as part of what it described as “ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs.” The company further noted that it had identified “new, obfuscated methods” designed to bypass safeguards intended to prevent misuse of its models’ outputs.

OpenAI began privately raising alarms shortly after DeepSeek launched its R1 model last year. At that time, it initiated an internal investigation with partner Microsoft Corp. to determine whether DeepSeek had accessed its data without authorization, according to earlier reporting by Bloomberg. Distillation, a process in which one AI system uses the outputs of another model as training data to replicate similar capabilities, has become a focal point of concern.
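To make the concept concrete, below is a minimal, hypothetical sketch of distillation using toy PyTorch networks. Everything here (the tiny "teacher" and "student" models, the temperature value, the random inputs) is an illustrative assumption rather than anything described in OpenAI's memo; in the scenario the memo alleges, the training signal would be text completions collected from a commercial API rather than raw logits, but the underlying idea is the same: the student learns from the teacher's outputs instead of ground-truth labels.

```python
# Illustrative sketch of knowledge distillation (not OpenAI's or DeepSeek's actual pipeline).
# A small "student" network is trained to match the output distribution of a larger "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))  # stand-in for a capable model
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))    # smaller model being trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees richer signal

for step in range(200):
    queries = torch.randn(32, 16)                 # stand-in for prompts sent to the teacher
    with torch.no_grad():
        teacher_logits = teacher(queries)         # teacher outputs collected as training data
    student_logits = student(queries)

    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```

In practice, the same pattern scales up: prompts are sent to a stronger model, its responses are logged, and the weaker model is fine-tuned on those responses, which is why enforcement focuses on detecting large-scale, automated extraction of outputs.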

According to the memo, the practice—largely associated with actors in China and occasionally Russia—has continued and grown more advanced despite enforcement measures aimed at users who violate OpenAI’s terms of service. The company cited activity observed on its platform as evidence of increasingly sophisticated attempts to extract value from its systems.

OpenAI also warned that widespread use of distillation could present commercial challenges for US-based AI developers. Since DeepSeek and several other Chinese models are offered without monthly subscription fees, American firms such as Anthropic PBC, which invest heavily in infrastructure and monetize premium services, could face competitive pressure. Such dynamics, OpenAI suggested, risk narrowing the technological advantage that the United States currently holds over China in artificial intelligence.

Beyond commercial implications, OpenAI raised national security concerns related to DeepSeek’s progress. The company pointed out that DeepSeek’s chatbot had censored responses on topics deemed sensitive by the Chinese government, including Taiwan and Tiananmen Square. OpenAI argued that when capabilities are replicated through distillation, safety guardrails embedded in original models may not transfer, potentially increasing the likelihood of misuse in sensitive domains such as biology or chemistry.
