OpenAI Warns of “Potentially Catastrophic” AI Risks Amid Rapid Progress Toward Scientific Discovery

OpenAI has issued one of its strongest public cautions about the future of artificial intelligence, warning that the technology is advancing much faster than society realises and edging closer to making genuine scientific discoveries. In a blog post published on November 6 and later shared by CEO Sam Altman on X, the company said that while AI’s progress brings immense opportunity, it also carries “potentially catastrophic” risks if global safety mechanisms fail to keep pace.

According to OpenAI, the public still views AI mainly as chatbots and search tools, but today’s systems already demonstrate cognitive abilities that rival top human minds in advanced reasoning tasks. The company said it now sees AI as “80% of the way to an AI researcher,” with early signs that models can generate new knowledge — a development that could transform domains like science, healthcare, and materials research.

“In 2026, we expect AI to be capable of making very small discoveries,” the post notes. “By 2028 and beyond, we are pretty confident we will have systems that can make more significant discoveries.”

OpenAI highlighted that progress in AI has accelerated at an unprecedented rate. The cost of achieving a given level of intelligence has dropped around 40-fold per year, enabling machines to accomplish in seconds what once took humans hours or days. Yet this speed of advancement is creating a widening gap between how most people use AI and what the technology is truly capable of — leaving society “largely unprepared for what comes next.”

The most serious warning centers on the advent of superintelligent systems — AI that can improve itself without human intervention. OpenAI emphasized that such systems must not be deployed until proven methods exist to ensure alignment, safety, and control. The company called for:

  • Shared safety standards across leading AI labs,
  • Public oversight and accountability, with light regulation for current models and tighter controls for advanced ones,
  • A global AI resilience ecosystem, similar to cybersecurity frameworks, and
  • Continuous worldwide reporting on AI’s real-world impact.

Despite the cautionary tone, OpenAI expressed optimism about AI’s transformative potential, describing it as a “foundational utility” that could bring “widely distributed abundance,” empowering people to live healthier, more fulfilling lives. “The north star should be helping empower people to achieve their goals,” the post concludes — underscoring the balance between innovation and responsibility as AI enters a new frontier.
