Gartner Predicts That by 2028 Misconfigured AI Will Shut Down National Critical Infrastructure in a G20 Country

Safe Override Mode in AI Systems Supporting National Critical Infrastructure Is Essential to Ensure Ultimate Human Control

Gartner, Inc., a business and technology insights company, predicts that by 2028, misconfigured AI in cyber-physical systems (CPS) will shut down national critical infrastructure in a G20 country.

Gartner defines CPS as engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans). CPS is the umbrella term encompassing operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), the industrial Internet of Things (IIoT), robots, drones and Industrie 4.0.

“The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” said Wam Voster, VP Analyst at Gartner. “A secure ‘kill-switch’ or override mode accessible only to authorized operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration.”

Misconfigured AI can autonomously shut down vital services, misinterpret sensor data or trigger unsafe actions. This can result in physical damage or large-scale service disruption, posing direct threats to public safety and economic stability by compromising control of key systems like power grids or manufacturing plants.

For example, modern power networks rely on AI for real-time balancing of generation and consumption. A misconfigured predictive model could misinterpret demand as instability, triggering unnecessary grid isolation or load shedding across entire regions or even countries.
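To make the "misplaced decimal" failure mode concrete, here is a minimal, purely illustrative sketch (not any real grid software; all names and values are hypothetical) of how a single misplaced decimal in a configured deviation threshold can turn a routine demand swing into a false instability signal that would trigger load shedding:

```python
# Hypothetical illustration: a demand-deviation check whose threshold
# has been misconfigured by a misplaced decimal.

NORMAL_DEVIATION_LIMIT = 0.15    # intended: flag only deviations above 15%
MISCONFIGURED_LIMIT = 0.015      # misplaced decimal: a 1.5% deviation now trips it


def should_shed_load(forecast_mw: float, actual_mw: float, limit: float) -> bool:
    """Flag 'instability' when actual demand deviates from forecast beyond limit."""
    deviation = abs(actual_mw - forecast_mw) / forecast_mw
    return deviation > limit


forecast, actual = 10_000.0, 10_500.0   # a routine 5% swing in demand

print(should_shed_load(forecast, actual, NORMAL_DEVIATION_LIMIT))  # False: no action
print(should_shed_load(forecast, actual, MISCONFIGURED_LIMIT))     # True: false alarm
```

With the intended limit the 5% swing is ignored; with the misconfigured limit the same reading is classified as instability, which is exactly the kind of small configuration change Gartner warns can cascade into region-wide disruption.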


“Modern AI models are so complex they often resemble ‘black boxes,’” said Voster. “Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed.”

To mitigate these risks, Gartner recommends that chief information security officers (CISOs):

  • Implement Safe Override Modes: For all critical infrastructure CPS, include a secure “kill-switch” or other override mechanism accessible only to authorized operators, so humans retain ultimate control even during full autonomy.
  • Develop Digital Twins: Build a full-scale digital twin of the systems supporting critical infrastructure so that updates and configuration changes can be realistically tested before deployment.
  • Mandate Real-Time Monitoring: Require real-time monitoring with rollback mechanisms for changes made to AI in CPS, and support the creation of national AI incident response teams.
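The override and rollback recommendations above can be sketched as follows. This is an illustrative toy, not a Gartner reference design or a real product API; the class and method names are assumptions made for the example. It shows the two properties the recommendations call for: an override ("kill switch") that only authorized operators can invoke, and configuration changes that are recorded so they can be rolled back.

```python
# Hypothetical sketch of a CPS controller with a safe override mode
# and rollback of configuration changes.
from dataclasses import dataclass, field


@dataclass
class CPSController:
    authorized_operators: set[str]
    autonomous: bool = True
    config: dict = field(default_factory=dict)
    _history: list = field(default_factory=list)

    def apply_config(self, change: dict) -> None:
        """Record prior values before applying a change, enabling rollback."""
        self._history.append({k: self.config.get(k) for k in change})
        self.config.update(change)

    def rollback(self) -> None:
        """Restore the configuration as it was before the last change."""
        if self._history:
            self.config.update(self._history.pop())

    def override(self, operator: str) -> bool:
        """'Kill switch': only authorized operators may force manual control."""
        if operator not in self.authorized_operators:
            return False
        self.autonomous = False
        return True


grid = CPSController(authorized_operators={"op-alice"},
                     config={"deviation_limit": 0.15})
grid.apply_config({"deviation_limit": 0.015})  # a bad change slips through
grid.rollback()                                # monitoring detects and reverts it
print(grid.config["deviation_limit"])          # 0.15: original value restored
print(grid.override("unknown-user"))           # False: unauthorized, refused
print(grid.override("op-alice"))               # True: human regains control
```

The key design point is that the override path bypasses the AI entirely and depends only on operator identity, so a misconfigured model cannot lock humans out of control.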