
The world may be running out of time to prepare for the risks posed by advanced artificial intelligence, according to David Dalrymple, a prominent AI safety researcher and programme director at the UK government-backed Advanced Research and Invention Agency (ARIA). In an interview with The Guardian, Dalrymple warned that AI development is accelerating at a pace that safety frameworks may not be able to match, raising serious concerns about reliability, control, and long-term stability. As he put it, the technology is moving “really fast”, and it cannot be assumed that these systems are reliable.
Dalrymple’s concerns centre on the emergence of AI systems capable of performing the full range of tasks humans use to operate in the world. “I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better,” he said. The implications, he added, are profound: “We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet.”
According to Dalrymple, the danger lies not only in rapid progress but in progress outpacing safety research. He warned that the potential outcome is “destabilisation of security and economy”, stressing that far more technical work is needed to understand and control the behaviour of increasingly capable AI systems. “I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective,” he said. “And it’s not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans.”
ARIA, while publicly funded, operates independently from the UK government, and Dalrymple’s work focuses on safeguarding AI use in critical infrastructure such as energy networks. He cautioned policymakers against assuming that advanced systems will naturally behave as intended. “We can’t assume these systems are reliable. The science to do that is just not likely to materialise in time given the economic pressure. So the next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides,” he said.
Dalrymple also warned that society is failing to grasp the scale of the transition ahead. “Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping. I am working to try to make things go better, but it’s very high risk and human civilisation is on the whole sleepwalking into this transition,” he said.
Looking ahead, he offered a stark prediction: by late 2026, AI systems could automate a full day’s worth of research and development work. This, he argued, would trigger “a further acceleration of capabilities”, as AI begins to meaningfully improve itself in areas such as mathematics and computer science, intensifying both its potential and its risks.
