
Starcloud has made history by training and running large language models (LLMs) in space for the first time, a milestone that could redefine the future of high-performance computing. The company’s Starcloud-1 satellite, equipped with an NVIDIA H100 GPU, successfully trained Andrej Karpathy’s nanoGPT on the works of Shakespeare and ran inference on Google DeepMind’s Gemma model. Founder Philip Johnston described the achievement as “the first LLM in space” and a crucial step toward tapping into “the near limitless energy of our Sun.”
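For readers unfamiliar with the workload involved, nanoGPT’s Shakespeare demo trains a small transformer to predict text one character at a time. The full model is beyond a short snippet, but the autoregressive idea it rests on can be illustrated with a toy character-bigram model. This is an illustration only, not Starcloud’s or Karpathy’s code, and the miniature corpus stands in for the full Shakespeare dataset:

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in corpus; the real demo trains on the full tinyshakespeare file.
corpus = (
    "To be, or not to be, that is the question: "
    "Whether 'tis nobler in the mind to suffer "
    "The slings and arrows of outrageous fortune."
)

# "Training": count character-bigram frequencies, the simplest language model.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample_next(ch, rng):
    """Sample the next character in proportion to observed bigram counts."""
    options = counts.get(ch)
    if not options:
        return rng.choice(corpus)  # fall back for characters with no successor
    chars, weights = zip(*options.items())
    return rng.choices(chars, weights=weights)[0]

def generate(prompt, length, seed=0):
    """Generate text one character at a time, like autoregressive inference."""
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(length):
        out.append(sample_next(out[-1], rng))
    return "".join(out)

print(generate("T", 80))
```

A real GPT replaces the bigram table with a transformer conditioned on a long context window, but the generation loop — predict a distribution over the next token, sample, append, repeat — is the same.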
The breakthrough underscores Starcloud’s ambition to pioneer orbital compute as a scalable, sustainable alternative to Earth-based data centres. The company, which is backed by NVIDIA and a graduate of Y Combinator and Google for Startups, argues that running AI systems in orbit can substantially reduce the environmental and energy footprint of traditional data infrastructure. Its long-term vision is a 5-gigawatt solar-powered orbital data centre stretching four kilometres, an installation it says will be cheaper, more compact, and significantly cleaner than comparable ground-based operations.
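A back-of-envelope check suggests the 5-gigawatt figure is plausible for an array of roughly that scale. The numbers below are assumptions, not Starcloud’s: a square array four kilometres on a side, the solar constant above the atmosphere (about 1,361 W/m²), and roughly 23% panel efficiency:

```python
# Rough sanity check on a multi-gigawatt orbital solar array.
# Assumed figures (not from Starcloud): solar constant ~1361 W/m^2 in orbit,
# ~23% panel efficiency, and a square array 4 km on a side.
SOLAR_CONSTANT_W_PER_M2 = 1361
PANEL_EFFICIENCY = 0.23

side_m = 4_000                      # 4 km side length
area_m2 = side_m ** 2               # 16 km^2 of collecting area
power_gw = area_m2 * SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY / 1e9
print(f"{power_gw:.1f} GW")
```

Under those assumptions the array delivers about 5 GW continuously, with no night-time or weather losses to average away, which is the core of the orbital-compute pitch.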
Behind the achievement is an engineering feat: making NVIDIA’s most advanced GPU operate in the harsh environment of low Earth orbit. CTO Adi Oltean said it required “a lot of innovation and hard work,” highlighting the technical complexity of deploying high-performance compute hardware in space, including thermal management, radiation shielding, and reliable power delivery from satellite-based solar arrays.
Starcloud’s progress comes as major tech companies accelerate their own push toward orbital computing. Google is developing Project Suncatcher, which CEO Sundar Pichai has called a “moonshot” aimed at running vast AI workloads off-planet. Elon Musk has signalled similarly aggressive ambitions, saying SpaceX’s upcoming Starlink V3 satellites will deliver the “lowest cost AI compute” within five years. Musk also suggested that Starship launches could deploy up to 500 gigawatts of solar-powered AI satellites annually; at that pace, every two years of launches would put more power-generating capacity into orbit for intelligence processing than the entire US economy consumes in electricity.
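The two-year comparison is easy to verify arithmetically. Taking the 500 GW/year figure attributed to Musk, and assuming (this number is not from the article) that average US electrical demand is roughly 460 GW, about 4,000 TWh per year:

```python
DEPLOY_RATE_GW_PER_YEAR = 500   # deployment rate attributed to Musk
US_AVG_POWER_GW = 460           # assumption: ~4,000 TWh/year average US draw

# Capacity placed in orbit over two years of launches at that rate.
deployed_in_two_years_gw = 2 * DEPLOY_RATE_GW_PER_YEAR
print(deployed_in_two_years_gw, "GW orbited vs", US_AVG_POWER_GW,
      "GW average US demand ->", deployed_in_two_years_gw > US_AVG_POWER_GW)
```

Two years of deployment at that rate would total 1,000 GW, comfortably above the assumed average US draw, so the article’s comparison holds under these figures.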
With Starcloud’s hardware now demonstrated in orbit, the race toward space-based AI compute is accelerating, hinting at a future in which large-scale model training draws on solar energy harvested off-planet rather than on constrained terrestrial grids.
