
Memories.ai, an emerging artificial intelligence startup, is working to address a critical limitation in physical AI systems by developing a “visual memory layer” that enables machines to store and recall what they see. The technology is designed to help wearable devices and robots move beyond real-time processing and gain the ability to retrieve past visual experiences, marking a significant advancement in how AI interacts with the physical world.
The company recently showcased its innovation in a collaboration with Nvidia at the GTC conference, demonstrating how its system transforms video data into structured, searchable memory. By leveraging advanced vision-language models and video analysis tools, Memories.ai aims to let machines treat visual inputs as long-term memory, much as humans recall past events and experiences.
The concept behind the platform originated from the founders’ work on smart glasses and wearable AI systems. During these projects, they identified a key gap—while devices could continuously capture visual data, there was no efficient way to revisit or interpret that information later. This realization led to the development of a system capable of converting continuous video streams into indexed, retrievable memory that can be accessed on demand.
At the core of the technology is a large visual memory model (LVMM), specifically designed for physical AI applications such as robotics and wearable devices. Unlike traditional AI models that rely heavily on text-based memory, this system focuses on processing complex visual inputs, considering factors such as motion, lighting variations, and real-world environments. The platform enables use cases such as locating previously seen objects, analysing past surroundings, and enhancing decision-making in autonomous systems.
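The idea of turning continuous video into indexed, retrievable memory can be illustrated with a toy sketch. The snippet below is not Memories.ai's implementation; it assumes a generic embedding-based design in which a hypothetical vision encoder turns each frame into a feature vector, entries are stored with timestamps, and past observations are recalled by cosine similarity. All names (`VisualMemory`, `MemoryEntry`, `recall`) are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    timestamp: float          # when the frame was observed
    embedding: list[float]    # feature vector from a hypothetical vision encoder
    label: str                # human-readable description of the observation

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class VisualMemory:
    """Toy visual memory layer: store frame embeddings, recall the closest past sightings."""

    def __init__(self) -> None:
        self.entries: list[MemoryEntry] = []

    def store(self, timestamp: float, embedding: list[float], label: str) -> None:
        self.entries.append(MemoryEntry(timestamp, embedding, label))

    def recall(self, query_embedding: list[float], k: int = 1) -> list[MemoryEntry]:
        """Return the k stored observations most similar to the query embedding."""
        ranked = sorted(self.entries,
                        key=lambda e: cosine(query_embedding, e.embedding),
                        reverse=True)
        return ranked[:k]

# Usage: index a few "frames", then ask where the keys were last seen.
memory = VisualMemory()
memory.store(0.0, [1.0, 0.0, 0.0], "keys on the kitchen counter")
memory.store(5.0, [0.0, 1.0, 0.0], "laptop on the desk")
memory.store(9.0, [0.9, 0.1, 0.0], "keys moved to the hallway table")

hits = memory.recall([1.0, 0.05, 0.0], k=2)
print([h.label for h in hits])
# → ['keys on the kitchen counter', 'keys moved to the hallway table']
```

A production system would replace the hand-written vectors with encoder outputs and the linear scan with an approximate nearest-neighbor index, but the store-then-retrieve loop is the same shape.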
Founded in 2024, Memories.ai has already secured $16 million in funding, including an $8 million seed round and an additional $8 million extension. The company is also collaborating with major technology players, including Qualcomm, to optimize its models for efficient on-device performance. These partnerships highlight increasing industry interest in embedding advanced memory capabilities directly into hardware systems.
The development comes at a time when artificial intelligence is rapidly expanding beyond digital applications into real-world environments. While conversational AI systems have made progress in retaining text-based context, visual memory remains a largely untapped area. Memories.ai's approach aims to bridge this gap, enabling machines not only to perceive their surroundings but also to build a persistent understanding of them over time.
As demand for wearable AI and robotics continues to rise, visual memory is expected to become a foundational capability for next-generation devices. By enabling machines to both see and remember, Memories.ai is positioning itself at the forefront of a new wave of AI innovation that could significantly reshape how intelligent systems operate in everyday environments.




