From open-source side project to enterprise customers processing billions of tokens weekly: a 20-year-old founder is tackling AI's memory problem with technology that adapts and forgets like the human brain.
What’s happening: Dhravya Shah, who turned 20 last month, has raised $3 million to build Supermemory, a memory infrastructure layer for AI applications. The Mumbai-born founder built his own vector database to solve what he calls one of AI’s hardest challenges: enabling models to retain context across multiple sessions.
Why this matters: As AI adoption accelerates globally, the inability of large language models to maintain long-term memory across sessions remains a critical limitation. Shah’s approach addresses a fundamental infrastructure gap that affects everything from chatbots to video editors, with hundreds of enterprises already building on the platform.
A teenager who started building a bookmarking tool in his university dorm has secured $3 million in funding from some of Silicon Valley’s most influential technology executives to solve one of artificial intelligence’s most persistent problems: memory.
Dhravya Shah, who turned 20 last month, announced the seed funding round for Supermemory, describing it as infrastructure that enables AI applications to remember and adapt like the human brain. The round was led by Susa Ventures, Browder Capital and SF1.vc, with backing from Google DeepMind Chief Scientist Jeff Dean, Cloudflare CTO Dane Knecht, Sentry founder David Cramer, and executives from OpenAI, Meta and Google.
“Memory is one of the hardest challenges in AI right now,” Shah said in the funding announcement. “I realized this when building the first version of supermemory, which was merely a bookmarking and notetaking tool I was building as a side-project in dorm two years ago when I was 18.”
From bookmarks to billions
Originally from Mumbai, Shah began building the initial version of Supermemory, then called Any Context, as part of a personal challenge to create something new each week. He released it as an open-source project on GitHub that allowed users to chat with their Twitter bookmarks.
The consumer application quickly gained traction, reaching 50,000 users and accumulating more than 10,000 stars on GitHub, making it one of the fastest-growing open-source projects in 2024. Users saved millions of items through the platform, and the project won multiple grants, including the buildspace grant.
“At scale, the consumer app ran into many issues, and to our surprise the infrastructure for ‘Memory’ for LLMs like this simply didn’t exist,” Shah explained in his announcement. “I had some experience in infrastructure and started sharing more details on how we were building the infrastructure behind the consumer app ourselves.”
Memory as infrastructure
At the time he was developing Supermemory, Shah was working on AI infrastructure at Cloudflare, where he contributed to patented work on making agents faster. He also worked at startups focused on memory solutions and created multiple consumer applications.
The experience reinforced his understanding of memory’s fundamental challenge. “It’s not just a search problem, it’s about really understanding the users and making their experience magical by contextualising the LLMs they talk to,” he wrote.
Interest from companies wanting to use Supermemory’s infrastructure for their own products prompted Shah to make a decisive shift. Many were prepared to pay immediately, and some offered contractual work for help implementing the open-source project. Shah decided to drop out of university, move to San Francisco full-time, and transform Supermemory into a commercial product.
“This is my life’s work,” Shah wrote. “I dropped out of college, moved to SF, and continued to build out the product as a solo founder.”
The commercial version of Supermemory functions as a universal memory API for AI applications. It builds a knowledge graph based on processed data and personalises context for users, supporting queries across different types of applications from writing tools to video editors.
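The article does not publish Supermemory's actual interface, but the pattern it describes (store user memories once, retrieve the relevant ones at query time, and contextualise an otherwise stateless LLM with them) can be sketched in a few lines. The `MemoryStore` class and its naive keyword-overlap scoring below are hypothetical illustrations, not Supermemory's implementation, which the article says uses a knowledge graph:

```python
# Hypothetical sketch of a cross-session memory layer for an LLM app.
# Scoring here is naive word overlap; a production system would use
# embeddings and a knowledge graph rather than this toy ranking.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    memories: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        """Persist a memory so later sessions can recall it."""
        self.memories.append(text)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored memories sharing the most words with the query."""
        query_words = set(query.lower().split())
        ranked = sorted(
            self.memories,
            key=lambda m: len(query_words & set(m.lower().split())),
            reverse=True,
        )
        return ranked[:k]


def build_prompt(store: MemoryStore, user_message: str) -> str:
    """Prepend retrieved context so a stateless model 'remembers' the user."""
    context = "\n".join(store.retrieve(user_message))
    return f"Relevant user context:\n{context}\n\nUser: {user_message}"


store = MemoryStore()
store.add("User prefers concise answers.")
store.add("User is editing a travel video about Mumbai.")
prompt = build_prompt(store, "Help me edit my travel video")
```

The key property is that the store lives outside any single chat session, which is what lets an application carry context across sessions regardless of which model provider answers the prompt.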
Building from scratch
Shah’s approach involved building core infrastructure components from the ground up. “I built my own vector DB, content parsers and an engine that works like the human brain,” he wrote in his announcement.
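Shah does not describe his vector database's internals, but the core mechanism of any such system, storing embeddings and returning nearest neighbours by cosine similarity, can be sketched as below. The `TinyVectorDB` class and the two-dimensional toy embeddings are illustrative assumptions; real systems add persistence, approximate-nearest-neighbour indexes and metadata filtering:

```python
# Minimal sketch of a vector database core: store (text, embedding) pairs
# and rank them by cosine similarity to a query embedding.
import math


class TinyVectorDB:
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str, embedding: list[float]) -> None:
        self.items.append((text, embedding))

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def query(self, embedding: list[float], k: int = 2) -> list[str]:
        ranked = sorted(
            self.items,
            key=lambda item: self._cosine(embedding, item[1]),
            reverse=True,
        )
        return [text for text, _ in ranked[:k]]


db = TinyVectorDB()
db.add("user likes jazz", [1.0, 0.0])
db.add("user lives in SF", [0.0, 1.0])
top = db.query([0.9, 0.1], k=1)  # closest in direction to the "jazz" vector
```

In practice the embeddings come from a model, and exhaustive sorting is replaced by an approximate index so queries stay fast at millions of items.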
The platform can ingest multiple data types, including files, documents, chats, projects, emails, PDFs and application data streams. It offers multimodal input support, allowing it to work across different types of AI applications. There is also a chatbot and notetaker feature that lets users add memories in text, add files or links, and connect to applications like Google Drive, OneDrive or Notion, along with a Chrome extension for adding notes from websites.
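Ingesting that many input types typically means normalising each one into a common record shape before indexing. The sketch below is a hypothetical illustration of that routing step (the `MemoryRecord` shape and stub parsers are assumptions, not Supermemory's pipeline):

```python
# Hypothetical sketch: route heterogeneous inputs (notes, links, PDFs)
# through per-type parsers into one common record shape for indexing.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class MemoryRecord:
    source: str       # e.g. "note", "url", "pdf"
    content: str      # extracted plain text
    created_at: str   # ISO-8601 timestamp


def ingest(source: str, raw: str) -> MemoryRecord:
    """Normalise one input into a MemoryRecord; parsers here are stubs."""
    parsers = {
        "note": lambda t: t.strip(),
        "url": lambda t: f"[link] {t}",  # a real system would fetch and parse
        "pdf": lambda t: t,              # a real system would run a PDF extractor
    }
    parser = parsers.get(source, lambda t: t)
    return MemoryRecord(source, parser(raw), datetime.now(timezone.utc).isoformat())


rec = ingest("note", "  Meeting notes: connect Google Drive sync  ")
```

Once everything shares one shape, the same retrieval and graph-building machinery can run over emails, chats and files alike.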
The infrastructure now serves hundreds of enterprises and builders, with some customers processing billions of tokens weekly. The company is working with various AI applications, including desktop assistants, video editors, search platforms and real estate tools, as well as a robotics company using it to retain visual memories captured by robots.
“Today, I am delighted that we have one of the best and fastest memory products in the world, with many hundreds of enterprises and builders building apps on top of supermemory,” Shah wrote. “And this is just the start.”
The vision ahead
Shah positions memory as a critical missing piece in the development of artificial general intelligence. He argues that whilst model providers are racing to build superintelligence with PhD-level knowledge and the ability to use tools, memory and adaptation remain underdeveloped.
“It’s increasingly obvious that the final big hill to climb to make intelligence truly feel human, the next exciting inflection point in AI, is memory and personalisation,” he wrote on the company website.
He emphasises that memory infrastructure must remain independent of specific model providers. “If Google releases the next best model this week, but you’re stuck with OpenAI because their API has memory, you would be locked in,” he explained. “Memory should be a universal right, not a moat.”
Shah argues that almost all early customers saw increases in app usage, customer satisfaction or revenue by making their experiences more personalised for users. “Users should not be locked into a chatbot because it knows everything about them. Because all chatbots can know everything about them. All of them work with supermemory.”
His long-term vision is ambitious. “Intelligence without memory is nothing but sophisticated randomness,” he wrote. “One day, when AGI is a thing and robots are walking around everywhere, they would need a memory as sophisticated as their intelligence. And it would be supermemory.”
The company is now hiring across engineering, research and product roles as it scales its infrastructure to serve growing demand from enterprises building AI applications that require persistent, contextual memory.