Welcome to Fully Distributed, a newsletter about AI, crypto, and other cutting edge technology.
If you’ve seen the 2011 thriller Limitless, you’ve probably wondered what your life would look like if you could unlock the full potential of your brain. In the movie, Bradley Cooper plays Eddie Morra, a struggling writer who takes a pill that lets him access “100% of his brain's capacity”, endowing him with superintelligence, perfect memory, and the ability to analyze minute details and information at incredible speed. Throughout the film, Eddie’s increased cognitive power enables him to vastly improve his lifestyle and amass significant wealth and power.
While that may be a bit far-fetched (we *technically* already use 100% of our brain anyway), the truth is that our brains do have all sorts of limitations and can often be overwhelmed by the sheer amount of information we’re deluged with on a daily basis.
But absent a magic pill, could we achieve the same result as Eddie with technology? Could recent advances in artificial intelligence endow us all with superpowers? That's where the idea of constructing a "second brain" popped into my head. I became curious if there was a way for us to work around our brain’s limitations so that we could become truly limitless.
In this blog post, I will explore the limitations of our existing ‘knowledge stack’ (starting with our own brains), and propose some interesting ways in which AI could help us get closer to Eddie Morra.
Let’s dive in.
Why our “knowledge stack” is flawed
I was browsing through my perfectly organized Notion folders the other day when I stumbled across a page with some startup ideas that I jotted down a few months ago. I got a bit annoyed at myself - I had completely forgotten I’d come up with these, and, more frustrating still, that I’d written them down in the first place! What’s the point of having all these perfectly structured folders with backward and forward links if I still can’t easily and readily access my ideas?
The reality is, our brain (“first brain”) is a great place for coming up with ideas, but not so much for holding them. Think of our brain as a vast library filled with endless shelves of information, but not all of the books are within reach. Our memory is limited by what we focus on and what we are able to process, and only a small portion of what we encounter is stored in a way that we can easily access later. To help us remember and recall information when needed, we rely on our "second brain" - our note-taking and other memory aids, which act as our mental librarian, helping us locate the right information and access it quickly.
Of course, the idea of outsourcing some of the workload from our “first brain” into a “second brain” is not a novel one. Note-taking has been an important part of human history and scientific development for millennia. Historically, notes were handwritten, but with recent technological advancements (e.g. computer, internet, mobile, cloud), an entire multibillion-dollar industry of “knowledge management software” has emerged. In fact, knowledge management has become such a fundamental skill amidst today’s flurry of information that someone created a popular $500 course that teaches you how to build a “second brain” using these exact tools. However, even for the most advanced users, several critical challenges remain with these knowledge management solutions:
They require ongoing maintenance and organization, which takes substantial cognitive effort and time. If you slip up, the system will quickly become disorganized and increasingly less useful.
They create unnecessary friction between insight and productivity - searching for the ‘right’ information is time-consuming and distracting (not to mention that you need to remember what to search for in the first place)
Each folder or page within the system is siloed, which makes it hard to draw connections between different ideas
When we write something down, we often do not immediately know how, when (if at all), or in what context we will revisit this idea in the future. This means that we have to preemptively guess the best way to store or file that idea in a way that would maximize its usefulness in the future. Given that this is incredibly hard to get right, we often never end up revisiting the things we store.
I hope that by now I have convinced you of the following:
We are living in an age of information overload
Our brains are not good for storing/remembering vast amounts of information
We increasingly rely on software tools to store and organize our knowledge in a more structured way
These tools have fundamental flaws that ultimately hinder our performance
Ok, now let’s explore how artificial intelligence can solve virtually all of these issues.
Building a real ‘Second Brain’ with Artificial Intelligence
Unless you have been living under a rock, you have probably heard of ChatGPT by now - a viral conversational AI assistant that has reached 100 million users in just under two months (likely becoming the fastest product to do so in history). Its success provides a few important takeaways as we consider how to build our second brain:
The conversational interface is incredibly user-friendly - we like to ask questions and receive direct answers.
Large Language Models (LLMs) perform really well at tasks such as comparing, contrasting, suggesting, and summarizing.
You can improve the LLM’s output by providing additional context/information about the task at hand in your prompt (input).
By all accounts, ChatGPT is a great starting point for a personal virtual assistant that has a vast general knowledge about the world (with some limitations) that can help you with a variety of different tasks. But what if we could augment this powerful tool with our proprietary knowledge? Could it become your on-demand, always-available brainstorming partner? Here are a few examples of what your ‘AI assistant’ could do for you:
Index and query all of your content across disparate platforms (e.g. Notion, Substack, Twitter combined) in a single conversational interface, making your entire corpus of knowledge much more easily and readily accessible
Execute a semantic search on your data that focuses on understanding the meaning and context behind your query (rather than just matching keywords), leading to vastly increased accuracy and relevancy of output
Convert unstructured language/notes into a self-sustaining knowledge graph of all of the concepts, entities, and their relationships within the dataset. As new information is added, the knowledge graph would automatically update.
Identify new threads in your thinking and link previously unrelated ideas or insights together
Greatly diminish the ongoing maintenance and organization needs of your knowledge system (and reduce the associated cognitive load)
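To make the semantic search idea above concrete, here is a minimal, self-contained sketch. The `embed` function is a deliberately toy stand-in (a bag-of-words count vector); a real system would use learned embeddings from a model, so that “startup” and “new company” land near each other even without shared keywords. Everything here is illustrative, not a production implementation:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: a bag-of-words count vector.
    A real second brain would call a learned embedding model instead."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def semantic_search(query, notes):
    """Rank notes by similarity to the query; return the best match."""
    q = embed(query)
    return max(notes, key=lambda note: cosine(q, embed(note)))

notes = [
    "startup idea: a chatbot that answers questions over my Notion pages",
    "grocery list: eggs, milk, oat bars",
    "book notes: how habits form and stick",
]
print(semantic_search("chatbot startup idea", notes))
# → startup idea: a chatbot that answers questions over my Notion pages
```

Swap `embed` for real model embeddings (which is exactly what the frameworks mentioned later do) and the same ranking logic starts matching on meaning rather than keywords.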
To be clear - this hyper-personalized AI assistant will not replace you - but it will augment your thinking and complement your first brain’s ability to observe, reflect, generate new insights, and make new connections between ideas. All done via an intuitive conversational interface - it’s like talking to a ‘real’ personal assistant.
So how do we achieve this?
Current tech & its limitations
The stack to get us to a version of this already exists today. All we need is:
A ‘general purpose’ LLM (could be GPT-3 or something else) pre-trained on a large dataset - this will be used as a foundation of our assistant
Our personal corpus of knowledge (such as our Notion or Evernote notes)
An engine to process our data to make it ‘searchable’ (a number of open-source frameworks exist, and most rely on adding ‘embeddings’ to the dataset; examples include Haystack, Weaviate, GPT Index)
A framework to wrap the above into a conversational chatbot with memory (LangChain is an open-source project that is well suited for this)
While this may sound complicated to non-technical folks, the main point is that the tech to achieve this result already exists. It may not be perfect - some improvements around embeddings, prompting, etc. are required to optimize the bot’s performance - but it is already a massive improvement over the status quo.
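Wired together, the stack above boils down to a simple loop: retrieve relevant notes, stuff them into the prompt along with conversation memory, and call the model. The sketch below is purely illustrative - `fake_llm` is a placeholder for a real GPT-3 call, and the naive keyword retrieval stands in for embedding-based search; in practice a framework like LangChain handles this plumbing for you:

```python
def build_prompt(question, retrieved_notes, history):
    """Stuff retrieved notes and conversation memory into the model's context."""
    context = "\n".join(f"- {n}" for n in retrieved_notes)
    past = "\n".join(history) if history else "(none)"
    return ("Answer the question using only the personal notes below.\n"
            f"Notes:\n{context}\n"
            f"Conversation so far:\n{past}\n"
            f"Question: {question}\nAnswer:")

def fake_llm(prompt):
    # Placeholder: a real system would call GPT-3 (or another LLM) here.
    return "stub answer"

def chat_turn(question, notes, history):
    # 1) Retrieve: naive keyword overlap stands in for embedding search.
    words = set(question.lower().split())
    relevant = [n for n in notes if words & set(n.lower().split())]
    # 2) Augment: build a prompt from retrieved context plus memory.
    prompt = build_prompt(question, relevant, history)
    # 3) Generate and remember: call the model, append the turn to memory.
    answer = fake_llm(prompt)
    history.append(f"Q: {question} | A: {answer}")
    return answer, prompt

history = []
answer, prompt = chat_turn(
    "what were my startup ideas?",
    ["startup ideas from March: an AI note-taking assistant"],
    history,
)
```

The key design point: your notes never retrain the model - they are fetched at question time and injected into the prompt, which is why the assistant stays up to date as you add new notes.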
In fact, some hobbyists are already tinkering in this space. For example, Dan Shipper from Every uploaded 10 years of daily journaling into a GPT model and was able to receive accurate answers to some deeply personal and nuanced questions. The model was able to spot patterns across 10 years of daily data that our brains simply cannot. This experiment, albeit imperfect, provides a glimpse into what hyper-personalized assistants may look like in the near future.
Limitations
I would be remiss if I didn’t mention some of the limitations that we have to figure out in the near term. The most obvious one is “model hallucinations” - today’s imperfect LLMs sometimes conjure up an answer that is completely made up (but sounds believable!). If we rely on these assistants for suggestions or fact recollection, we need to be confident that these hallucinations do not occur. Another issue is bias in the general-purpose model itself - what if it leans a certain way politically and its output reflects that? What about biases that are less noticeable? We need a way to deal with this (or at least be hyperaware of this risk). Finally, there is a philosophical question - can an AI model put thoughts in my head? At what point is an idea truly mine? What are the dangers of relying on this brainstorming partner?
Predictions
To conclude this piece I wanted to throw out a few predictions on this topic:
The conversational interface will become ubiquitous, turning into a major way we interact with data and information across ALL workflows
Every major platform (Windows, MacOS, Gmail, Notion) will implement some sort of LLM-augmented ‘search’ for its users, but each dialog agent will have limited access to the data beyond its own platform (opportunity for aggregation)
“AI assistants” will go after Enterprise vertical applications - data retrieval / knowledge sharing across Sales (CRM applications), customer service, HR, etc.
Similar to how Google Search removed the need to memorize encyclopedia facts, LLMs will greatly reduce the importance of remembering things - humans will simply outsource that computational load to their “second brain”.
As a result, humanity will be moving orders of magnitude more data between different users/entities in ways never imagined before - this will require new and better ‘pipes’ (infrastructure and tooling) for data management
What do you think about the future of AI assistants? What happens to the world if all of us have access to Eddie Morra’s superpowers? Will it be the greatest equalizer, or will we experience rapidly growing inequality never seen before?
Tell me what you think! DMs open. @leveredvlad on Twitter :)
PS If you would like to collaborate on building an AI-enabled “second brain” please reach out - currently working on something new.
If you enjoyed reading this, subscribe to my newsletter! I regularly write essays about AI, crypto, and other cutting-edge technology.