LLMs (large language models) are changing the world. Widely used LLMs were built by training algorithms to predict the next word based on the preceding word sequence.
(Technically, LLMs operate on “tokens,” not words. Words and tokens are related but distinct: tokens are subword units learned from data rather than dictionary words, so a typical LLM tokenizer would likely represent the word “isn’t” with two ordered tokens, something like “isn” followed by “’t.”)
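To make the word/token distinction concrete, here’s a toy greedy longest-match tokenizer over a tiny hand-made vocabulary. Real LLM tokenizers (e.g., byte-pair encoding) learn their vocabularies from huge corpora; this vocabulary and the splits it produces are purely illustrative.

```python
# Toy longest-match tokenizer. The vocabulary below is invented for
# illustration only; real tokenizers learn theirs from data.
VOCAB = ["isn", "'t", "is", "not", " "]

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i,
        # falling back to a single character if nothing matches.
        match = max(
            (v for v in VOCAB if text.startswith(v, i)),
            key=len,
            default=text[i],
        )
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("isn't"))   # ['isn', "'t"] — one word, two tokens
print(tokenize("is not"))  # ['is', ' ', 'not']
```

The point is simply that token boundaries need not line up with word boundaries.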
I propose that brains are like LLMs but with “tokens” representing discrete events that could happen, rather than words.
Similar to how LLMs predict the next word in a sequence based on the preceding words, brains predict the next event in a sequence of events either in the real world (while we’re awake) or a sequence of imagined events presented to us (while dreaming).
We begin learning cause-and-effect the moment we’re born. Our brains are event prediction machines.
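A minimal way to picture “event prediction” is conditional frequency over observed sequences: track which event tends to follow which, then predict the most common follow-up. The event names here are invented for illustration; a brain’s representation is obviously far richer.

```python
from collections import Counter, defaultdict

# Observed event sequences ("life experience"). Names are made up.
history = [
    ["dark_clouds", "thunder", "rain"],
    ["dark_clouds", "rain"],
    ["dark_clouds", "thunder", "rain"],
]

# Count which event follows which.
transitions = defaultdict(Counter)
for sequence in history:
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1

def predict_next(event):
    """Return the most frequently observed follow-up event."""
    return transitions[event].most_common(1)[0][0]

print(predict_next("dark_clouds"))  # 'thunder' (seen twice vs. 'rain' once)
```

This is the event-level analogue of an LLM predicting the next token from the tokens so far.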
We’re so prone to, and adept at, detecting patterns that we often “see” patterns even when they’re caused by random/chance variation, e.g., the “hot hand” fallacy in sports. (Maybe that’s why so many people love sports gambling?)
Which brains?
I propose that brains generally – not just HUMAN brains – are event sequence predicting machines because language is unnecessary for brains to operate this way, and I don’t see why animal brains would operate differently. Thinking in terms of events and trying to predict the next event are as valuable to the survival and thriving of animals generally as to humans specifically.
I’m about to share how analyzing a dream I had last night led me to the insight underlying this blog post, and we know animals also dream:
In his new book, When Animals Dream: The Hidden World of Animal Consciousness, philosopher David M. Peña-Guzmán of San Francisco State University argues the science shows that animals really do dream, and that those dreams are evidence of consciousness.
– “Do Animals Dream? with David M. Peña-Guzmán,” University of Chicago, July 21, 2022, https://news.uchicago.edu/do-animals-dream-david-m-pena-guzman
Dreams as event prediction sequences
I woke up stressed from a dream this morning. My most recurring dream involves losing my laptop through carelessness. My dream last night had several unusual twists:
- First, last night’s dream didn’t involve MY laptop, as my laptop dreams usually do.
- Second, last night’s dream didn’t revolve around the laptop. The laptop only entered the story long into my dream.
I was at a conference center at some kind of reunion. I had already had conversations with several old acquaintances unrelated to laptops. One acquaintance needed to go somewhere and foolishly stashed her laptop at the bottom of a public refrigerator (weird, I know… but it was a conference center with no obvious places to store valuables) and asked me to keep an eye out. I soon discovered someone had stolen it. I went to my friend and told her. A split second later I was shocked to see she had instantly whipped out her mobile phone and screamed in a panic, “They’re already in my files!” I immediately suggested she cancel her credit cards.
One interesting bit here is that these “old acquaintances” were somehow faceless – I have no idea what any of them looked like – and unrelated to anyone I’ve ever known in real life, as if representing the CONCEPT of an “old acquaintance” rather than an actual old acquaintance.
More importantly:
- How and why did my brain manage to surprise “me” during my dream?!?!
- How did my brain decide that the next thing to happen would be my acquaintance pulling out her cellphone to discover her files had been accessed?
- And how did my brain decide this without my dream consciousness being aware of it?
I don’t recall ever contemplating/imagining anything like this before. But recognizing that laptop loss could lead to further losses preventable through prompt action is a valuable insight.
If one purpose of dreaming is to imagine alternative possibilities that might occur in real life and/or to imagine alternative ways of responding to events that have happened to us or might happen to us, dreams that throw unexpected events at us might be extremely valuable.
Do brains generate dreams as GANs (generative adversarial networks) generate images, videos, and simulated environments?
Are our dreams event sequences generated as we predict each next event? If so, who’s doing the predicting?
- My “dream mind ‘consciousness’” in my unconscious dream…
- Or… the mind behind the curtains of my “dream mind ‘consciousness’”?
I suspect they’re working in tandem, synergistically generating my dreams by each choosing some of the events in my dream event sequences. After the mind behind the curtains generates an event or three, “conscious me” takes a turn or two. Kind of like how encoders and decoders work in tandem in generative AI models to accomplish something neither could alone. Or how NPCs (non-player characters) in video games interleave their actions with the gamer’s actions.
Here’s how IBM describes a GAN:
A generative adversarial network (GAN) is a machine learning model designed to generate realistic data by learning patterns from existing training datasets. …[T]wo neural networks work in opposition — one generates data, while the other evaluates whether the data is real or generated. While deep learning has excelled in tasks such as image classification and speech recognition, generating new data, including realistic images or text, has been more challenging due to the complexity of computations in generative models.
GANs, introduced by Ian Goodfellow in his 2014 paper “Generative Adversarial Nets,” offer a groundbreaking solution to this challenge. This innovative framework has transformed generative modeling, making it easier to develop models and algorithms capable of creating high-quality, realistic data.
A GAN architecture consists of two deep neural networks: the generator network and the discriminator network. The GAN training process involves the generator… creating synthetic data such as images, text or sound that mimics the real data from the given training set. The discriminator evaluates both the generated samples and the data from the training set and decides whether it’s real or fake.
– Jobit Varughese, “What are generative adversarial networks (GANs)?” https://www.ibm.com/think/topics/generative-adversarial-networks
Substitute “life experience” for “training datasets,” “mind behind the curtains” for “generator network,” “dream mind ‘consciousness’” for “discriminator network,” and “how best to respond” for “whether it’s real or fake” and this perfectly describes how I propose brains dream!
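The substitution above can be sketched as a loop: a “generator” proposes candidate next dream events, and a “discriminator” scores how plausible each feels, with the winner becoming the next event. This only echoes the GAN division of labor; it is not an actual trained adversarial network, and every event name and score below is invented.

```python
import random

# Invented plausibility scores standing in for the "discriminator's"
# learned judgment (the "dream mind consciousness" in the analogy).
PLAUSIBILITY = {
    "laptop_stolen": 0.7,
    "laptop_turns_into_bird": 0.1,
    "friend_checks_phone": 0.8,
}

def generator(n=2):
    """The 'mind behind the curtains': propose n candidate next events."""
    return random.sample(list(PLAUSIBILITY), n)

def discriminator(event):
    """Score a candidate event's plausibility (0 to 1)."""
    return PLAUSIBILITY[event]

def next_dream_event():
    """Keep the candidate the discriminator finds most plausible."""
    candidates = generator()
    return max(candidates, key=discriminator)

print(next_dream_event())
```

In a real GAN the two networks train each other adversarially; here the loop just illustrates the proposed division of labor between the two “minds.”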
The cutting edge of generative AI is probably Google DeepMind’s Genie 3:
a general purpose world model that can generate an unprecedented diversity of interactive environments.
Given a text prompt, Genie 3 can generate dynamic worlds that you can navigate in real time at 24 frames per second, retaining consistency for a few minutes at a resolution of 720p.
Genie 3 is generating worlds that users can navigate, shape, and interact with, much like I imagine our brains simulate dreams that our “dream mind ‘consciousness’” can interact with.
Prediction machines love surprises… because that’s how they improve
Another important aspect of my dream was my shock at my acquaintance immediately checking her phone and discovering her files had been accessed. In my dream, that stunned me. I don’t recall ever thinking about a laptop theft as an event requiring immediate action to avoid further loss/damage. But my brain ran this simulation and taught me a potentially valuable lesson. Where did it get the idea?
Like an LLM trained to predict the next word but configured to sample its predictions, sometimes choosing likely words that aren’t THE most likely, my brain could be generating events in my dreams that it considers plausible next events, even if they aren’t the most likely predictions. Injecting some randomness is more likely to expose us to interesting learning opportunities than sticking close to the most predictable path.
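This “sometimes pick a less likely option” behavior is what LLM practitioners call temperature sampling: raising the temperature flattens the probability distribution so unlikely options come up more often. Here’s a minimal sketch with invented event probabilities.

```python
import math
import random

def sample(probs, temperature=1.0):
    """Sample a key from {option: probability}, reshaped by temperature.

    Each probability p becomes p**(1/temperature): low temperature
    sharpens the distribution toward the most likely option, high
    temperature flattens it so unlikely options appear more often.
    """
    weights = [math.exp(math.log(p) / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

# Invented next-event distribution for the dream scenario.
next_event_probs = {
    "nothing_happens": 0.6,
    "thief_appears": 0.3,
    "files_hacked": 0.1,
}

print(sample(next_event_probs, temperature=0.1))  # almost always the top event
print(sample(next_event_probs, temperature=2.0))  # surprises much more often
```

At temperature near zero this collapses to always choosing the most likely event; cranking it up is the “inject some randomness” knob described above.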
When reality diverges from our expectations, we experience surprise. When we’re surprised, we get emotionally aroused and become more attentive. Paying greater attention helps us learn more from the unexpected experience. That’s learning. Our brain holds a model of how the world works, and when reality surprises us, we’re wise to pay attention and revise our internal theory of the world.
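The “surprise drives revision” idea maps onto a simple delta rule from learning theory: the update to the internal model is proportional to the prediction error, so bigger surprises produce bigger revisions. The numbers below are purely illustrative.

```python
def update(expectation, outcome, learning_rate=0.5):
    """Revise an expectation toward an observed outcome.

    The prediction error (surprise) scales the size of the update:
    no surprise, no learning; big surprise, big revision.
    """
    surprise = outcome - expectation
    return expectation + learning_rate * surprise

belief = 0.0  # e.g., "how likely is someone else to be in that chair?"
for observation in [1.0, 1.0]:  # two surprising observations in a row
    belief = update(belief, observation)

print(belief)  # 0.75 — belief has moved most of the way toward reality
```

Repeated surprises keep pulling the internal model toward what actually happens, which is exactly the revision-on-surprise loop described above.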
Several days ago, I was stunned in real life seeing my daughter sitting in the kitchen chair where my son always sits. It’s a totally mundane event, except that it was so statistically improbable that my mind actually “saw” my son there for a split second before realizing it was my daughter and seeing her! (Inputs from our eyes, ears, skin, etc. are filtered through many interpretive brain layers – an ACTUAL neural network – before reaching our consciousness. What we think we see is NOT what we actually see.)
My mind’s mind was blown seeing my daughter where I was 99.99% certain my son was! I immediately shared my surprise with my wife and daughter, who (naturally) thought little of it, and thought me rather crazy for acting so surprised.
Are humans obsessed with stories because our brains are event prediction machines?
Humans love stories. We obsess over novels and movies and TV dramas and office gossip. We crave it. Why?
A while back, I listened to the first 20% of Dean Buonomano’s book titled “Your Brain Is a Time Machine”. I played it at high speed while doing chores, but I absorbed enough to grasp his main idea.
As I was finishing this post, I searched for “brain as prediction machine” and found that MANY have argued for this perspective… so many that it’s practically a cottage industry. A few examples:
- Are We All Just Predictive Machines? The Brain as a Guessing Engine
- Your Brain Is a Prediction Machine That Is Always Active
- The Brain as a Prediction Machine: The Key to Consciousness?
- The brain is a prediction machine: It knows how well we are doing something before we even try
- Predictions in the Brain: Using Our Past to Generate a Future
- The Experience Machine: How Our Minds Predict and Shape Reality
I don’t know whether this post on dreams as GAN-like simulations breaks any ground scientists haven’t already covered, but the insight was new to me. And, most interestingly, it came to me from a dream!
Some people don’t think in words but in “bubbles [combining] concepts, images, and feelings”
One last piece of supporting evidence that brains are event prediction machines, not LLMs, is that not every human thinks in words:
According to We Used to Think Everybody Heard a Voice Inside Their Heads – But We Were Wrong:
Only in recent years have scientists found that not everyone has the sense of an inner voice – and a new study sheds some light on how living without an internal monologue affects how language is processed in the brain.
This latest study, from researchers at the University of Copenhagen in Denmark and the University of Wisconsin-Madison in the US, also proposes a new name for the condition of not having any inner speech: anendophasia.
This is similar to (if not the same as) anauralia, a term researchers coined in 2021 for people who don’t have an inner voice, nor can they imagine sounds, like a musical tune or siren.
Our brains are unlikely to be LLMs because not everyone thinks in words. Some people report thinking in concepts; to communicate, they must then consciously translate their mental concepts into words.
I can’t find an Internet reference to the “BBC Science Focus” article “Silent Minds: The People Who Have No Inner Voice” by James Lloyd that I read on “Apple News,” but here’s how one person describes his thinking:
It was a lightbulb moment for Koski when he discovered via an online video that other people have an inner voice. “When I used to watch movies that voiced a character’s inner monologue, I just thought they were doing it for effect,” he says. “I didn’t realise people actually experienced that. My mind was blown.”
He says that his lack of inner speech doesn’t mean a lack of thoughts; it’s just that his thoughts don’t involve language.
“If you think of the mind as an ocean,” he says, “then each of my thoughts feels like a bubble rising into my consciousness. Inside the bubble is a combination of concepts, images and feelings, but no words or speech.”
Whereas someone with inner speech might think “Where did I put my keys?”, Koski says that his thought bubble might contain the concept of something missing, an image of his home with all the places the keys could be, and a feeling of dread. He also experiences his thoughts as shimmering with colour, depending on the emotion. Surprise is pale yellow; anxiety is dark blue or purple; dread is a translucent black.
…One benefit for Koski is that people tell him he’s good at explaining complex ideas, which he puts down to his way of thinking. “I love reading about new topics,” he says, “and if I learn about, say, a physics theory, it’s like all the information gets condensed into a single thought bubble, and then I can convert that back into words for others.”
Koski’s explanation that “each of my thoughts feels like a bubble [containing] a combination of concepts, images, and feelings” reminds me of the formless, nameless, personality-less “old acquaintances” from my dream last night who seemed to represent the CONCEPT of an “old acquaintance.”
Koski’s description of his thoughts “feel[ing] like a bubble rising into my consciousness” sounds exactly like the “man behind the curtains” I described above or the “generator network” in a GAN.
With thanks to “RDNE Stock project” for this photo shared on Pexels.com.