Artificial Intelligence Is Teaching Us New, Surprising Things About the Human Mind
Thought is made of ever-changing electrical patterns, not activity anchored to individual neurons. Meta is working on a system to read your mind.
The world has been learning an awful lot about artificial intelligence lately, thanks to the arrival of eerily human-like chatbots.
Less noticed, but just as important: Researchers are learning a great deal about us – with the help of AI.
AI is helping scientists decode how neurons in our brains communicate and explore the nature of cognition. This new research could one day lead to humans connecting with computers merely by thinking – as opposed to typing or issuing voice commands. But there is a long way to go before such visions become reality.
I say tomato, you say pangolin
Celeste Kidd, a psychology professor at the University of California, Berkeley, was surprised by what she discovered when she tried to examine the range of opinions people have about certain politicians, including Barack Obama and Donald Trump.
Her research was intended to explore the widening divergence in how we conceive of subjects to which we attach moral judgments – such as politicians. Previous work has shown that morally fraught concepts are the ones people perceive in the most polarized ways.
To establish a baseline for her experiment, she began by asking thousands of study participants about their associations with common nouns, in this case animals.
What she discovered was that even for common animals – including chickens, whales and salmon – people’s notions of their characteristics are all over the map. Are whales majestic? You’d be surprised who disagrees. Are penguins heavy? Opinions vary. By quizzing people on many such associations, Dr. Kidd was able to amass a pool of data from which people could be clustered according to which associations they agree on. Using this method, she found that people fall into anywhere from 10 to 30 distinct clusters, depending on how they perceive a given animal.
Dr. Kidd and her team concluded that people tend not to see eye to eye about even the most basic characteristics of common objects. We also overestimate how many people see things as we do. In a world in which it feels like people are increasingly talking past one another, the root of this phenomenon may be the fact that even for citizens of a single country speaking a common language, words simply don’t mean the same thing to different people.
That might not seem like a very profound observation, but what Dr. Kidd’s research suggests is that the degree to which it’s true may be much greater than psychologists previously thought.
Arriving at this insight required the application of a mathematical tool that makes many kinds of AI possible, known as a “clustering model.”
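To make the idea concrete, here is a minimal sketch of how a clustering model can group survey respondents by which word associations they endorse. The data, the questions and the choice of the k-means algorithm are illustrative assumptions, not Dr. Kidd’s actual method.

```python
# Illustrative sketch: cluster survey respondents by which word
# associations they endorse. Data and algorithm choice are assumptions
# for demonstration only, not the study's actual method.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one respondent; each column is a yes/no answer to an
# association question, e.g. "Are whales majestic?", "Are penguins heavy?"
responses = np.array([
    [1, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
])

# Group respondents into clusters of people who tend to agree.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(responses)
print(kmeans.labels_)  # which cluster each respondent lands in, e.g. [0 0 1 1 0]
```

Varying the number of clusters and measuring how well each grouping fits the data is, roughly, how researchers can arrive at a range like 10 to 30 clusters rather than assuming the answer in advance.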
The most important feature of AI for enabling new kinds of research, says Dr. Kidd, is the same one that makes possible AI chatbots like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing chat: the capacity of modern computer systems to process far more data than in the past. It “opens up a lot of possibilities for new insights, from biology to medicine to cognitive science,” she adds.
Cracking the brain’s neural code
In her research, Tatiana Engel, an assistant professor of neuroscience at Princeton University, uses the same kinds of networks of artificial neurons that are behind most of what we currently call artificial intelligence. But rather than using them to better target ads, generate fake images or compose text, she and her team use them to interpret the electrical signals of hundreds of neurons at once in the brains of animals.
A three-dimensional model of the subject’s head is created with a laser scanner. PHOTO: JIN S. LEE
Dr. Engel and her team then go a step further: They train networks of artificial neurons to perform the same tasks as an animal – say, a swimming worm. They find that those artificial networks organize themselves in ways that reasonably approximate how neural circuits are organized in real animals. While neural networks in the brain are vastly more complicated, the result of this simulation is a model system that is both close enough to its biological equivalent, and simple enough, to teach us things about how the real brain works, Dr. Engel says.
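To give a flavor of this approach, here is a minimal sketch assuming a toy evidence-accumulation task and a small recurrent network in PyTorch. The task, architecture and analysis are illustrative stand-ins for the general technique, not Dr. Engel’s actual models.

```python
# Illustrative sketch: train a tiny recurrent network on a toy
# decision task, then inspect its internal activity the way
# neuroscientists inspect recordings from real neurons.
# Task and architecture are assumptions for demonstration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: given a noisy stream of +1/-1 evidence, report its sign.
def make_batch(batch=64, steps=20):
    drift = torch.sign(torch.randn(batch, 1))       # the hidden "decision"
    x = drift + 0.5 * torch.randn(batch, steps)     # noisy evidence over time
    return x.unsqueeze(-1), (drift > 0).float()     # inputs, labels

rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    x, y = make_batch()
    h, _ = rnn(x)                  # hidden activity across all time steps
    logits = readout(h[:, -1])     # decision read out at the final step
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the hidden-state trajectories can be analyzed
# (e.g., with dimensionality reduction) and compared with population
# recordings from an animal performing a similar task.
x, y = make_batch(batch=8)
h, _ = rnn(x)
print(h.shape)  # (8 trials, 20 time steps, 16 artificial neurons)
```

The payoff of this style of modeling is that the artificial network is fully observable: every “neuron” can be recorded at once, making it a testbed for hypotheses about the real circuit.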
One key insight this yields is that the actual substance of thought – the patterns that constitute the mind you’re using to read this sentence – is dynamic electrical activity in our brains rather than something physically anchored to particular neurons.
In other words, in contrast to what neuroscientists once believed about how we make decisions, there are no “eat the chocolate” neurons and “don’t eat the chocolate” neurons. Thinking, it turns out, is just electrical signals zooming about inside our heads, forming a complex code that is carried by our neurons.
What’s more, AI is letting scientists listen in on the things that happen in our brains when we’re not doing anything in particular.
“This allows us to discover the brain’s internal life,” says Dr. Engel.
Do androids dream of electric sheep? We don’t know yet, but we may soon be able to determine if humans are thinking about the real thing.
Real-life mind reading
If a research lab owned by Meta Platforms, Facebook’s parent company, figuring out how to read your mind makes you at all uncomfortable, you’re probably not going to be a fan of what the rest of the 21st century has in store.
Historically, it’s been very difficult to measure brain activity inside our heads, because the electrical signals generated by our brains, which are minuscule to begin with, must be measured from outside our skulls. (Elon Musk’s aspirations for his Neuralink startup notwithstanding, opening up our heads and putting in brain interfaces hasn’t proved popular.)
But progress in artificial intelligence techniques is yielding a more powerful amplifier of those weak brain signals. Meta’s AI lab published research on one such mind-reading technology last summer.
Meta scientists didn’t actually stick anyone in a brain scanner. Instead, they used data on brain signals gathered by researchers at universities. This data was captured from human subjects who listened to words and phrases while sitting in non-invasive brain scanners. These scanners came in two varieties: One, called an EEG (short for “electroencephalogram”), is the sort of electrodes-embedded-in-a-swim-cap device with which many people are familiar. The other, called a MEG (for “magnetoencephalogram”), looks like a supervillain’s attempt to create a world-crushing megabrain.
To analyze this data, researchers used a type of AI called a “self-supervised learning model.” Without this technique, the latest generation of AI chatbots would be impossible. Such models can extract meaning from giant pools of data without any instruction from humans, and have also been used to try to figure out what animals are communicating to one another.
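Here is a minimal sketch of the self-supervised, contrastive idea behind this kind of decoding, assuming toy encoders and random stand-in data. The shapes, architectures and training details are illustrative assumptions, not Meta’s published model.

```python
# Illustrative sketch of contrastive, self-supervised decoding: learn
# to embed brain recordings and speech segments so that matching pairs
# land close together. Sizes and encoders are assumptions for
# demonstration, not the actual published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

brain_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
speech_encoder = nn.Sequential(nn.Linear(80, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(list(brain_encoder.parameters()) +
                       list(speech_encoder.parameters()), lr=1e-3)

for step in range(100):
    # Stand-in batch: 16 brain recordings paired with 16 speech features.
    brain = torch.randn(16, 128)
    speech = torch.randn(16, 80)

    b = F.normalize(brain_encoder(brain), dim=-1)
    s = F.normalize(speech_encoder(speech), dim=-1)

    # Contrastive objective: each brain clip should match its own
    # speech clip (the diagonal) rather than the other 15 in the batch.
    logits = b @ s.T / 0.1
    targets = torch.arange(16)
    loss = (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, a new brain recording can be decoded by ranking
# candidate speech segments by embedding similarity and picking
# the closest match.
```

No human labels the data here: the pairing of each recording with the audio the subject heard is the only supervision signal, which is what makes the approach “self-supervised.”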
A little less than half of the time, Meta’s AI algorithm was able to correctly guess what words a person had heard, based on the activity generated in their brains. That might not sound too impressive, but it’s leaps and bounds better than what such systems have been able to achieve in the past.
Alexandre Défossez, a scientist at Meta who was part of the team that conducted this research, says that the eventual goal of this work is to create a general-purpose “speech decoder” that can directly transform our brain activity – our thoughts – into words.
Imagine texting a friend just by thinking about it – as long as you’re wearing an EEG cap at the moment, at any rate. The technology could have a big impact on the lives of people who are unable to communicate in other ways, adds Dr. Défossez.
It’s just one more example of the way that AI might someday give us the tools for improving our individual and collective well-being – or at least an explanation for why, in the age of social media, both of those things frequently seem so deranged.