Sunday Brain Mix: AI is Overconfident, AI Creates Social Norms, AI is Close to Free Will, and AI Learns Like Us
Time for the Sunday Brain Mix, which today is more of an AI mix. But first, I am happy to say that my fantastic founding members have received the first part of the chapter Your Decisive Brain from The Handbook of Your Brain in Business. This has been a long time in the writing and is a comprehensive overview of all things brain and decision making. Paid subscribers will receive the instalments from next week and all subscribers will receive extracts. Thanks to everyone for being a subscriber!
In the Handbook I have also revisited some classic cases in neuroscience, and this week one of the first conditions ever ascribed to a specific brain region (way back in the 1800s) was linked to AI, which I thought was an interesting take. So this week is a summary of AI topics and AI's relationship to humanness and brain processes.
AI is Overconfident
Last week I was with the family and we were looking up some relatively specialised knowledge and read through the Google AI result. You probably know the type of thing that now comes up at the top of search results, with Google's Gemini giving a top-level summary. These are sometimes very useful summaries.
However, you may also find, particularly in an area you know well, that this summary is slightly off, or completely wrong. This is what happened with the family: we realised the response was junk. But because of the way AI formulates these responses in well-articulated sentences, AI comes across as overconfident. Or, at worst, like that ignorant person who spouts falsehoods with calm confidence.
This is what a group of researchers in Japan noticed, and in particular the relationship to something called Wernicke's aphasia. Wernicke is famed in neuroscience circles because his description of this aphasia and the related brain region is one of the first cases of localisation in the brain. Localisation is the now-standard theory that our functions are localised and specialised in different brain regions.
At the end of the 1800s, Wernicke noted that language comprehension, as distinct from language production, seemed to be located in a region on the left of the brain, because the patients he examined who could still produce speech but had lost comprehension all had damage to this region.
For this, the region in the left superior temporal gyrus is often called Wernicke's area, and a speech disorder is named after him: Wernicke's aphasia. Wernicke's aphasia is when people retain language production but lack comprehension. This means they often spout well-formed sentences that mean nothing, without comprehending them themselves. This is just like AI.
To see if this was truly the case, the team of researchers used something called energy landscape analysis, a technique first developed by physicists to visualise energy states in magnetic metals. It has recently been adapted for neuroscience and the brain's energy states.
They reviewed patterns in resting brain activity from people with different types of aphasia and compared them to internal data from several AI tools (tools based on large language models, like ChatGPT). The team discovered some striking similarities: the way digital signals move around and are manipulated in these AI models closely matches the way signals behave in the brains of people with certain types of aphasia, including Wernicke's aphasia.
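For the technically curious, here is the gist of energy landscape analysis in code. This is a minimal sketch of the core idea only, not the study's actual pipeline: it assumes activity patterns binarised to ±1 and modelled with a pairwise maximum-entropy (Ising) model, and the h and J parameters below are random toy values rather than values fitted to brain or LLM data.

```python
import numpy as np
from itertools import product

# Toy energy landscape: a pairwise maximum-entropy (Ising) model over n units.
# In a real analysis, h (biases) and J (couplings) would be fitted to
# binarised recordings; here they are random placeholders for illustration.
rng = np.random.default_rng(0)
n = 5
h = rng.normal(size=n)
J = rng.normal(size=(n, n))
J = (J + J.T) / 2          # couplings are symmetric
np.fill_diagonal(J, 0)     # no self-coupling

def energy(s):
    """Ising energy E(s) = -sum_i h_i*s_i - sum_{i<j} J_ij*s_i*s_j, s in {-1, +1}^n."""
    return -h @ s - 0.5 * s @ J @ s

def is_local_minimum(s):
    """A state is a local minimum if no single-unit flip lowers the energy."""
    e = energy(s)
    for i in range(n):
        t = s.copy()
        t[i] = -t[i]
        if energy(t) < e:
            return False
    return True

# Enumerate all 2^n states and keep the local minima: the "attractor" states
# the system tends to settle into, which characterise the landscape.
minima = [s for s in map(np.array, product([-1, 1], repeat=n)) if is_local_minimum(s)]
for s in minima:
    print(s, round(float(energy(s)), 3))
```

The local minima, and the barriers between them, summarise how the system's states behave over time, and it is features of this landscape that can then be compared between, say, aphasic brains and the internal signals of large language models.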
Obviously these AI models don't have brain damage, but the parallel shows that they draw on knowledge without understanding and can still form coherent sentences that sound very confident to us users. As I also write about in Your Decisive Brain, we have to be cautious with overconfidence.
This knowledge and technology may also help refine these AI tools.
With AI sounding human, albeit overconfident, how similar is AI to us in other advanced human abilities?
Apparently very much so, with more recent research showing that groups of AI agents spontaneously create social norms for interacting with each other!
AI Develops its Own Social Norms
Most research into AI looks at single models or tools, not at collectives. Understandably, we are focused on how individual AI tools perform. So what happens when you put groups of AI together?