Research Hit: Is Real Artificial Intelligence Now Here (no, not ChatGPT)?
A new model of AI integrates "senses", learns quicker with less data and processing power, and is more transparent.
Are you suggesting that the current AI models such as ChatGPT aren’t real artificial intelligence?!
Kind of. I have mentioned before some of the limitations I experience with these models, which make them not very useful for me most of the time (such as in writing articles like this one). But sure, the results can be mightily impressive at times (and I have used them effectively too), as I am sure you are aware.
But I also notice that these models, despite being built on neural networks, do not really operate in the same way as the human brain does - which is why I was fascinated by a recently published piece on the PV-RNN (Predictive coding inspired, Variational Recurrent Neural Network) framework.
Wow, that is a mouthful - what does this AI model do differently then?
In layman’s terms, it works and learns like a toddler.
For example, a feature of human intelligence is the ability to generalise, and this begins very early in life: children quickly grasp the concept of a colour - say, red - and can then immediately apply it to all sorts of objects despite their being different shades of red: a red balloon, a red apple, a red shirt.
This model is also built on different sensory inputs - an embodied system: it has “vision” through a screen, “movement and proprioception” through a robotic arm, and linguistic inputs. It has now been shown to work and learn effectively across its different “senses”.
Wow, a real android then?
Nowhere near close to a fully operational android - it is only a simple robotic arm, but it has proprioception, i.e. it can sense touch and respond to its own position.
Of more interest is the design of the model, which is based on something called the Free Energy Principle, which I speak about more in my online book (here) - and a key feature of this is surprise minimisation, i.e. building a predictive brain.
What’s more, the architecture is fundamentally different from Large Language Models such as ChatGPT, which need masses of data to function. It is designed more like the human brain, with an executive centre from which it builds predictions. This is essentially what toddlers do: born with a functional architecture, they then build models based on inputs and responses to the world.
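The surprise-minimisation idea can be sketched in a few lines of toy code. To be clear, this is purely illustrative and not the actual PV-RNN architecture: the function names and numbers are my own invention. The idea is simply that the system holds an internal belief, compares it with what it observes, and repeatedly nudges the belief to reduce the prediction error (the "surprise").

```python
# Toy sketch of surprise (prediction-error) minimisation.
# Purely illustrative - not the PV-RNN model itself; names and
# values here are made up for demonstration.

def minimise_surprise(belief, observation, learning_rate=0.1, steps=50):
    """Iteratively nudge an internal belief towards an observation,
    shrinking the prediction error ('surprise') at each step."""
    for _ in range(steps):
        prediction_error = observation - belief      # the surprise signal
        belief += learning_rate * prediction_error   # update internal model
    return belief

# The internal belief converges towards the observed value.
belief = minimise_surprise(belief=0.0, observation=1.0)
print(round(belief, 3))
```

The point of the sketch is that learning happens by correcting predictions against experience, rather than by ingesting a massive dataset up front - which is the intuition behind the "predictive brain" framing above.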
So, how close are we to a functional android?
Miles away, but the key aspect here is that the model can function effectively and integrate its “senses” while requiring much less data - and hence no interconnected access to large databases.
Another massive advantage is that there is much more transparency in its internal models and how it makes decisions (making it likely more ethical) - LLMs are huge black boxes, and we don’t know how they make “decisions” or how biased they are. There are many humorous and also shockingly bad examples of this already.
I recently read AI Snake Oil, by the way, which is a good read and introduction to AI if you are not into it.
So isn’t this obvious: create something similar to the human brain to create artificial intelligence?
Yes, but it is fiendishly difficult to operationalise - and the Free Energy Principle is still a bit under the radar as well. I’m sure its role will increase massively over the next few years.
Could this be the next breakthrough?
We’re not even past the first AI (LLM) breakthrough - but I imagine it has the potential to really take AI to the next level. Whether we want that or not is another question!
Reference
Prasanna Vijayaraghavan et al. Development of compositionality through interactive learning of language and action of robots. Sci. Robot. 10, eadp0751 (2025). DOI: 10.1126/scirobotics.adp0751