Bayesian brain theory to reach AGI without LLMs?
What is AGI and why should I care?
AGI, or Artificial General Intelligence, is a system that can apply its skills in the real world and navigate it, using one or more world models to do so. This is something LLMs can mimic but not truly understand. Former Meta Chief AI Scientist and Turing Award winner Yann LeCun has called the LLM architecture a “dead end” [Turing Award winner Yann LeCun: Large models are a “dead end”].
I don’t disagree with him, but one key component of a complete AGI system could still include some form of LLM technology.
Promising paths
While studying and designing an AGI system, I accidentally arrived at a design resembling the Bayesian brain theory, starting from simple logical puzzles. For example, it can be extremely hard to understand what the number 2 is if you have only ever seen 1s and 0s your whole life!
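To make the Bayesian-brain idea concrete, here is a minimal sketch of a belief update, my own illustration rather than part of the actual design. The hypotheses and probabilities are invented for the "only seen 0s and 1s" example: an agent holds a strong prior that only two digits exist, then observes a "2".

```python
# Minimal Bayesian belief update. Hypotheses and probabilities are
# illustrative placeholders, not from any real model or dataset.

def bayes_update(prior, likelihoods):
    """Return posterior P(h | obs) from prior P(h) and likelihood P(obs | h)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior shaped by a lifetime of seeing only 0s and 1s.
prior = {"only_0_and_1": 0.99, "more_digits_exist": 0.01}

# How likely is observing the symbol "2" under each hypothesis?
likelihoods = {"only_0_and_1": 0.001, "more_digits_exist": 0.5}

posterior = bayes_update(prior, likelihoods)
# A single surprising observation flips the belief toward "more digits exist",
# even though the prior against it was 99 to 1.
```

The point is that what the agent can represent is constrained by its priors: until evidence forces an update, "2" is nearly unthinkable.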
Therefore, it needed more: a novelty-gate mechanism that triggers the saving (or discarding) of information, and a database structure that mimics our subconscious by hiding data from parts of the system, so that only key data is exposed, prepared in the correct format for the algorithms and encoders to do their work.
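One simple way to sketch such a novelty gate, and this is my own hypothetical implementation, not the author's actual mechanism, is to measure how surprising an observation is under a running frequency model and save it only when the surprise crosses a threshold:

```python
import math

# Hypothetical novelty gate: an observation is saved to memory only if it is
# "surprising" under the agent's current model. Surprise is measured as
# -log2 P(obs); the threshold value is an arbitrary illustrative choice.

class NoveltyGate:
    def __init__(self, threshold_bits=2.5):
        self.threshold_bits = threshold_bits
        self.counts = {}   # crude frequency model of past observations
        self.total = 0
        self.memory = []   # the "saved" information

    def observe(self, obs):
        # Laplace-smoothed probability of this observation so far.
        p = (self.counts.get(obs, 0) + 1) / (self.total + 2)
        surprise = -math.log2(p)
        # The gate: store sufficiently novel observations, discard the rest.
        if surprise >= self.threshold_bits:
            self.memory.append(obs)
        # Update the internal model either way.
        self.counts[obs] = self.counts.get(obs, 0) + 1
        self.total += 1
        return surprise

gate = NoveltyGate(threshold_bits=2.5)
for symbol in ["0", "1", "0", "0", "1", "2"]:
    gate.observe(symbol)
# Familiar 0s and 1s are discarded; the never-before-seen "2" is saved.
```

The hidden-database idea maps naturally onto this split: the gate's internal model stays private, while only the gated `memory` is exposed to the rest of the system.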
All of this is actively speculated about in the neuroscience community, with much of today’s research framed in algorithmic terms. For example, the brain can easily learn a new skill by piecing together parts of skills it has learned before: [Scientists uncover the brain’s hidden learning blocks].
In conclusion, we still have a lot to learn. I will soon try to share parts of my design for an AGI system.
