A Brief History of Intelligence is a surprisingly good book, given that its title suggests it is also about AI. Rather than being even a little bit about AI, the book takes an evolutionary approach to
intelligence, and it covers its topic really well, even innovatively: it traces the rise of human-like intelligence by following taxonomy and evolutionary history, and the result is particularly good.
For instance, the rise of brains is tied to an organism's ability to steer and move:
The first brain and the bilaterian body share the same initial evolutionary purpose: They enable animals to navigate by steering. (kindle loc 1046)
Surprisingly, the author notes that modern ailments such as depression can be observed even in relatively primitive intelligences such as the nematode:
If a nematode is exposed to thirty minutes of a negative stimulus (such as dangerous heat, freezing cold, or toxic chemicals), at first it will exhibit the hallmarks of the acute stress response—it will try to escape, and stress hormones will pause bodily functions. But after just two minutes of no relief from this inescapable stressor, nematodes do something surprising: they give up. The worm stops moving; it stops trying to escape and just lies there. This surprising behavior is, in fact, quite clever: spending energy escaping is worth the cost only if the stimulus is in fact escapable. Otherwise, the worm is more likely to survive if it conserves energy by waiting. Evolution embedded an ancient biochemical failsafe to ensure that an organism did not waste energy trying to escape something that was inescapable; this failsafe was the early seed of chronic stress and depression. (kindle loc 1046)
Then there's the rise of reinforcement learning, and the problem of temporal credit assignment: when something succeeds, how do you know which of the things you did in the past gave rise to the success? It turns out that reinforcement learning, where you co-evolve both an actor and a critic, is what makes temporal credit assignment, and thus learning, possible in animals:
Dopamine is not a signal for reward but for reinforcement. As Sutton found, reinforcement and reward must be decoupled for reinforcement learning to work. To solve the temporal credit assignment problem, brains must reinforce behaviors based on changes in predicted future rewards, not actual rewards. This is why animals get addicted to dopamine-releasing behaviors despite it not being pleasurable, and this is why dopamine responses quickly shift their activations to the moments when animals predict upcoming reward and away from rewards themselves. (kindle loc 1619)
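The "reinforcement vs. reward" distinction maps directly onto temporal-difference learning. Here's a minimal critic-only sketch (my own toy example, not code from the book) in which the TD error, the quantity playing dopamine's role, starts out at the moment of reward and fades as earlier states learn to predict it:

```python
# Minimal TD-learning sketch: a critic learns state values on a
# five-state chain where only the final state pays a reward.
# The TD error ("delta") plays the role the book assigns to dopamine:
# a *change* in predicted future reward, not the reward itself.

N_STATES = 5
ALPHA, GAMMA = 0.1, 0.9
values = [0.0] * (N_STATES + 1)  # learned value per state; terminal stays 0

def run_episode():
    """Walk the chain left to right, updating the critic at each step."""
    deltas = []
    for s in range(N_STATES):
        reward = 1.0 if s == N_STATES - 1 else 0.0
        # TD error: (reward + discounted next prediction) minus current prediction
        delta = reward + GAMMA * values[s + 1] - values[s]
        values[s] += ALPHA * delta  # credit creeps backward over episodes
        deltas.append(delta)
    return deltas

first = run_episode()
for _ in range(2000):
    last = run_episode()

# On the first episode the surprise sits entirely at the rewarded final step;
# once earlier states predict the reward, the error there shrinks toward zero.
print("first episode deltas:", [round(d, 2) for d in first])
print("late episode deltas: ", [round(d, 2) for d in last])
```

After enough episodes the critic's values approach the discounted distance to the reward, so the prediction error, like the dopamine response Bennett describes, stops tracking the reward itself and only fires when predictions change.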
Then the author (Max Bennett) explores how memory evolved alongside the brain's ability to simulate the world, and why human memory is so famously unreliable. In effect, when you remember something, you are projecting into the past and recreating the environment you were in. The problem is that you are using generative algorithms to re-create those memories, and the same kind of hallucinations you might have encountered in AI systems are responsible for creating false memories. Your memories of the past and your ability to project into the future and make plans are two sides of the same coin, and in many ways equally unreliable!
Once you have a memory system and a way to simulate the world, you're able to spatially map the world and gain useful data. In an open-ended environment (Bennett points to a paper using Montezuma's Revenge as an example), it turns out that you need to evolve a new instinct in order to solve extremely complicated problems:
The approach is to make AI systems explicitly curious, to reward them for exploring new places and doing new things, to make surprise itself reinforcing. The greater the novelty, the larger the compulsion to explore it. When AI systems playing Montezuma’s Revenge were given this intrinsic motivation to explore new things, they behaved very differently—indeed, more like a human player. They became motivated to explore areas, go to new rooms, and expand throughout the map. But instead of exploring through random actions, they explored deliberately; they specifically wanted to go to new places and to do new things...The importance of curiosity in reinforcement learning algorithms suggests that a brain designed to learn through reinforcement, such as the brain of early vertebrates, should also exhibit curiosity. And indeed, evidence suggests that it was early vertebrates who first became curious. Curiosity is seen across all vertebrates, from fish to mice to monkeys to human infants. In vertebrates, surprise itself triggers the release of dopamine, even if there is no “real” reward. And yet, most invertebrates do not exhibit curiosity; only the most advanced invertebrates, such as insects and cephalopods, show curiosity, a trick that evolved independently and wasn’t present in early bilaterians. (kindle loc 2058-2065)
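A minimal way to sketch this "surprise is reinforcing" idea in code (an illustrative assumption on my part, not the specific method from the Montezuma's Revenge paper) is a count-based novelty bonus: states the agent has rarely visited pay out extra reward, and the bonus decays with familiarity:

```python
import math
from collections import defaultdict

# Toy curiosity signal: novelty itself is rewarding, so an agent is
# pushed toward new rooms even when the game gives no points for them.

visit_counts = defaultdict(int)

def intrinsic_reward(state):
    """Novelty bonus that shrinks as a state becomes familiar."""
    visit_counts[state] += 1
    return 1.0 / math.sqrt(visit_counts[state])

def total_reward(state, extrinsic, curiosity_weight=0.5):
    """Extrinsic (game) reward plus the weighted curiosity bonus."""
    return extrinsic + curiosity_weight * intrinsic_reward(state)

# A brand-new room is reinforcing even with zero game reward...
print(total_reward("room_2", extrinsic=0.0))   # first visit: full bonus
# ...but the bonus fades with repetition, nudging the agent onward.
for _ in range(98):
    intrinsic_reward("room_2")
print(total_reward("room_2", extrinsic=0.0))   # 100th visit: small bonus
```

The decaying bonus is what makes the exploration deliberate rather than random: once a room stops being surprising, the reinforcement it offers dries up, and the only way to keep earning it is to go somewhere new.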
One interesting point is that actor and critic reinforce each other as learning proceeds, but they must ultimately be grounded in real senses and real-world results. When the brain no longer gets real input, the entire system is capable of hallucinating:
People whose eyes stop sending signals to their neocortex, whether due to optic-nerve damage or retinal damage, often get something called Charles Bonnet syndrome. You would think that when someone’s eyes are disconnected from their brain, they would no longer see. But the opposite happens—for several months after going blind, people start seeing a lot. They begin to hallucinate. This phenomenon is consistent with a generative model: cutting off sensory input to the neocortex makes it unstable. It gets stuck in a drifting generative process in which visual scenes are simulated without being constrained to actual sensory input—thus you hallucinate. (kindle loc 2545)
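As a toy analogy (mine, not the book's model), you can see the same instability in any predictor that normally corrects its guesses against sensory input: remove the input and it runs open loop on its own output, drifting arbitrarily far from reality:

```python
# Toy illustration of a generative model with and without sensory
# grounding. With observations, prediction error pulls belief back
# toward reality; without them, the model recycles its own guesses
# and drifts freely: Charles Bonnet syndrome in miniature.

def step(belief, observation=None):
    prediction = belief + 0.1          # generative guess (slightly biased)
    if observation is None:
        return prediction              # open loop: nothing corrects the guess
    return 0.5 * prediction + 0.5 * observation  # senses anchor the belief

belief = 0.0
for _ in range(100):                   # "eyes open": observations available
    belief = step(belief, observation=0.0)
anchored = abs(belief)

for _ in range(100):                   # "optic nerve cut": predictions only
    belief = step(belief, observation=None)
drifted = abs(belief)

print(f"anchored to senses: {anchored:.2f}, open loop: {drifted:.2f}")
```

With input, the belief settles near the true value; without it, the model's small generative bias compounds every step, which is the flavor of instability the Charles Bonnet description points at.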
Similarly, this explains the presence of activities such as dreaming. The evolution of imagination is also important, and points to the fact that generation and recognition occupy the same circuits in the brain and cannot be performed simultaneously:
The most obvious feature of imagination is that you cannot imagine things and recognize things simultaneously. You cannot read a book and imagine yourself having breakfast at the same time—the process of imagining is inherently at odds with the process of experiencing actual sensory data. In fact, you can tell when someone is imagining something by looking at that person’s pupils—when people are imagining things, their pupils dilate as their brains stop processing actual visual data. People become pseudo-blind. As in a generative model, generation and recognition cannot be performed simultaneously. (kindle loc 2569)
The book goes on to explain autism, human social behavior, and language. There's too much going on there to summarize quickly in a review, but essentially, one more layer of the cortex can be devoted to monitoring the brain itself. This might not seem useful, but it is in fact what's necessary in highly social primate groups to develop the theory of mind needed to maintain social status. That need gives rise to consciousness and self-awareness (you need both to simulate the perspective of other brains), and in turn to language.
The book contains lots of interesting ideas and is well worth reading. I highly recommend it!