By now there must be a lot of people who actually believe that little glowing lights move along their axons and dendrites when they think, flashing at the synapses.
Anyway.
There has been a lot of fuss about AI lately, what with Google Translate switching over to a neural network, rich people funding AI ethics research, and the EU trying to get ahead of the legislative curve. There has also (this is humans in conversation after all...) been a lot of stuff on the grave dangers to humanity of superintelligent AIs from the likes of Stephen Hawking and Nick Bostrom.
Before we get too carried away, it seems to me that there is one very important question we should be investigating: what is the computational complexity of general intelligence? Before I say how we might find an answer, let me explain why it matters by looking at the extremes that answer might take.
At one end is linear complexity. In this case, if we have a smart computer, we can make it ten times smarter by using a computer that is ten times bigger or faster.
At the other end is exponential complexity. In this case, if we have a smart computer, we can make it ten times smarter only by having a computer that is twenty-two thousand times bigger or faster. (That is e^10 times bigger or faster; there may be a factor in there too, but that's the essence of it.)
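To make the two extremes concrete, here is a back-of-the-envelope sketch in Python. The "smartness scale" and the baseline are assumptions of mine, made purely for illustration; the exact constant factor depends on where you put the baseline, as noted above.

    # Back-of-the-envelope comparison of the two extremes. The smartness
    # scale and baseline here are illustrative assumptions, not measurements.
    import math

    smartness_multiplier = 10.0

    # Linear complexity: ten times smarter costs about ten times the machine.
    linear_factor = smartness_multiplier

    # Exponential complexity: resources grow like e^smartness, so a ten-fold
    # increase on this scale costs roughly e^10 times the machine (the exact
    # prefactor depends on the baseline, as the text says).
    exponential_factor = math.exp(smartness_multiplier)

    print(f"linear case:      about {linear_factor:.0f}x bigger or faster")
    print(f"exponential case: about {exponential_factor:,.0f}x bigger or faster")  # ~22,026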
If smart computers do really present a danger, then the linear case is bad news, because the machines can easily outstrip us once they start designing and building themselves, and it is quicker to make a computer than to make a person. In the exponential case the danger becomes negligible, because the machines would have great difficulty obtaining the resources to make smarter versions of themselves. The same problem would inhibit us trying to make smarter machines too (or smarter people by genetic engineering, come to that).
Note, in passing, that given genetic engineering the computers have no advantage over us when they, or we, make smarter versions of themselves or ourselves. The computational complexity of the problem must be the same for both.
The big fuss about AI at the moment is almost all about machine learning using neural networks. These have been around for decades doing interesting little tricks like recognising printed letters of the alphabet in images. Indeed, thirty years ago I used to set my students a C programming exercise to make a neural network that did precisely that.
Some of the computational complexity of neural-net machine learning falls neatly into two separate parts. The first is the complexity of teaching the network, and the second is the complexity of it working out an answer to a given problem once it has been taught. The computer memory required for the underlying network is the same in both cases, but the times taken for the teaching process and the give-an-answer process are different and separable.
Typically learning takes a lot longer than finding an answer to a problem once the learning is finished. This is not a surprise - you are a neural network, and it took you a lot longer to learn to read than it now takes you actually to read - say - a blog post.
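As a toy illustration of that split, here is a minimal sketch: a one-layer network taught to tell two made-up 5x5 "letter" bitmaps apart, with the teaching phase and the give-an-answer phase timed separately. The task, the network size, and every number in it are my own choices, not the original student exercise.

    # Minimal sketch of the split described above: the same small network,
    # with the teaching cost and the answer-a-question cost timed separately.
    # The task and all numbers are toy choices, not the original exercise.
    import time
    import numpy as np

    rng = np.random.default_rng(0)

    # Two crude 5x5 letter bitmaps, "T" and "L", flattened to 25 inputs.
    T = np.array([[1,1,1,1,1],
                  [0,0,1,0,0],
                  [0,0,1,0,0],
                  [0,0,1,0,0],
                  [0,0,1,0,0]], dtype=float).ravel()
    L = np.array([[1,0,0,0,0],
                  [1,0,0,0,0],
                  [1,0,0,0,0],
                  [1,0,0,0,0],
                  [1,1,1,1,1]], dtype=float).ravel()
    X = np.stack([T, L])
    y = np.array([0.0, 1.0])   # 0 means "T", 1 means "L"

    # A one-layer network (logistic regression) trained by gradient descent.
    w = rng.normal(scale=0.1, size=25)
    b = 0.0

    def predict(x, w, b):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    start = time.perf_counter()
    for _ in range(5000):                     # the teaching phase
        p = predict(X, w, b)
        w -= 0.5 * ((p - y) @ X) / len(y)
        b -= 0.5 * np.mean(p - y)
    teach_time = time.perf_counter() - start

    start = time.perf_counter()
    answer = predict(L, w, b)                 # the give-an-answer phase
    answer_time = time.perf_counter() - start

    print(f"teaching took  {teach_time*1e3:8.2f} ms")
    print(f"answering took {answer_time*1e3:8.4f} ms  (prediction {answer:.2f}, 1 means 'L')")

Even on a toy this small, the teaching loop takes orders of magnitude longer than producing a single answer, which is the point of the paragraph above.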
The reason for the current fuss about machine learning is that the likes of Google have realised that their big-data stores (which are certainly exponentially bigger than the newsprint that I used to give my students to get a computer to read) are an amazingly rich teaching resource for a neural network.
And here lies a possible hint at an answer to my question. The teaching data has increased exponentially, and as a result the machines have got a little bit smarter.
On the other hand, once you have taught a neural network, it comes up with answers (that are often right...) to problems blindingly fast. The time taken is roughly proportional to the logarithm of the size of the network. This is to say that, if a network takes one millisecond to answer a question, a network twenty-two thousand times bigger will take just ten milliseconds.
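To get a feel for how gently a logarithmic answer time grows, here is a toy calculation. The cost model below (a millisecond of baseline plus about a millisecond per e-fold of growth) is my own assumption, chosen only to echo the rough figures above.

    # Toy illustration of answer time growing like the logarithm of network
    # size. The cost model (1 ms baseline + ~1 ms per e-fold of growth) is an
    # assumption chosen to match the rough numbers above, not a measurement.
    import math

    for size_factor in (1, 10, 1_000, 22_000, 1_000_000):
        answer_time_ms = 1.0 + math.log(size_factor)
        print(f"{size_factor:>9,}x bigger network -> about {answer_time_ms:.0f} ms per answer")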
But the real experiments to find the computational complexity of general intelligence are staring us in the face. They lie in biology, not in computing. Psychologists have spent decades figuring out how smart squirrels, crows, ants, and all the rest are. And they have also investigated related matters like how fast they learn, and how much they can remember. Brain sections and staining should allow us to plot a graph of numbers of neurons and their degree of interconnectivity against an ordering of smartness of species. We'd then be able to get an idea of whether ten times as smart requires ten times as much brain, or twenty-two thousand times as much, or somewhere in between. This interesting paper seems like a start.
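Here is a sketch of what that comparison might look like once the counts exist. The species data below are invented placeholders; the point is only that if brain resources grow exponentially with smartness, the logarithm of the neuron count should fit a straight line against the smartness ordering better than the raw count does.

    # Sketch of the proposed test, run on invented numbers. All the "data"
    # below are hypothetical placeholders, not comparative-cognition results.
    import numpy as np

    smartness_rank = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # behavioural ordering
    neuron_count   = np.array([1e6, 2e7, 4e8, 8e9, 1.6e11])  # invented counts

    def straight_line_fit_quality(x, y):
        """R-squared of a least-squares straight-line fit of y on x."""
        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)
        return 1.0 - residuals.var() / y.var()

    # If smartness needs exponentially more brain, the log of the counts
    # should be the (near-)straight line, not the counts themselves.
    print("raw neuron counts vs rank:", straight_line_fit_quality(smartness_rank, neuron_count))
    print("log neuron counts vs rank:", straight_line_fit_quality(smartness_rank, np.log(neuron_count)))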
Finally, Isaac Asimov had a nice proof that telepathy doesn't exist. If it did, he said, evolution would have exploited and refined it so fast and so far that it would be obvious everywhere.
We, as the smartest organisms on the planet, like to think we have taken it over. We have certainly had an effect, and now find ourselves living in the Anthropocene. But that effect on the planet is negligible compared to - say - the effect of phytoplankton, which are not smart at all. And our unique intelligence took three billion years to achieve. This is a strong indication that it is quite hard to engineer, even for evolution.
My personal guess is that general intelligence, by which I mean what a crow does when it bends a wire to hook a nut from a bottle, or what a human does when they explain quantum chromodynamics, will turn out to be exponentially hard. We may well get there by throwing exponential resources at the problem. But to get further either the intelligent computer, or we, will require exponentially more resources.
5 comments:
Really interesting article. Just another thought: our complex intelligence is a result of evolution, which is a slow process. Pretty much like genetic algorithms: they are quite slow, but give good solutions over time. However, if we (or the computers themselves) found a better way (in terms of resources/neurons) of increasing "intelligence", the evolutionary approach would be quickly surpassed and then... who knows!
Yes - I agree. Basically what you are saying is that if we (or a smart computer) finds a better solution to intelligence than the connectionist model that we and nature now use, then the complexity of connectionism becomes irrelevant. You must be correct there.
I ask for the data, and a few weeks later it starts to arrive: http://www.sciencedirect.com/science/article/pii/S2352154616302637
More data. AI seems to be exponentially hard, as I guessed it might be: https://srconstantin.wordpress.com/2017/01/28/performance-trends-in-ai/
Another result: https://arxiv.org/abs/2404.04125