Man v machine

Does the defeat of Kasparov by the Deep Blue computer mean that humans are no longer the only possessors of true intelligence?

Michael Lockwood
Monday 12 May 1997 18:02 EDT

The dramatic defeat of the world chess champion Garry Kasparov in a six-game match by an IBM computer, Deep Blue, raises a host of questions about the nature of human intelligence and the possibility of simulating it mechanically. Some will insist that Deep Blue no more possesses genuine intelligence than does a pocket calculator; others will take Kasparov's defeat as evidence that we ourselves are nothing more than very complicated machines. So who is right?

Well, what is certainly true is that today's chess-playing computers do not play the game in remotely the same fashion as do their human adversaries. Deep Blue, it is said, can examine 200 million distinct states of the board in a single second, whereas a human chess-player can only examine, perhaps, two such states. But then most of the computer's labour would, from the perspective of an experienced human player, be so much wasted effort: a matter of pursuing the possible consequences of moves that the human player would rightly dismiss out of hand.
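The brute-force principle at work here can be seen in miniature in a minimax search, which scores a position by exhaustively playing out every possible continuation. What follows is a deliberately tiny sketch using a toy game (players alternately add 1 or 2 to a running total; whoever reaches 10 wins), not Deep Blue's actual algorithm, which used far more elaborate evaluation and pruning:

```python
# A minimal sketch of exhaustive game-tree search, the brute-force
# principle behind chess computers. The "game" here is a toy: players
# alternately add 1 or 2 to a total, and whoever reaches 10 wins.

def moves(position):
    """Legal moves from a position: add 1 or add 2, without passing 10."""
    return [m for m in (1, 2) if position + m <= 10]

def minimax(position, maximising):
    """Exhaustively score a position: +1 if the maximising player can
    force a win, -1 if the opponent can. Examines every line of play,
    however unpromising - exactly the indiscriminate search a human
    player avoids by insight."""
    if position == 10:
        # Whoever just moved reached 10 and won.
        return -1 if maximising else 1
    scores = [minimax(position + m, not maximising) for m in moves(position)]
    return max(scores) if maximising else min(scores)

print(minimax(8, True))   # from 8 the player to move wins (add 2): 1
print(minimax(7, True))   # from 7 every move loses to best play: -1
```

Even in this toy game the search visits every branch of the tree; in chess the tree is astronomically larger, which is why raw speed matters so much to a machine playing this way.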

Pattern recognition plays a crucial role in human chess-playing, but is largely lacking in computer chess programs. Human players see positions on the board as relevantly similar to those they have encountered previously, but they would be hard put to say in what precise respect the current and the remembered positions resemble each other; this makes it difficult to program such knowledge into a computer.

But what Deep Blue lacks on the pattern recognition side, it more than makes up for in sheer speed. So it is with much of today's so-called artificial intelligence. It's not so much artificial intelligence, in our sense of the term, as incredibly rapid "artificial stupidity", where exhaustive and undiscriminating searches produce results we would achieve, if at all, only by highly selective searches guided by insight.

However, one shouldn't allow such considerations to make us too complacent about the claims of artificial intelligence. First, huge strides have already been made, and will doubtless continue to be made, in the field of pattern recognition, by so-called neural networks. A neural network (which normally exists only as a simulation on a conventional computer) can be thought of as a vast array of very simple processors, analogous to neurons in the brain, connected up in such a way as to enable the system to learn various prescribed tasks (where performing the task means producing certain outputs in response to certain inputs).

Information about the appropriateness of the system's outputs is repeatedly fed back into the system, and causes the strength of the connections between the processors to be adjusted so as to improve performance. This technology is likely, in due course, to make it possible to devise chess programs that play in a far more human fashion than Deep Blue, and which are capable, moreover, of learning from their mistakes.
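The feedback-driven adjustment of connection strengths described above can be illustrated with a single artificial "neuron". This is a toy example of the general learning rule, not a model of any real system: one unit learns the logical AND of two inputs, with the error signal strengthening or weakening each connection after every trial.

```python
# A toy illustration of learning by feedback: a single artificial
# "neuron" learns the logical AND of two inputs. Real neural networks
# connect many such units in layers, but the principle - feedback
# adjusting connection strengths - is the same.

def train_and_gate(epochs=10, rate=1):
    weights = [0, 0]   # connection strengths, adjusted by feedback
    bias = 0
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            output = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
            error = target - output          # the feedback signal
            weights[0] += rate * error * x1  # strengthen or weaken
            weights[1] += rate * error * x2  # each connection
            bias += rate * error
    return weights, bias

weights, bias = train_and_gate()
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0)
```

After a few passes through the data the errors stop, the connection strengths settle, and the unit reproduces AND correctly; nothing resembling a rule for AND was ever programmed in explicitly.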

Beyond that, there are some powerful theoretical arguments, deriving from the work of Alan Turing in the 1930s, which suggest that, in principle, the cognitive powers of the human mind could be matched by any suitably programmed conventional computer with sufficient memory and speed of operation. Modern computers (apart from their limited memory) are implementations of what is known as a universal Turing machine.

A Turing machine is an imaginary device (incorporating a reading, erasing and printing head which operates on a moving paper tape) which was invented by Turing in order to give a precise meaning to the concept of performing some cognitive task mechanically - multiplying two multi-digit numbers together would be an example of such a mechanical task.

Different Turing machines, as originally conceived, are designed to perform different tasks. But Turing showed that you could build a universal Turing machine which, given (on its tape) a description of any particular Turing machine, could then replicate the behaviour of that machine. And this, in essence, is what a modern, general-purpose computer is designed to do: programming a modern computer is, in effect, a matter of instructing it to behave like a particular Turing machine.
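Turing's idea of universality can be sketched in a few lines: a single generic routine that, given the description of any particular Turing machine as data (here a table of transition rules, standing in for the description on the tape), reproduces that machine's behaviour. The example machine below is an arbitrary choice for illustration; it simply inverts a tape of 0s and 1s.

```python
# A minimal sketch of a universal machine: one generic simulator that,
# handed the *description* of any particular Turing machine (a table of
# transition rules), replicates that machine's behaviour. The example
# machine inverts every bit on its tape and halts at the first blank.

def run_turing(rules, tape, state="start"):
    """Simulate any Turing machine given as a rule table.
    rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"   # "_" is blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape).strip("_")   # drop blank cells

# The description of one particular machine, supplied as data:
inverter = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing(inverter, "1011"))   # prints 0100
```

The point is that `run_turing` itself knows nothing about inverting bits; swap in a different rule table and the same routine becomes a different machine, which is precisely what programming a general-purpose computer amounts to.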

Now we shouldn't ordinarily think of our own cognitive activity as purely mechanical. To be sure, we spend much of each day engaged in routine tasks which call for little or no creative thought (if, indeed, they call for any thought at all). But we also do other things, such as composing a letter to a friend, which do seem to us to involve creativity. And, indeed, it is true of most classes of mathematical problems that there is no general automatic prescription for solving them. To that extent, doing mathematics, like playing chess, is itself, in general, a creative activity. But the fact that a person writing a letter to a friend, or a mathematician trying to prove some theorem, isn't operating according to conscious rules, doesn't exclude there being, at some level, rules at work governing the relevant thought processes: rules, moreover, which could in principle be programmed into a computer.

Evidence, after all, suggests that all mental activity is a manifestation of the workings of the brain. And the brain, being a material object, is presumably subject to the self-same laws of physics that govern matter elsewhere. These laws themselves appear to be such that the behaviour of anything which obeyed them could in principle be simulated by a universal Turing machine; ie by a suitably programmed computer.

Those who are impressed by this line of argument confidently expect that it will eventually be possible to program computers in such a way that they can pass themselves off as human beings in conversation. Turing himself proposed this, in 1950, as the acid test of whether a computer could think. He imagined a human being and a computer engaged in an "imitation game" with a human interrogator, whose task was to try to tell, on the basis of their answers to his questions, which was the human being and which was the computer. The computer would be programmed to answer the questions in as human a manner as possible, while the actual human being would try to persuade the interrogator that he or she was the real human being.

Turing argued that a computer which was capable of fooling such interrogators at least 50 per cent of the time should be regarded, not only as engaged in successful simulation of thought, but to be genuinely thinking. (We could imagine a similar set-up involving chess, with a human player simultaneously playing, via some remote link, a human player and a computer, and trying to guess which was which. Programming a computer to win a chess version of Turing's imitation game would clearly be a different matter from programming it merely to beat the human chess "interrogator" at chess: it would have to play like a human being, right down to making the sorts of mistakes a human would make.)

This Turing test has been enthusiastically embraced, by many contemporary workers in the field of artificial intelligence, as a test not merely of whether a computer is genuinely thinking - whatever that means - but of whether it is conscious. Indeed, some of Turing's remarks seem to imply that he himself regarded his test in this way.

The Turing test, thus interpreted, raises two questions which must be distinguished from each other. First, will it ever be possible to program a computer to pass the Turing test? People who answer "yes" to this question are said to believe in "weak AI" ("AI" meaning artificial intelligence). Second, if a computer could be constructed and/or programmed to pass the Turing test on a regular basis, at least as often as the average human being would, should it be credited with consciousness? People who believe in weak AI and answer "yes" to this second question are said to believe in "strong AI".

Let us suppose that weak AI is true, and that in the fullness of time experts in artificial intelligence succeed in programming computers (operating on essentially the same principles as current ones) reliably to pass the Turing test. Should we then conclude, in accordance with strong AI, that the computers are conscious, having "inner lives" comparable to our own? I think not.

Consciousness, as I see it, is a great mystery; nothing in our current understanding provides the smallest clue as to what it is, in physical terms, or why it should exist at all. But I take it that it is a biological phenomenon which evolved in response to various adaptive pressures: thus regarded, it is there only because it produces behaviour which conduces to the survival of our genes. Consciousness was nature's solution to certain problems of adaptation. But what nature had to work with, in solving this problem, is very different from what we have to work with.

Think of nature as under pressure to engender, in animals, dispositions to produce certain sorts of behaviour in response to various sorts of stimuli. From the fact that nature produced the desired relationship between sensory input and behavioural output by creating consciousness, it doesn't follow that we, with our technology, cannot produce this relationship without creating consciousness. Baldly put, perhaps nature wouldn't have needed to produce consciousness, if she had had etched silicon to work with, rather than organic carbon.

Finally, wouldn't it be better, on the whole, if strong AI were false, always assuming that we could be sure? "Intelligent" computers would be much more useful to us if we could confidently treat them as mechanical slaves, rather than as sensitive beings with rights that we were morally obliged to respect. But if we are one day faced with computers that can pass the Turing test, and we remain unsure whether they are conscious or not, one might plausibly argue that we should give them the benefit of the doubt!

Michael Lockwood is a lecturer in philosophy at Oxford University. He is the author of 'Mind, Brain and the Quantum' (Blackwell, 1989).

William Hartston analyses the final two games between Kasparov and Deep Blue in The Tabloid, page 14.
