If computers can't feel, they can't think

UNDER THE MICROSCOPE

Susan Greenfield
Saturday 03 August 1996 18:02 EDT

I was sitting in an audience of some 1,000 people in Tucson, Arizona, when I suddenly had an insight into a nagging problem. As insights go, I suppose this one was pretty modest: it was not so much a solution to the problem as a crystallization of what the problem was in the first place.

There we were, a broad collection of physicists, astronomers, psychologists, neuroscientists and philosophers, at a meeting on scientific approaches to consciousness. One philosopher/psychologist was justifiably proud of the feats of a very advanced computer, or "artificial network", which could, for example, "recognize" the gender of a variety of faces, or even a particular face seen from different angles. As we sat in admiration, it seemed that only the most mystical or muddled could deny that this was the scientific path to understanding consciousness: eventually to build a machine of such complexity that consciousness would spontaneously evolve within it, and shed light on that elusive issue, the physical basis of mind.

Then I recalled a conversation I had had with this man a few nights previously, in which he admitted to an irritating conundrum. No one would even begin to argue that the machines featuring in his research were conscious; yet they performed impressively all the same, so it was hard to see how being conscious would improve the current efficiency of the silicon system. I suddenly realized what my particular problem was.

It struck me that these artificial and blatantly unconscious brains had already far exceeded the learning and perceptual feats of, say, a very small baby. On the other hand a baby, by most people's lights, is conscious. It is assumed that the baby is capable of feelings that no one would attribute to even the most sophisticated computer. And yet, as we were shown, the artificial systems already humming can far outstrip the child in the sophistication of their "behaviours". It dawned on me that working on ever cleverer tricks of perception and learning was therefore not a helpful way to go, at least if you wanted to understand how consciousness occurs. Why aren't scientists concentrating on what the arguably conscious system, the baby, already has? A modest portfolio of crude feelings, emotions.

The catch here is that emotion is such an emotive word: it conjures up thoughts of extremes, such as joy and depression, or of highly personalized experiences, such as love. A baby would not be expected to swoon at a Mozart sonata: yet newborns may well derive "pleasure" from the sensation of suckling, or staring at mobiles. At the start of life, it is easy to think of feelings of pain and pleasure as unfocussed, becoming increasingly finely tuned, diversified and idiosyncratic as we grow up and plough our particular furrow through life. Instead of tricks with the senses, of Pavlovian "learning" by stimulus-response trial and error, it suddenly seemed to me that consciousness was primarily about emotion - the most basic yet elusive ingredient, growing in range and depth as the brain did.

Indeed, my thoughts ran on, it is feelings that are modified by consciousness-changing drugs such as Prozac, which in turn act on specific chemicals in the brain. In the past it had always been drugs that had worried me: computer buffs had never really bothered with the chemical specificity of the brain, and how a certain chemical would have a totally different effect from any other. I could now see why the omission of drugs and brain chemicals from computer models of the brain had been, for me at least, such a stumbling block on the silicon path.

The issue is not whether current computers are conscious or not - no one expects them to be - but rather the problem lies in the strategy of hoping that, by building machines that get better and better at doing things that do not require consciousness, the missing ingredient, the feelings that are the essence of our awareness, will somehow materialize. It is like adding ingredients to an increasingly impressive and subtle curry, in the hope that the taste will spontaneously emerge as that of Baked Alaska.

Susan Greenfield is a neuroscientist at the University of Oxford, and Gresham Professor of Physic, London.
