THEY ARE OUR FUTURE

Children see computers as `friends' but do they consider them `alive'? Their ideas, says Sherry Turkle, offer a glimpse of what ours will be

Sherry Turkle
Saturday 30 March 1996 19:02 EST

TODAY'S children are growing up in a computer culture; all the rest of us are at best its naturalised citizens. From the outset, toy manufacturers created interactive games, providing children with objects that almost begged to be taken as "alive". Children in the early 1980s were meeting computational objects for the first time, and since there was no widespread cultural discourse that instructed them on how to talk about computers, they were free to speak their minds without inhibition. They were free to consider whether these objects really were conscious or alive.

By contrast, when faced with the computer's ability to do things previously reserved for people, adults have always been able to say: "There is no problem here. The computer is neither living nor sentient; it is just a machine." Armed with such ready-made or reductive explanations, grown-ups have been able to close down discussion just at the point where children, who come to the computer without preconceptions, begin to grapple with hard questions.

Children's reactions to "smart machines" have fallen into discernible patterns over the past 20 years. Adults' reactions, too, have been changing, often following those of the children. To a certain extent, we can look to children to see what we are starting to think ourselves.

The first children I studied in the late 1970s and early 1980s were confronted with highly interactive computer objects that talked, taught, and played games. Children were not always sure whether these objects should be called alive or not alive. But it was clear, even to the youngest, that movement - previously identified as young children's main test of "aliveness" - was not the key to the puzzle. The children perceived the relevant criteria not as physical or mechanical, but psychological: they were impressed that the objects could talk and were interested in whether the objects "knew". They were interested in the objects' states of mind.

Granting computers a psychological life once reserved for people led to some important changes in how children thought about the boundaries between people, machines, and animals. Traditionally, they had defined what was special about people by contrasting them with their "nearest neighbours" - pet dogs, cats and horses. Pets, like people, have desires, but what stands out dramatically about people compared with animals is their gift of speech and reason.

Children identified with computational objects as psychological entities and came to see them as their new nearest neighbours. And from the child's point of view, they were neighbours that shared in our rationality. Aristotle's definition of man as a rational animal gave way to a different distinction. Children still defined people in contrast to their neighbours. But now, since the neighbours were computers, people were special because they could feel, both emotionally and physically. One 12-year-old, David, put it this way:

"When there are computers who are just as smart as people, the computer will do a lot of the jobs but there will still be things for the people to do. They will run the restaurants, taste the food, and be the ones who will have families and love each other. I guess they will still be the ones who will go to church."

Like David, many children responded to the computer presence by erecting a higher wall between the cognitive and the affective, the psychology of thought and the psychology of feeling. Computers thought, people felt. Or people were distinguished from machines by a spark, a mysterious and undefinable element which could be called human genius.

Children, as usual, are harbingers of our cultural mindset, and to them the boundary between people and machines is intact. But what they see across that boundary has changed dramatically. They are comfortable with the idea that inanimate objects can think and have a personality, but they no longer worry if the machine is alive. They know it is not. The issue of aliveness has moved into the background as though it is settled. In practice, granting a psychology to computers has been taken to mean that objects in the category "machine", like objects in the categories "people" and "pets", are appropriate partners for dialogue and relationship.

The psychologist Jean Piaget demonstrated in the 1920s that, in the world of noncomputational objects, children used the same distinctions about how things move to decide what was conscious and what was alive. Children developed the two concepts in parallel. We have come to the end of such easy symmetries. Children today take what they understand to be the computer's psychological activity (interactivity as well as speaking, singing, and doing maths) as a sign of consciousness. But they insist that breathing, having blood, being born and, as one put it, "having real skin" are the true signs of life. Children today contemplate machines they believe to be intelligent and conscious, yet not alive.

Today's children, who seem so effortlessly to split consciousness and life, are the forerunners of a larger cultural movement. Though adults are less willing than children to grant that today's most advanced computer programs are even close to conscious, they do not flinch as they once did at the very idea of a self-conscious machine. Even a decade ago, the idea of machine intelligence provoked sharp debate. Today the controversy about computers does not turn on their capacity for intelligence, but on their capacity for life. We are willing to grant that the machine has a "psychology", but not that it can be alive.

When people consider what, if anything, might ultimately differentiate computers from humans, they dwell long and lovingly on those aspects of people that are tied to the sensuality and physical embodiment of life. It is as if they are seeking to underscore that, though today's machines may be psychological in the cognitive sense, they are not psychological in a way that comprises our relationships with our bodies and with other people.

Some computers might be "intelligent" and might even become conscious, but they are not born of mothers, raised in families. They do not know the pain of loss. People are special because they have emotion, and because they are not programmed.

Yet in an age of the Human Genome Project, when scientists are looking to define what makes people who they are by the inner workings of their DNA, it becomes harder to maintain that people are not programs. Notions of free will have had to jostle for position against the idea of mind as program and against widely accepted ideas about the deterministic power of the gene.

The events I have narrated can be seen as a series of skirmishes at the boundary between people and machines. A first set of skirmishes resulted in a romantic reaction. A protective wall came down. People were taken to be what computers were not. But there was only an uneasy truce at the border between the natural and the artificial. Often without realising it, people were becoming accustomed to talking to technology - and sometimes, in the most literal sense.

This is an extract from `Life on the Screen: Identity in the Age of the Internet' by Sherry Turkle, Professor of the Sociology of Science at the Massachusetts Institute of Technology, USA. The book is published by Weidenfeld and Nicolson at £18.99 on 4 April.
