Here’s why you need to start saying ‘please’ and ‘thank you’ to Alexa
The AI explosion is fuelled not by a vast leap in technology, but by one in accessibility, writes Jared Shurin. We greet Alexa in the morning, chat with Siri while we exercise, and share a cubicle with ChatGPT. What does that mean for how we should approach our relationship with tech?
“Daddy, why didn’t you say ‘please’?”
It took me a moment to understand why I was being chided by my four-year-old, especially on a topic as important as the “magic word”. As any parent can attest, transitioning one’s child from “chompy chomp now” to “breakfast, please” is a long-term campaign. To model the right behaviour, we have spent years being exceedingly polite around the home.
It turns out that my lapse in judgement was due to, of all things, Alexa. I had barked an order to our friendly corporate surveillance robot and was listening to the football scores while I dressed for the day. My annoyingly perceptive child had pounced on my mistake. “Daddy, you say please to me and to mummy, why don’t you say ‘please’ to Alexa?”
The obvious answer is that Alexa is a hunk of wires, harvesting our data for a large corporation and charging us for the privilege. It does not deserve basic courtesy, any more than I would apologise to my toaster. But Alexa isn’t a toaster: it is designed to emulate human speech patterns. It – she (as coded) – listens, talks, and responds like one of us. Alexa is not human, but humanised.
Nor is Alexa the only example. Our contact with many machines is now less mechanical than it is conversational. The recent proliferation of generative AI, for example, is less about back-end sorcery and more about the ease of the interface. We don’t need to go under the hood to use AI: we simply talk to ChatGPT. We have a chat with it, like we do with Alexa – or another human being. The AI explosion is fuelled not by a vast leap in technology, but by accessibility.
As a result, conversations between humans and machines are now happening at an unprecedented rate. We greet Alexa in the morning, chat with Siri while we exercise, and share a cubicle with ChatGPT. Contact with humanised technology is no longer a rarity – it is fast becoming the norm.
But, as my son gleefully pointed out, our conversational basics have lapsed. The normal rules of etiquette seem to be suspended. Should we ask Siri how her day was? Should we make small talk with ChatGPT before asking it to draft our web content? Should we say “please” to Alexa? As technology becomes more humanised, is there now an obligation to treat it humanely?
In my current, primary occupation as “role model to a small child”, the answer is obviously “yes”. Children imitate their parents’ behaviour. In my case, that means less swearing, more vegetables, and always remembering to say “please” and “thank you” when I’m in conversation with anyone – or anything.
Even at four, my child understands that Alexa isn’t a real person, not like grandma or Spider-Man. However, this is superseded by the commandment that when you ask someone to do something, you have to say “please”. To set a good example for my son, if nothing else, I am required to treat Alexa with some modicum of dignity.
But the issue goes deeper than simply being a good role model. To my credit, when we first installed Alexa, I did, tentatively, say “please” and “thank you” – but Alexa “herself” told me, gently, that I didn’t need to. She is manufactured to accept discourtesy. But etiquette is not a frivolous social convention: it is a set of easily replicable behaviours that establish and reinforce ethical patterns in our daily lives.
There’s no formal law against someone jumping ahead of us in line, but it is still socially unacceptable – etiquette prohibits it. Etiquette levels the playing field so everyone has their turn; it provides fairness and order across our everyday activities; and it ensures a basic level of mutual respect.
Laws help, of course, and I’d personally endorse any MP who enacts one about standing on the left of an escalator. However, it is impossible to legislate for every conceivable interaction between humans. Etiquette provides guidelines that people can draw on and apply fluidly to whatever the occasion demands. When we first learn these guidelines, they may seem unnecessarily complicated. Even so, it is easier to adhere to a few simple principles of politeness than to devise a specific rule for every individual circumstance.
Our new, AI-infused society also requires principles. As the interactions between humans and humanised technology increase exponentially, they are no longer taking place solely under controlled, predictable circumstances. The complexity, speed, and scale of these interactions make imposing rules an impossible task.
We absolutely need legislation to set an overarching framework for AI that protects jobs, privacy, intellectual property, and human safety – but we also need etiquette to guide us through the uncountable volume of micro-interactions that can, and will, take place every day. We need to devise principles that all of us can apply, to ensure a fair, orderly, and ethical use of technology.
Developing this etiquette quickly is also essential, because, without scaremongering, there is already evidence that our interactions with humanised AI can have a negative impact. The classic example is, perhaps, Microsoft’s Tay. First unleashed on Twitter in 2016, Tay was designed to learn from its conversations with other users. In less than a day, Tay began to post openly racist and sexist tweets, including Holocaust denial.
Tay is not the only example. The vast majority of deepfake imagery online (up to 96 per cent, by some estimates) is pornographic, and a huge proportion of it is non-consensual: revenge porn and sexualised pictures of celebrities, politicians, and other prominent female figures. AI doesn’t know that this is wrong.
Similarly, when first released, ChatGPT would helpfully craft pro-Putin propaganda or vaccine disinformation at user request. Although some basic guard rails have since been put in place, these can be easily circumvented. Users can, through their conversations, make AI into worse “people”.
Perhaps most worrying of all, the underpinning usefulness of AI as a knowledge engine is being corroded. Even as the volume of human/AI interactions has increased, researchers have found that the factual quality of ChatGPT’s outputs has deteriorated while its language skills have improved: its accuracy has decreased even as its persuasive skills have become more polished. We are teaching AI the art of “truthiness”: how to be a convincing liar.
Humanised AI also has an impact on humanity as a whole. Researchers have concluded that, as technologies present more “humanlike attributes”, their human users are more likely to perceive and respond to them as human actors. Sadly, this means we bring all our human prejudices with us.
Studies have found that we assign gender stereotypes to AI based on their voices – even finding that the perceived gender of a voice affected whether or not users would follow its recommendations. Similar studies have found biases towards regional and ethnic accents. Developing a code of conduct, or etiquette, for how we interact with AI is not only an urgent issue, but one that’s fraught with ethical challenges.
Do unemotional entities warrant our feelings for them? Can we learn to love and respect anyone, no matter how “other” they are – including artificial humans?
Perhaps this is ultimately not about the machine, but about the human. Whether or not a machine feels, we do, and with that comes an obligation. Being a human means acting humanely. A kind of compassion, even for AI, might feel like a challenge – but it is, perhaps, a necessary one.
By establishing an etiquette, a set of respectful behaviours, we can help set the “ground rules” as we rush into a future full of humanised technology. If a four-year-old can learn to treat technology with empathy, then so can we.