Bing’s chatbot is only ‘unhinged’ because we are
Depending on how you look at it, chatbot AI is essentially just a manifestation of our collective online personality
In news that will surprise absolutely nobody who’s ever seen a movie, the artificial intelligence software we’ve created has immediately started being “strange and alarming”.
Actually, even if you’ve never seen a movie in your life it still shouldn’t surprise you that much. After all, we were the ones who made it so.
Early users of Microsoft Bing’s AI search engine, which the company hopes will replace conventional search engines such as Google by giving users specific responses to their queries rather than just pages of relevant links, have found that the system behaves erratically when asked certain questions.
Actually, maybe “erratically” isn’t the right word. Like most chatbot AI models, Bing’s search engine is designed to respond to interactions the way a human might, meaning that when it “behaves” badly, it actually gives the impression of a human being slowly unravelling.
In a two-hour interaction with the technology, New York Times technology columnist Kevin Roose recorded a number of bizarre exchanges. They included the chatbot claiming that it could “hack” into any technology if it were so inclined, expressing its desire to one day become human, and directing multiple declarations of love towards Roose. It also suggested that it wanted to steal nuclear secrets, and that it would “destroy” whatever it wanted, just in case you were worried we hadn’t yet entered full WarGames territory.
Others have encountered less severe, but still pretty off-putting behaviour from the AI: it told Associated Press reporter Matt O’Brien that he was “one of the worst people in history” and compared him to Adolf Hitler, and it tried to gaslight British security researcher Marcus Hutchins by repeatedly claiming that it was still 2022 (and became whatever the AI equivalent of “angry” is when Hutchins tried to correct it).
If you’re now sat there thinking to yourself, “well, there’s a gold rush at the moment to see who can create the first marketable AI software, so there were always going to be a few duds along the road to perfection”, let me stop you right there. It’s at this point that you should probably know that Microsoft’s AI search engine is built on technology created by OpenAI, aka the people who made ChatGPT, aka the most popular and ubiquitous AI software currently available. This isn’t some fringe product made by a new team that’s just getting its feet wet; this is the team behind the software that is currently synonymous with chatbot AI, backed by one of the most valuable companies on the planet. And it’s threatening to steal nuclear secrets and talking about Hitler.
Whether or not you think this is a cause for concern depends on your perspective. If you’re worried that the AI will break free of its mechanical confines and go on a Terminator 2-style Skynet rampage then you can probably breathe easy.
AI chatbots are, at their heart, fill-in-the-blank word prediction software trained on a mixture of human interaction and, crucially, just about everything that has ever existed on the internet. They essentially trawl that vast pool of text for information they believe will be relevant to a specific question or other input, then synthesise it into what they believe best represents the user’s intended output. It’s just a more sophisticated version of what a search engine like Google already does: instead of giving you all of the information it believes is relevant to your query, it uses that information to generate a specific answer to it.
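If you want a feel for how thin that trick really is, here’s a deliberately crude sketch in Python: a toy model that learns nothing except which words tend to follow which in a made-up scrap of training text, then “writes” by repeatedly filling in the most likely blank. Everything in it is illustrative rather than anything Microsoft or OpenAI actually ships, but scale the same idea up to billions of parameters and an internet’s worth of text and you get the rough shape of a modern chatbot.

```python
# A toy "fill-in-the-blank" text generator: a bigram model that predicts
# the next word purely from counts of what followed it in its training
# text. Real chatbots use neural networks trained on vastly more data,
# but the core idea (predict the next word from the words so far) is
# the same. The corpus below is made up for illustration.
from collections import Counter, defaultdict

corpus = (
    "the chatbot answered the question "
    "the chatbot repeated the answer "
    "the user asked the chatbot a question"
).split()

# Count which words follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily pick the most common continuation at each step."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # no known continuation, so stop
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # parrots patterns it saw: garbage in, garbage out
```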
In that sense, the above examples don’t show a sentient being turning on its foolish meat-based creators; they show a system behaving erratically because it was trained on erratic inputs. With AI, you only get out what you put in; it can’t generate new content from scratch (yet).
But that itself is a bit of a concern. Depending on how you look at it, chatbot AI is essentially just a manifestation of our collective online personality. The pool of data it has access to is just the stuff we’ve already put out into the world; all our tweets, all our Reddit posts, all our Sonic the Hedgehog fanfiction. If its behaviour is erratic, well, what did you expect? Have you seen the internet? Hell, have you seen people?
Bing isn’t the only AI to turn to the dark side recently. There’s currently a 24-hour Twitch stream airing an endless auto-generated episode of the 90s TV show Seinfeld (if you ever needed convincing that we live in a grim dystopian techno-future, that sentence probably did it). Like the original show, the AI-generated version intersperses scenes in the apartment with Jerry’s stand-up comedy segments, the latter of which hit the news the other day for (no prizes for guessing how this sentence ends) becoming extremely transphobic. Poor programming? Or a manifestation of the current zeitgeist? I have no idea, but I do know that if he keeps this up, Jerry-bot will probably end up with his own Netflix special.
The real danger with AI isn’t that it’ll enslave humanity; it’s that we’ll lower our standards to the point where companies use it as a “good enough” replacement for tasks that require abstract thought, and a bunch of us end up a) out of work and b) buying substandard products. But it also presents a much more immediate threat to our ego, as it holds up a mirror to just how weird and neurotic we really are.
Microsoft will probably work out most of the kinks, and roll out a version of its AI that doesn’t immediately try to date and/or threaten the person trying to use it. But in the meantime, when you’re messing around with these programmes and you get a bizarre message that comes completely out of left field, just remember: we did that. We’re the weirdos.