We have to do something about anonymous trolling, even if the government won’t
In the likely absence of state intervention, and given inaction on the part of the social network providers, users may have no choice but to take matters into their own hands, thinks Chris Blackhurst
Some time soon, the Online Safety Bill will be presented to the House of Commons.
There’s no doubt it will be a watered-down version of what was originally proposed. At her first Prime Minister’s Questions, Liz Truss indicated as much, saying she would be bringing forward the legislation, but that there would be “tweaks” to protect free speech.
Michelle Donelan, the new DCMS Secretary, has reiterated the changes will focus on loosening planned restrictions for adults, not children. “The bits in relation to children and online safety will not be changing – and that is the overarching objective of the bill and why we put it in our manifesto.”
Earlier this week, Donelan stressed on Radio 4’s Today programme she had barely settled into her job. “I will be looking at the bill in the round – but my clear objective is to get this bill back to the house quickly, to edit the bit that we’ve been very upfront that we’re editing, and to make sure that we get it into law, because of course we want it in law as soon as possible to protect children when they’re accessing content online.”
Declaration of interest: I’m a supporter of Clean Up The Internet, the campaigning organisation. We’re pushing, and hoping, that the statute will not be neutered too much, and that one area in particular that we’ve made a target, the misuse of anonymous accounts, will remain.
If so, users will be able to limit the material they receive only to verified accounts. That will go some distance towards removing the cowardly, nasty trolling that many people have to endure.
It seems obvious to me, though, that in their desire to protect free speech, Truss and her colleagues will only partially address the issue of toxicity towards adults, and that only if we’re fortunate. They may move against anonymity, but even if they do, that still leaves the issue of hateful messages sent from identifiable sources.
In the absence of state intervention and, let’s face it, inaction on the part of the social network providers, who would rather not be limiting traffic at all, users may have no choice but to take matters into their own hands. Fortunately, for those who can afford it, new services are springing up, offering to act as a filter.
One such is the British company Arwen, which is just celebrating its second birthday. Founded by Matthew McGrory, who used to run IT for Fulham Football Club and Brands Hatch, before going into app development, Arwen uses AI to continually scan its subscribers’ Twitter, Facebook and Instagram feeds. Artificial intelligence searches every comment they receive in real time, looking for 24 different types of abusive and unwanted content across 29 languages.
The comments are classified as “safe”, “suspect” or “severe”. “Severe” comments are hidden from view in under a second, “suspect” comments are examined more closely, and “safe” speaks for itself. Repeat offenders can be blocked and, if needs be, reported to the network operator and the police.
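The three-tier triage described above can be sketched in a few lines. This is a hypothetical illustration only: Arwen’s actual model, thresholds and internals are not public, so the scores, cut-offs and names here are assumptions.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical cut-offs; the real service's AI model and thresholds are not public.
SUSPECT_THRESHOLD = 0.5
SEVERE_THRESHOLD = 0.9

@dataclass
class Verdict:
    label: str    # "safe", "suspect" or "severe"
    hidden: bool  # "severe" comments are hidden from view immediately

# Track how often each author crosses the line, so repeat offenders
# can be blocked or reported, as the article describes.
offender_counts: Counter = Counter()

def triage(author: str, toxicity_score: float) -> Verdict:
    """Map a toxicity score (0.0-1.0) to the three-tier verdict."""
    if toxicity_score >= SEVERE_THRESHOLD:
        offender_counts[author] += 1
        return Verdict("severe", hidden=True)
    if toxicity_score >= SUSPECT_THRESHOLD:
        return Verdict("suspect", hidden=False)  # held for closer examination
    return Verdict("safe", hidden=False)
```

In practice the score would come from a machine-learning model scanning each incoming comment in real time; the point of the sketch is simply the routing: hide the worst instantly, queue the doubtful, pass the rest.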
The subscriber gives permission to collect their data, 24 hours a day, 365 days a year, and then Arwen goes to work, says McGrory. They’re looking for hateful language but also unsolicited material that may cause offence, like bots popping up in a message promoting porn or cryptocurrencies. “Brands, corporates, celebrities, individuals, they don’t want the association with that sort of stuff, they want it removed.”
The AI can also earmark escalating threats. “We’re offering free trials to MPs at the moment, to help them locate bad actors,” he says.
It’s clever technology able to deal with ambiguities. “It can flag the particular use of a word and yet pass the same word in another context. So, for instance, there can be mention of a black-tie event, which is harmless, but the same person can receive a message calling them a ‘black beep beep’ which is clearly offensive.”
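The distinction McGrory draws, passing a word in one context while flagging it in another, is exactly what a naive keyword filter cannot do. A toy rule-based sketch of the idea (not Arwen’s actual approach, which uses AI rather than hand-written rules; the word lists here are invented for illustration):

```python
# Phrases in which a flaggable word is clearly benign (hypothetical list).
BENIGN_COLLOCATIONS = {"black-tie", "black coffee", "black belt"}

# Words that, immediately before the flagged term, suggest it is aimed
# at a person (hypothetical list).
INSULT_MARKERS = {"you", "stupid", "dirty"}

def is_offensive(text: str) -> bool:
    """Flag 'black' only when context suggests it is used as a slur."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BENIGN_COLLOCATIONS):
        return False
    words = lowered.split()
    for i, word in enumerate(words):
        if word.strip(".,!?") == "black" and i > 0 and words[i - 1] in INSULT_MARKERS:
            return True
    return False
```

So “invited to a black-tie event” passes, while the same word aimed at a person is flagged. A real system learns these distinctions from data rather than from lists, but the contrast shows why context, not the word alone, is what matters.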
At present, they’re claiming a 92 per cent success rate, but they expect that to rise to 95 per cent and higher still, meaning virtually all the material the client deems offensive is removed.
For individuals with up to 200,000 followers, Arwen charges £30 a month; for those with more (one customer is Lewis Hamilton with 43m followers) the fee climbs to a maximum of £1,000 per month.
The firm is still in its infancy, but the numbers are rising rapidly. So far, Arwen has signed up 140 social media accounts, including those of around 50 corporate brands. They include Premiership football clubs. In all, McGrory claims “we’re protecting 75.1 million people from toxicity, spam and unwanted content every minute of every day”.
It’s a pity that this type of service is only available to those who can pay. “The government is indicating it won’t address the toxic abuse side of things in the Online Safety Bill,” says McGrory. “The door is left open for us.”
For the wealthy, it’s a small price to pay for peace of mind, for knowing that you and your followers will not be subjected to hateful, nasty content.
Ideally, it should be available to all for free, but that is not going to happen. Still, this is better than nothing: Arwen is fulfilling a need and smartly filling a void, which, given the state of internet regulation, the intransigence of the social media giants and the government’s wavering position, is probably the best we can expect right now.