Facebook builds bots so it can watch how they harass each other
Each bot has a different personality, so Facebook can prepare for behaviour from malicious users
Facebook has revealed new information about the bots it uses to simulate bad human behaviour.
The bots, part of a system called Web-Enabled Simulation (WES), use artificial intelligence to mimic the behaviour of Facebook users, such as liking posts and sending friend requests.
Each bot is given a different personality type: some act as scammers seeking out victims, while others are modelled as users more susceptible to scams.
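Facebook has not published how these personalities are encoded, but conceptually each bot can be thought of as a small bundle of behavioural parameters. A minimal Python sketch, with every name and number invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class BotProfile:
    """Hypothetical personality parameters for one simulated user."""
    scam_attempt_rate: float    # how often the bot tries to scam others
    gullibility: float          # how likely the bot is to fall for a scam
    friend_request_rate: float  # how actively the bot sends friend requests

# Two illustrative personality types: a predatory bot and an easy target.
SCAMMER = BotProfile(scam_attempt_rate=0.8, gullibility=0.1, friend_request_rate=0.9)
EASY_TARGET = BotProfile(scam_attempt_rate=0.0, gullibility=0.7, friend_request_rate=0.3)

def accepts_scam(profile: BotProfile) -> bool:
    """Sample whether a bot with this profile falls for a scam attempt."""
    return random.random() < profile.gullibility
```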
Now the bots can run on a parallel version of Facebook, with the company’s researchers studying them in order to combat abuse on its platforms.
Facebook’s team has engineered situations in which bots interact with other bots, with the scammer bots rewarded for acquiring more bots to scam.
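That reward structure resembles reinforcement learning, though Facebook has not released its training code. A toy sketch under that assumption, with a simple hill-climbing update standing in for whatever learning rule the company actually uses:

```python
import random

def run_episode(attempt_rate: float, victim_gullibilities: list[float]) -> int:
    """One simulated episode: the scammer bot earns one reward per bot it scams.

    Both parameters are invented for illustration; Facebook has not published
    its actual reward design.
    """
    reward = 0
    for gullibility in victim_gullibilities:
        if random.random() < attempt_rate:      # the bot decides to attempt a scam
            if random.random() < gullibility:   # the target bot falls for it
                reward += 1                     # one more bot acquired to scam
    return reward

# Crude hill climbing as a stand-in for a real learning rule: keep whichever
# attempt rate earns more reward over the same pool of victims.
rate = 0.5
for _ in range(100):
    candidate = min(1.0, max(0.0, rate + random.uniform(-0.1, 0.1)))
    victims = [random.random() for _ in range(50)]
    if run_episode(candidate, victims) >= run_episode(rate, victims):
        rate = candidate
```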
The simulation is called WW (“Dub Dub”) because it is a smaller recreation of the World Wide Web.
WW’s code is based on Facebook’s real code base, but it does not look like the website we know: the bots’ interactions, and their results, are recorded numerically rather than displayed through a graphical user interface (GUI).
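Facebook does not specify what those numeric records look like; one plausible shape, purely as illustration, is an append-only event log that researchers query statistically:

```python
import time
from typing import NamedTuple

class Event(NamedTuple):
    """A hypothetical numeric record of one bot action; nothing is rendered."""
    timestamp: float
    actor_id: int     # bot performing the action
    target_id: int    # bot on the receiving end
    action_code: int  # e.g. 0 = like, 1 = friend request, 2 = scam message
    outcome: int      # e.g. 0 = ignored, 1 = accepted

log: list[Event] = []
log.append(Event(time.time(), actor_id=17, target_id=42, action_code=2, outcome=1))

# Researchers would aggregate records like these statistically, not watch a UI.
scam_events = [e for e in log if e.action_code == 2]
acceptance_rate = sum(e.outcome for e in scam_events) / len(scam_events)
print(f"scam acceptance rate: {acceptance_rate:.0%}")
```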
The bots cannot interact with anything other than other bots, so real users will never encounter them on the main Facebook platform, but Facebook is trying to make their behaviour authentic enough to match that of real human users.
The benefits of this work are twofold. The first is scale: Facebook can run thousands of simulations without affecting real users.
The second concerns Facebook’s architecture, with the bots discovering new weaknesses. While they are currently trained on behaviour Facebook has seen before, they could potentially be trained to attempt things the company has not seen before.
This would give Facebook an advantage in combating malicious users, as the company could prepare for such behaviour before it happens.
Facebook has provided details of its bot simulation before, but shed more light on how it works in a recent roundtable.
“We need to train the bots to behave in some sense like real users,” Mark Harman, professor of computer science at University College London and a research scientist at Facebook, told VentureBeat.
“We don’t have to have them model any particular user, they just have to have the high-level statistical properties that real users exhibit … But the simulation results we get are much closer, much more faithful to the reality of what real users would do.”
Harman compared the bots’ work to city planners trying to manage busy roads: observing how traffic moves in a simulation, then designing around it with measures such as speed bumps.
"We apply ‘speed bumps’ to the actions and observations our bots can perform, and so quickly explore the possible changes that we could make to the products to inhibit harmful behaviour without hurting normal behaviour,” Harman said to The Verge.
“We can scale this up to tens or hundreds of thousands of bots and therefore, in parallel, search many, many different possible [...] constraint vectors.”
“Unlike in a traditional simulation, where everything is simulated, in web-based simulation, the actions and observations are actually taking place through the real infrastructure, and so they’re much more realistic.”
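Harman does not define a “constraint vector” precisely, but the idea reads as a tunable set of limits on bot actions, with many candidate vectors scored in parallel. A hedged sketch of that search, with both the knobs and the scoring invented for illustration:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate(constraints: dict) -> tuple[float, float]:
    """Score one hypothetical 'constraint vector': the software equivalent of
    speed bumps. Returns (harm, friction): how much scamming still succeeds
    versus how much the limits inconvenience normal behaviour. The scoring
    here is a toy stand-in, not Facebook's actual metric."""
    max_requests = constraints["max_friend_requests_per_hour"]
    delay = constraints["message_delay_seconds"]
    harm = random.random() * min(max_requests, 20) / 20 / (1 + delay / 10)
    friction = delay / 30 + (0.5 if max_requests < 5 else 0.0)
    return harm, friction

# Thousands of candidate constraint vectors, evaluated in parallel, mirroring
# the scale Harman describes; the knobs themselves are invented.
candidates = [
    {"max_friend_requests_per_hour": random.randint(1, 50),
     "message_delay_seconds": random.uniform(0, 30)}
    for _ in range(10_000)
]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(simulate, candidates))
    # Keep the vector that best inhibits harm without hurting normal behaviour.
    best = min(zip(scores, candidates), key=lambda sc: sum(sc[0]))
    print("best constraint vector:", best[1])
```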
Harman says the team has already seen some strange behaviour from the bots, but told The Verge he would not go into detail over concerns that doing so would tip off scammers.
“There is a strong relationship with AI-assisted gameplay,” Harman told ZDNet. “Simulated game players are a little bit like our bots. We are automating the process of making the game ever more challenging, because we want to make it harder for potentially sophisticated and well-skilled bad actors.”