Facebook is building a 'shadow' social network just for bots
'Bots must be suitably isolated from real users to ensure that the simulation does not lead to unexpected interactions with real users,' researchers warn
Facebook has developed a shadow social network inhabited entirely by bots in an effort to better understand how trolls and scammers operate on its platform.
The Web-Enabled Simulation (WES) was revealed in a research paper that explains how artificial intelligence sims that mimic human behaviour are being deployed on a hidden version of Facebook.
The researchers hope to learn through the bots' interactions how people abuse the social network to scam other users or exploit their personal information.
The simulation allows the bots to perform the same kinds of actions that a regular Facebook user can, such as liking posts and sending friend requests.
Each bot is modelled on different personality types that might use Facebook, meaning some may be built to seek out targets, while others will include traits that make them susceptible to scams.
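The behaviour described above can be pictured with a minimal sketch. This is purely illustrative: the class, method names, and personality labels are assumptions for the sake of the example, not Facebook's actual WES code, which has not been published.

```python
import random

# Hypothetical sketch of bots with differing personality traits.
# None of these names come from Facebook's WES system.
class SimulatedBot:
    def __init__(self, name, personality):
        self.name = name
        self.personality = personality  # e.g. "scammer" or "gullible"
        self.inbox = []

    def send_friend_request(self, other):
        # Bots can take the same actions a real user can,
        # such as sending friend requests.
        other.inbox.append((self.name, "friend_request"))

    def respond(self, sender, action):
        # A bot built to be susceptible to scams accepts requests
        # readily; a warier personality rarely does.
        accept_rate = 0.9 if self.personality == "gullible" else 0.1
        return "accept" if random.random() < accept_rate else "ignore"

# The simulation is isolated: only bots interact, never real users.
scammer = SimulatedBot("bot_a", "scammer")
target = SimulatedBot("bot_b", "gullible")
scammer.send_friend_request(target)
for sender, action in target.inbox:
    print(sender, action, target.respond(sender, action))
```

In this toy version, researchers would observe which interaction patterns emerge between predatory and susceptible personalities, rather than scripting the outcomes in advance.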
"It uses a software platform to simulate real-user interactions and social behaviour on the real platform infrastructure... Unlike traditional simulation, in which a model of reality is created, a WES system is built on a real-world software platform," the research paper states.
"The promise of WES is realistic, actionable, on-platform simulation of complex community interactions that can be used to better understand and automatically improve deployments of multi-user systems."
Facebook researchers developing the simulation said it would help detect bugs within the world's largest social network, which has around 2.5 billion users worldwide.
Thousands of different scenarios can run simultaneously within the simulation, which will be used to automatically recommend updates and changes that could improve the experience of human users.
Only lines of computer code separate the AI bots from real Facebook users, and the researchers noted the risk of the experiment spilling over into the public version of the social network.
"Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users," the paper states.
"Despite this isolation, in some applications bots will need to exhibit high end user realism, which poses challenges for the machine learning approaches used to train them."