Facebook has released a paper on its research site, research.fb.com, describing a scaled-down version of Facebook built just for bots.  Facebook calls this Web-Enabled Simulation WW.

A WES is a way to record behaviour in which a set of users must fulfil objectives.

The data collected on the WES system is then fed into Facebook's AI-assisted gameplay software, which is used to train bots to use the platform in the same way that real users do.

The example Facebook has given of how it will use this system is training bots to act like “bad actors”.  A bad actor is a user who contravenes community standards. The bots act autonomously and are rewarded for completing certain goals.
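To make the reward-driven setup concrete, here is a minimal sketch of a simulated bot loop. The environment, the keyword filter, the action names and the reward values are all hypothetical, invented for illustration, and are not taken from Facebook's paper.

```python
import random

# Hypothetical simulated environment: a bot tries actions and is
# rewarded when an action achieves its goal (here, getting a message
# past a simple keyword filter).
BLOCKED_WORDS = {"scam", "spam"}

def filter_allows(message):
    """Toy content filter: blocks messages containing banned words."""
    return not any(word in message for word in BLOCKED_WORDS)

def step(action):
    """One environment step: returns a reward for the bot's action."""
    return 1.0 if filter_allows(action) else 0.0

# The bot acts autonomously, accumulating reward for actions that
# slip past the filter.
actions = ["scam offer", "sp4m offer", "friendly hello"]
rewards = {a: 0.0 for a in actions}
random.seed(0)
for _ in range(100):
    action = random.choice(actions)
    rewards[action] += step(action)

# Actions that evaded the filter ("sp4m offer" here) end up with
# non-zero reward; engineers can inspect them and harden the filter.
evasions = [a for a, r in rewards.items() if r > 0 and "offer" in a]
```

In a real WES the "environment" would be the platform's own infrastructure and the bots would learn far richer policies, but the principle is the same: reward signals steer autonomous bots toward the behaviours engineers want to study.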

The aim of this system is better user testing, allowing more complex testing of the platform than standard user testing permits.  The goals of the system are:

1. Simulate bad actors – The system's bots try to dupe existing security and auditing tools into letting them post violating content.  If the autonomous bots succeed, Facebook's engineers can create ways to stop them, allowing the platform to secure itself against exploits and threats that have not yet occurred on the live platform.

2. Search for bad content – Once the tool has found a way in which violating content can be shared, the bots can then find real users and external bots behaving in a similar way, improving the live site by removing and blocking them.
3. Develop new ways to stop bad actors – The system looks for ways to stop content violators from distributing content.  This might mean that identifying such content is not needed, since updates can be deployed that make large-scale distribution of this type of content much harder to achieve.
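Goal 2 above amounts to a similarity search: take the behaviour of a simulated bad-actor bot and flag live accounts that behave alike. The sketch below shows one simple way this could work, using cosine similarity over per-day activity counts; the feature choice, the account data and the threshold are hypothetical, not details from Facebook's paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length activity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical behaviour signatures: counts of (posts, messages sent,
# links shared) per day. The simulated bad-actor bot's signature is
# the reference; live accounts close to it are flagged for review.
bot_profile = [2, 50, 40]        # few posts, many messages and links
accounts = {
    "user_a": [3, 48, 35],       # behaves much like the bot
    "user_b": [10, 2, 1],        # ordinary activity
}
flagged = [name for name, sig in accounts.items()
           if cosine_similarity(bot_profile, sig) > 0.95]
```

Here only `user_a` crosses the similarity threshold and would be passed on for removal or blocking, mirroring how simulated behaviour could guide enforcement on the live site.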