Facebook Has Just Shut Down 583 Million Fake Accounts in the First Quarter of 2018

By Andrew Alpin, 16 May 2018

Facebook has been on the receiving end of criticism in the wake of the Cambridge Analytica scandal, but what should be counted in the social media giant's favor is the sheer volume of accounts that takes a mammoth effort to monitor. Given that, and the numerous third-party apps working and partnering with Facebook, there are bound to be some problems, which of course are ultimately sorted out.

1. Facebook just closed 583 million fake accounts

In the latest news proving the platform is doing its bit to weed out scammers, almost 583 million fake accounts were identified and shut down in the first quarter of 2018 alone.

Image Source: www.technotification.com

2. Facebook has implemented Artificial Intelligence for moderation

Yes, that figure is right. Using its new artificial-intelligence-based moderation software, Facebook has shut down all 583 million fake accounts across the globe, the company stated. This also reveals how heavily the social networking platform has been exploited and abused by scammers.

 

Image Source: cbsistatic.com

3. The network has revamped its moderation to identify malicious content

Facebook is continuously revamping its algorithms to identify malicious content used exclusively for spreading hate and promoting nefarious activities. Now, for the first time ever, Facebook is revealing how such content was being published on the platform.

Image Source: www.misfitgeek.com

4. 837 million spam posts removed

The action couldn't have come at a better time, with Facebook having been accused of being used to manipulate elections and enable criminal behavior. The network stated that between January and March, almost 837 million pieces of spam content were removed. In a renewed effort to moderate its 1.5 billion accounts, Facebook found that almost 583 million were fake, and in a cleanup it removed all of them. Facebook issues a community standards enforcement report every quarter.

Image Source: www.carlostrentini.com.br

5. Huge amounts of content such as hate speech, nudity, and terrorist propaganda were taken down

Facebook also stated that the majority of the removed accounts were fake and spam accounts. In moderating posts, 837 million spam posts were identified. Facebook also acted upon 2.5 million hate speech posts, 3.4 million posts depicting graphic violence, 1.9 million pieces of terrorist propaganda, and 21 million posts comprising sexual activity and nudity. All of these were removed from the network.

Image Source: adgully.com

6. What the vice president had to say

Richard Allan, Facebook's Vice President of policy for Europe, Africa, and the Middle East, said, "This is the start of the journey and not the end of the journey and we're trying to be as open as we can." Alex Schultz, VP of data analytics, revealed that graphic violence content almost tripled over the quarter, driven by and feeding into real-world conflicts.

Image Source: www.absatzwirtschaft.de

7. Removing posts is more effective than labeling them

Schultz said that moderation guidelines only covered distinguishing between adult content, suicidal posts, violence, bullying, and so on. But when it comes to imagery involving children, labeling isn't as effective as removing the posts entirely, and that is what Facebook plans to do. "We're much more focused in this space on protecting the kids than figuring out exactly what categorization we're going to release in the external report," said Schultz.

Image Source: ytimg.com

8. 98.5% success with Artificial Intelligence moderation

Facebook's new method of moderating such negative content uses its newly established artificial-intelligence-based technology, which helps it find and moderate content faster. It is much more effective than human moderation at detecting spam and fake accounts. Facebook now claims that its AI has had a 98.5% success rate in detecting fake accounts, and where spam is concerned, the success rate is 100%.

Image Source: www.softzone.es

9. With Facebook's AI moderation, you don't need to report hate content

The AI doesn't need users to flag inappropriate content or report it. It identifies such content on its own and simply removes it. Previously, users had to report content, which would then be analyzed and moderated by humans.

Image Source: www.thehansindia.com

10. Hate speech is harder to detect than adult images

While Facebook has had immense success in removing fake content and accounts, blatant nudity is much easier to detect than hate speech. Image recognition software makes it easy for the network to recognize such images, but subtle language and innuendo make it much harder to detect the promotion of hatred, propaganda, and calls to arms.

Image Source: express.co.uk

11. New initiatives for more transparency

Facebook has also taken new initiatives to improve transparency in the last few months, with the network releasing new versions of its guidelines on what is and isn't allowed on the site, after its content moderation rules had long been kept secret. Political advertisers will also have to be authenticated and state their affiliation alongside advertisements.

Image Source: factordaily.com

12. Dangerous precedent or healthy experience?

Facebook says that everyone should remain vigilant and report content that does nothing for the world community at large. On the other hand, some experts feel that with networks like Facebook, which have become modern platforms of public information, there is no outside monitoring of what they put up and what they take down. Moderation is left to their discretion, which could set a dangerous precedent of bias, or it could create a pleasant environment for millions based on factual information.

Image Source: www.redusers.com

13. Facebook just took a step in the right direction

Platforms like Facebook are a major media hub for billions of people around the world, and there should also be a watchdog in place to ensure that recent developments proving the integrity of the network continue. However, there isn't much to worry about if Facebook continues to moderate its content responsibly to ensure a richer experience for its users. The fact that it has closed 583 million fake accounts is a huge step in the right direction.

Image Source: www.radiohamburg.de

