Security journalist Brian Krebs identified some 120 closed cybercrime and scam groups on Facebook in just two hours of research on April 12. Although Facebook deleted them hours after they were reported, further research would likely turn up more, as few of these groups bother to hide their schemes and most promote them openly.
“We investigated these groups as soon as we were aware of the report, and once we confirmed that they violated our Community Standards, we disabled them and removed the group admins,” said Pete Voss, Facebook’s communications director. “We encourage our community to report anything they see that they don’t think should be on Facebook, so we can take swift action.”
The deleted groups had a combined membership of roughly 300,000 people communicating in English, and Krebs believes many more groups are likely operating in other languages. Illegal activity appears to be flourishing on the platform: the groups facilitated spamming, wire fraud, account takeovers, phony tax refunds and DDoS-for-hire services. Advance-fee (419) scams and botnet creation tools were also identified.
A significant majority of groups advertised the sale and use of stolen credit and debit card accounts, while others promoted mass-hacking techniques to take over online banking services and accounts at companies such as Amazon, Netflix, PayPal and Google.
On average, the groups had been active for about two years. To be accepted, prospective members were asked to take part in a number of suspicious activities.
“Some had existed on Facebook for up to nine years; approximately ten percent of them had plied their trade on the social network for more than four years,” Krebs writes.
Although some groups openly promote cyber fraud and countless other illegal activities, in clear breach of Facebook’s Community Standards, the social network lacks an automated tool to check them for abuse and appears to rely on users to report them.
“As technology improves, we will continue to look carefully at other ways to use automation,” Facebook concluded. “Of course, a lot of the work we do is very contextual, such as determining whether a particular comment is hateful or bullying. That’s why we have real people looking at those reports and making the decisions.”