Is Facebook’s Anti-Abuse System Broken?
Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company's "community standards," the social media giant's technology appears to be woefully inadequate.
Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.
However, the titles, images and postings available on each group's front page left little doubt about their true purpose: Selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.
To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing via email a spreadsheet detailing each group, which concluded that the average length of time the groups had been active on Facebook was two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.
That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.
In short order, some of the groups I reported that were removed re-established themselves within hours of Facebook's action. Instead of contacting Facebook's public relations arm directly, I decided to report those resurrected groups and others using Facebook's stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here's a snippet from those replies:
Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter's reporting system to successfully sideline accounts of specific targets.
Or it could be that I simply didn't click the proper sequence of buttons when reporting these groups. The closest matches I could find in Facebook's abuse reporting system were "Doesn't belong on Facebook" and "Purchase or sale of drugs, guns or regulated products." There was/is no option for "selling hacked accounts, credit cards and identities," or anything of that sort.
In any case, one thing seems clear: Naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook's own formal abuse reporting process. So that's what I did on Thursday. Here's an example:
Within minutes of my tweeting about this, the group was gone. I also tweeted about "Best of the Best," which was selling accounts from many different e-commerce vendors, including Amazon and eBay:
That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today. But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they'd received the same "does not violate our community standards" reply from Facebook after reporting other groups that clearly flouted the company's standards.
Pete Voss, Facebook's communications manager, apologized for the oversight.
"We're sorry about this mistake," Voss said. "Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user's reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from."
Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn't doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content. It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year.
"But right now, those moderators can only react to posts Facebook users have flagged," writes Will Knight for Technologyreview.com.
Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or "AI" technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by the perception that doing otherwise is simply bad for its business model.
Make ISO from DVD
In this case I had an OS install disc that needed to be available on a virtual node with no optical drive, so I had to transfer an image to the server to create the VM.
Find out which device the DVD is:

    lsblk
Output:

    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda               8:0    0 465.8G  0 disk
    ├─sda1            8:1    0     1G  0 part /boot
    └─sda2            8:2    0 464.8G  0 part
      ├─centos-root 253:0    0    50G  0 lvm  /
      ├─centos-swap 253:1    0  11.8G  0 lvm  [SWAP]
      └─centos-home 253:2    0   403G  0 lvm  /home
    sdb               8:16   1  14.5G  0 disk /mnt
    sr0              11:0    1   4.1G  0 rom  /run/media/rick/CCSA_X64FRE_EN-US_DV5
Therefore /dev/sr0 is the device to be made into an ISO.
I prefer simplicity, and sometimes deal with the fallout after the fact; however, I've repeated this countless times with success:

    dd if=/dev/sr0 of=win10.iso
Where if= is the input file and of= is the output file.
I chill out and do something else while the image is being copied/created, and the final output:

    8555456+0 records in
    8555456+0 records out
    4380393472 bytes (4.4 GB) copied, 331.937 s, 13.2 MB/s
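Since dd makes a raw byte-for-byte copy, the result can be sanity-checked by comparing checksums of the source and the image. A minimal sketch, using an ordinary file to stand in for /dev/sr0 (the real check is identical apart from the device path):

```shell
# dd copies its input byte-for-byte, so the image's checksum should match
# the source. An ordinary file stands in for /dev/sr0 here; with a real
# disc the same comparison applies, provided the drive reads it cleanly.
printf 'fake disc contents' > source.img
dd if=source.img of=copy.iso bs=1M
# The two checksums should be identical
md5sum source.img copy.iso
```

On a real disc a mismatch does not always mean a bad copy — some drives return run-out padding past the end of the filesystem — but matching checksums are a good sign the image is usable.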
Recreate PostgreSQL database template with a new encoding

First, unmark template1 as a template so it can be dropped:

    UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
Now we can drop it:

    DROP DATABASE template1;
Create the database from template0 with a new default encoding, mark it as a template again, then connect to it and freeze it:

    CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
    UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
    \c template1
    VACUUM FREEZE;
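The rebuild can be verified from psql. This is a sketch, not part of the original procedure; it relies only on the standard pg_database catalog and the built-in pg_encoding_to_char() function, which maps the catalog's numeric encoding to its name:

```sql
-- Confirm template1 was recreated with the new encoding and is a template again
SELECT datname, datistemplate, pg_encoding_to_char(encoding) AS encoding
FROM pg_database
WHERE datname = 'template1';
```

If the recreate worked, datistemplate is true and the encoding column shows the name the new template was created with.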