According to a Washington Post article from November 21st, Facebook’s own algorithms found that anti-white and anti-male hate speech was flagged on the platform more often than any other sort.
Interestingly, but also of little surprise, the three writers from the Washington Post piece – Elizabeth Dwoskin, Nitasha Tiku, and Craig Timberg – find this to be a problem because they apparently think anti-white and anti-male sentiments on the platform should largely be ignored.
After a long diatribe about posts and comments on Facebook that were racist or expressed bigotry toward minorities and those hosting non-heterosexual proclivities, the Washington Post article noted that there was a problem – because most of the hate speech caught by Facebook’s algorithms was anti-white or anti-male.
“Yet racist posts against minorities weren’t what Facebook’s own hate speech detection algorithms were most commonly finding. The software, which the company introduced in 2015, was supposed to detect and automatically delete hate speech before users saw it. Publicly, the company said in 2019 that its algorithms proactively caught more than 80 percent of hate speech.
But this statistic hid a serious problem that was obvious to researchers: The algorithm was aggressively detecting comments denigrating White people more than attacks on every other group, according to several of the documents. One April 2020 document said roughly 90 percent of ‘hate speech’ subject to content takedowns were statements of contempt, inferiority and disgust directed at White people and men, though the time frame is unclear. And it consistently failed to remove the most derogatory, racist content.”
Think about that for a moment – these Washington Post writers find it a “serious problem” that comments and posts expressing “contempt, inferiority and disgust directed at White people and men” made up “roughly 90 percent” of the “hate speech” Facebook detected and acted upon.
But the article gets worse in terms of what the writers put forth – they called the targeting of anti-white and anti-male posts and comments on Facebook a series of “errors” because the social media company employed a set of “race-blind” rules.
“One of the reasons for these errors, the researchers discovered, was that Facebook’s ‘race-blind’ rules of conduct on the platform didn’t distinguish among the targets of hate speech.”
Not only were the folks at the Washington Post upset about the aforementioned “errors”, but they were also perturbed that Facebook wasn’t actively taking action against posts or comments containing the n-word or c-word (namely because those words are often used in jest, or within racially identifying groups as terms of endearment).
“In addition, the company had decided not to allow the algorithms to automatically delete many slurs, according to the people, on the grounds that the algorithms couldn’t easily tell the difference when a slur such as the n-word and the c-word was used positively or colloquially within a community.”
Also, this ragtag group of Washington Post writers feels that Facebook was wrong for addressing posts on the platform such as “men are pigs” – characterizing the expression, which broad-brushes an entire gender, as “less harmful”.
“The algorithms were also over-indexing on detecting less harmful content that occurred more frequently, such as “men are pigs,” rather than finding less common but more harmful content.”
Make no mistake, these folks at the Washington Post, specifically Elizabeth Dwoskin, Nitasha Tiku, and Craig Timberg, want to revel in a world (and obviously on online platforms) where it is perfectly fine to denigrate white men for their race or gender while having their own sensitivities regarding race and gender catered to and protected from insult.
They want to wield with impunity the proverbial sword they simultaneously cry is being used to cudgel them – and they spell it out in black and white. Frankly, those are concerning sentiments.