Our Kafkaesque Journey With Facebook’s New And Improved Fake News Catcher
So the Social Network Beginning With F has been in a lot of hot water lately over the amount of fake news and spam circulating on its platform. In response, the Hoodie-Wearer-In-Chief announced that they'd be stepping up the number of hall monitors on hand to reduce the amount of fake news being posted.
This was always going to be a tricky solution to pull off because, as many observers have noted, deciding what is or isn't propaganda means walking a thin line. While there are plenty of stories that are clearly false ("Kim Kardashian revealed to be space alien"), there are many more that fall somewhere between opinion and fact. This is as true of politics as it is of health and wellness, which is actually the area that sees the most profitable fake news. (It should come as no surprise that far more people are willing to read about how avocados can improve their sex lives than about pedophiliac pizza parlors.)
What's more, Facebook's security chief Alex Stamos has noted that the increased vigilance is likely to produce more "false positives," or posts wrongly flagged as offensive.
Like this one, from TV[R]EV.
I posted it on my Facebook page last Thursday (and not as a “Public” post either, FWIW) and within minutes it had been flagged by the Facebook Police as spam and removed. (See screenshot below.)
This was indeed odd because (a) the headline was in no way offensive or spam-like; it was, if anything, fairly straightforward, and we were far from the only ones expressing that thought, and (b) the article was critical of Facebook, though only mildly so, suggesting that while things were going poorly for them, it would not take much for them to recover. Hmmmm.
Now, to their credit, about 15 minutes after we dutifully reported that the article was not, in fact, spam, they replied and said (in so many words) "Oops. Our bad. It's back up."
So an overall happy ending, but still…
There was a definite Kafkaesque feel to the entire interaction: we never learned why they thought it was spam, who made that call, or what made them instantly change their minds. Was it an overzealous new employee? A specific keyword they were searching for? The fact that it expressed a negative opinion about Facebook? Did someone else flag it for them? Was it because we used "Share This" to post it?
Truth is we’ll never know.
This is the same experience we've had with Twitter, where decisions about what content to take down and which accounts to close are also made in some faceless room by faceless people. To the point where, when a friend was being harassed by a troll for several days, we had to reach out to friends who worked at Twitter to get someone to do something about it. Which of course raises the whole issue of privilege and unequal access to power. Because, TBH, one of the first things that popped into our heads when this happened was: who do we know at Facebook who can help us? Few people have that option, which is what makes the whole Kafkaesque nature of the process even more distressing. (NB: In Twitter's defense, the aforementioned incident happened about three years ago. We hear they've improved their harassment procedures since then.)
Now, while TV[R]EV is made up of a number of very smart people, none of us are security experts, so we're not going to try to come up with an all-encompassing solution to the fake news problem. Other than to say that transparency is a virtue, particularly in situations like these, where people are likely to be upset, maybe even traumatized if they're being harassed. A clearly laid out, step-by-step explanation of how the situation is being handled, and an actual human to interact with, would go a long way toward removing the pain. The more Kafkaesque the journey, the harder it is to believe that Facebook is actually trying to do the right thing.
Think about it, Zuck.