Wednesday, August 22, 2018

Facebook Assigns Users a 'Reputation Score', Helping to Sort Issue Reports

Critics of Facebook, and the power it now wields, will no doubt be cracking their knuckles as they prepare to respond to this one.
This week, The Washington Post reported that Facebook assigns users a “reputation score”, which predicts an individual’s trustworthiness on a scale from zero to one.
Before you jump the gun, the “reputation” we’re talking about here specifically relates to the reporting of false news stories, not a person’s general trustworthiness in everyday life.
As part of the platform’s efforts to slow the circulation of fake news, Facebook relies on user reports to detect such content. When a user flags something as false, Facebook then investigates – but in giving users that capacity, Facebook has also found that many people will flag news stories which aren’t necessarily incorrect.
As explained by Facebook’s Tessa Lyons:
“It’s not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.”
Indeed, as political debates have increasingly crowded into News Feeds, so too have questions about mainstream news outlets, and where the truth actually lies in such coverage.
To counter this, Facebook passes all reports of false or misleading news to third-party fact-checkers. If those fact-checkers find that a story flagged as false is actually true, that counts against the trustworthiness score of the reporting user, pushing their future complaints lower down the list.
“One of the signals we use is how people interact with articles. For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
The main impetus here is likely to reduce workload – given that Facebook has 2.2 billion active users, the number of reports it sees every day is significant. To ensure it actions the most relevant, most accurate reports first, Facebook can now de-prioritize flags from people who are clearly reporting for other purposes. Once a user has filed a few reports which are debunked, their reputation score drops, pushing their future reports way down the list.
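Facebook hasn’t published how this scoring actually works, but the mechanism described above – a 0-to-1 score derived from a user’s fact-checking track record, used to order the review queue – can be illustrated with a purely hypothetical sketch. All names, the smoothing choice, and the numbers below are invented for illustration:

```python
# Purely illustrative sketch – Facebook's actual implementation is not public.
# All class names, fields, and formulas here are invented assumptions.

from dataclasses import dataclass


@dataclass
class Reporter:
    confirmed: int = 0   # reports a fact-checker upheld as genuinely false news
    debunked: int = 0    # reports a fact-checker found to be accurate stories

    @property
    def score(self) -> float:
        """Trustworthiness on a zero-to-one scale, smoothed so brand-new
        reporters start near the middle rather than at either extreme."""
        # Laplace smoothing: one imaginary confirmed and one imaginary debunked.
        return (self.confirmed + 1) / (self.confirmed + self.debunked + 2)


def prioritize(reports):
    """Order flagged stories so reports from historically accurate
    reporters reach fact-checkers first."""
    return sorted(reports, key=lambda pair: pair[0].score, reverse=True)


careful = Reporter(confirmed=8, debunked=1)   # score = 9/11, roughly 0.82
spammer = Reporter(confirmed=0, debunked=9)   # score = 1/11, roughly 0.09
queue = prioritize([(spammer, "story A"), (careful, "story B")])
# The careful reporter's flag ("story B") now sits at the front of the queue.
```

The smoothing step reflects one design question any such system faces: a user’s first few reports shouldn’t pin them to a score of zero or one before there’s any real track record.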
Given this, the process actually makes perfect sense. Really, it’s less about a person’s “trustworthiness” as such, and more about their motivations – why they’re reporting such articles at all. If a malicious actor is simply trying to sway the conversation by reporting legitimate coverage, they lose that privilege.
But then, of course, the question will be raised as to who decides what’s true – who are these third-party fact-checkers now serving as arbiters of truth on The Social Network? That’s another debate, but in this usage, the process seems fairly logical, and isn’t about making individual value judgments.
And the fight against fake news is increasingly important – in another report published by The New York Times, researchers studying anti-refugee attacks in Germany found a direct correlation between racial violence and Facebook use.
As per the report:
“Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community - big city or small town; affluent or struggling; liberal haven or far-right stronghold - suggesting that the link applies universally.”
According to the research, wherever Facebook usage was “one standard deviation above the national average, attacks on refugees increased by about 50%.”
“The uptick in violence did not correlate with general web use or other related factors; this was not about the internet as an open platform for mobilization or communication. It was particular to Facebook.”
And while the report’s theory is that such incidents are largely related to Facebook’s algorithm and its propensity to reward engagement (polarizing posts see more interaction, and thus higher reach), at least part of that process could also be linked to the sharing of false news reports – questionable updates which lean heavily towards inflaming race relations specifically.
If Facebook can remove more of these, it can only be beneficial – and if a flagging system helps it sort the reliability of such reports and streamline the process, that should be encouraged.
Essentially, it’s important that the narrative around Facebook’s user rating system for reports doesn’t veer into the inevitable quagmire of overt censorship, because, at least based on the descriptions we have, it isn’t about that. It does seem like a valuable option, and with more research showing the connection between The Social Network and societal divides, it’s important that Facebook does all it can to address them.