I originally posted this on my Facebook feed. I understand the irony of posting complaints about FB’s algorithms on a feed controlled by those same algorithms.
Facebook created the misinformation problem it now has, but it won’t take real steps to fix it.
Facebook helped create the environment that feeds the echo chambers spreading misinformation. Their current attempts to fix this are doomed to failure and could have unintended consequences. And they will not fix the root cause of the issue, because doing so would hurt them financially.
Facebook desperately wants to squash misinformation. Their current method is to flag any post on certain subjects with a warning and a link to what Facebook thinks is accurate information. There is no review of the posts that get flagged. Write any post, true or false, with certain keywords and the warnings come up.
This has the consequence of lumping true and untrue content together. Every post about these subjects becomes suspect. But Facebook will make sure to tell us all the truth.
An open platform should not set itself up as the arbiter of what is true and correct. Aside from the fact that it can be wrong, this can lead to two unwanted results.
First, there are those who want FB to be regulated. This move to self-regulate content sends a signal that content on these open platforms should be regulated. As objectionable as it is to have FB tell me what is true, imagine a subcommittee made up of government employees or, worse, partisans appointed by whatever party happens to be in charge telling you what is true or false.
The second undesired result is that FB becomes a publisher, not a platform. The natural next step beyond telling users what is true is actively stopping users from seeing what is false.
FB becomes a publisher, and a publisher is liable for what it allows on its channel. And that goes beyond political speech: all sorts of copyright issues come into play. IP owners may not sue a 20-year-old for posting their property without permission, but there’s money to be made in suing the publisher that carries it.
The biggest problem is that these attempts to stop the spread of misinformation attack a symptom of a problem Facebook created and amplified with its own algorithms.
FB created news feeds which “feed” our own confirmation biases and create echo chambers for misinformation. The way to fix that is not for FB to tell me what is fact or fiction, but to change the algorithms to show a wider range of ideas. That addresses a core issue with the platforms in general.
People form common-interest ties. They post common-interest content. FB sees that you interact with that content and with those friends and pages, so they show you more of that content. Facebook bragged about this change a few years ago: they are showing us more of what we like and less of what we don’t.
FB would say this makes your experience on the platform better. It also makes FB more profitable.
Companies buy exposure on my newsfeed from FB, and they enjoy a very targeted approach to buying that space. If Facebook can narrow the types of posts I see, which reflect the sorts of interests I have, they can offer advertisers a better deal. If I see posts and information across 100 interest areas, and interact with a broader range of people and pages, companies have to spend more, across a broader range, to get me to buy their stuff. If FB can narrow that range to 75 or 50 areas of interest, their ad placements become more effective. Companies buy more ads and FB makes more money.
They have been doing this for years. Here’s how the practice led to the rapid sharing of what people consider problematic information. The medium itself creates transmission problems.
Social media’s inherent requirement to distill complex, nuanced content down to simpler ideas comes into play. The “TL;DR” (too long; didn’t read) response was created because reading long and complex information online is hard. (Thanks for reading this long and complex content on the internet.) FB needs us to keep scrolling so we can see more ads, so they prioritize images and videos and downplay text. Any post or comment over a few sentences gets shortened with a “see more” link, so you can quickly scroll past it.
The result is complex issues reduced to memes and emotional entreaties. Now add the FB algorithm.
So person A has their friend group. A political meme gets shared from a page, and several people in that group share it. FB’s computers take note that content from that page was popular with this group of people. Meanwhile, another meme that didn’t fit the group’s biases was seen but not shared. FB notes that that content was not popular.
Now, when that first page posts something, the algorithm doesn’t know whether it’s true or not. It just shows the content to the group. Meanwhile, content with a contrary opinion from the second source is not shown to them.
This goes on for literally years. Information that the group likes and that affirms their biases is reinforced to FB as what should be in their feed. Contrary information is reduced. Because FB shows us what we like, we eventually end up in an echo chamber. Ideas we welcome get reshared, commented on, and liked. Ideas we don’t like get seen less often.
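To make that feedback loop concrete, here is a toy sketch in Python of an engagement-weighted feed. The page names, scores, and update rule are all made up for illustration; this is not Facebook’s actual code, just the general shape of “show more of what got engagement, less of what didn’t.”

```python
# Toy model of an engagement-weighted feed (illustrative only, not Facebook's real ranking).
# Every page starts with the same affinity score. Engagement nudges a page's score up,
# being ignored nudges it down, and the feed only shows the top-scoring pages.

from collections import defaultdict

scores = defaultdict(lambda: 1.0)  # page -> affinity score for one user or group

def record_interaction(page, engaged):
    """Update a page's affinity after the group sees one of its posts."""
    scores[page] *= 1.2 if engaged else 0.8

def build_feed(pages, slots=3):
    """Fill the limited feed slots with the highest-affinity pages only."""
    return sorted(pages, key=lambda p: scores[p], reverse=True)[:slots]

pages = ["meme_page", "contrary_page", "news_page", "hobby_page"]

# Simulate years of scrolling: the group shares what the meme page posts
# and scrolls past the contrary page, while the other pages get no signal.
for _ in range(50):
    record_interaction("meme_page", engaged=True)
    record_interaction("contrary_page", engaged=False)

print(build_feed(pages))
# -> ['meme_page', 'news_page', 'hobby_page']: the contrary page has fallen out of the feed.
```

After enough iterations, the contrary page’s score is so low it never makes the cut, even though no one ever unfollowed it.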
Now election time comes. FB’s algorithm cannot distinguish fact from fiction, so it shares both true and false information with the group. And since FB has learned that contrary opinions don’t get attention, they cut them out of the feed.
One day a piece of untrue information is shared. It fits what the group has previously interacted with, so the algorithm shows it to the group. No contrary information is provided. Person A sees that lots of their friends have shared the info. And since it fits a preferred bias, and little to no opposing view is shown, person A believes it. And shares it, too.
Cut to today. We’ve got rampant misinformation and questionable sources being shared on Facebook. How do we fix it?
Facebook labels anything on the subject as potentially false and provides links to what FB thinks is true. This is bad policy.
To really fix it, FB has to stop tailoring newsfeeds the way it does. They need to broaden what is shown to users. Any page or person I have shown any interest in, by liking or friending them, should have the same opportunity to show up on my feed as those I regularly like or comment on.
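In terms of the toy model above, that change amounts to no longer ranking purely by accumulated affinity. A minimal sketch, assuming the same made-up page list, might fill feed slots by sampling across everything the user has liked or friended:

```python
# Sketch of the broader feed idea (my reading of the fix, not an actual Facebook feature):
# ignore accumulated affinity and fill feed slots by sampling across every page or person
# the user has ever liked or friended, so contrary sources keep showing up.

import random

pages = ["meme_page", "contrary_page", "news_page", "hobby_page"]

def build_broader_feed(followed, slots=3):
    """Pick feed slots uniformly at random from everything the user follows."""
    return random.sample(followed, k=min(slots, len(followed)))

print(build_broader_feed(pages))
# -> a different mix each run, with contrary_page as likely to appear as any other page.
```

Engagement could still break ties, but every source the user ever chose to follow keeps a real chance of appearing.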
This will cut into the advertising dollars FB uses to operate. And that is why we see ham-handed band-aids like what is happening now, instead of real changes to the root causes of the issue.
Now, this isn’t all FB’s fault. We will still have confirmation bias and a tendency to resist what we don’t agree with. But FB can help by not reinforcing those tendencies. What they are doing now is wrongheaded and will end badly.