Not Even a Rebuttal – The problem with fake news

This last year has revealed a deep problem with modern media.

With rbutr, I had hoped to fight back against misinformation, help stop the spread of lies, and bring nuanced debate back to complex issues.

What I never expected when creating rbutr was to find myself in a world where completely fabricated headlines would become a primary source of information for people.

rbutr was about bringing thoughtful critique and counter-points to biased thinking. It was meant to provide more information when key facts have been omitted. It was meant to provide nuance when complex issues have been presented simplistically.

It seemed like a valid goal. It felt so important. I was sure rbutr was an important piece of the puzzle against misinformation and false beliefs.

But instead, we now find ourselves in a position where this extra level of information provided by rbutr seems completely pointless. Why bother providing more facts and more nuance when the starting point was fabricated from the outset?

What debate is to be had when the other side isn’t even participating in the real world?

When a fabricated headline can be shared hundreds of thousands of times, but the rebuttal, a necessarily boring “this is not true”, gets a few hundred shares, you know the system is fundamentally flawed. And that flaw needs to be fixed before any amount of debate will have any impact at all.

The facts have fallen to the wayside as people have become obsessed with sharing what makes them feel good. And this is disastrous for all of us.

What is the harm?

This is a question I have struggled with over the last few years when trying to communicate the importance of correcting misinformation. It proves to be a challenging thing to quantify in a way which doesn’t create its own host of controversies.

It is easy to say that the headline “Pope Francis Shocks World, Endorses Donald Trump for President” is false, and to be 100% certain of that proclamation. Figuring out the consequences of that headline being shared 960,000 times on Facebook alone is much harder. Did it help Trump get elected? Maybe. How much? And what is the harm in that? Is it even harmful? Where do you begin to quantify such a thing?

On one hand, I can very easily say that electing a president who has incited a good deal of violence against minorities, indicated that his minister for education will be a creationist, and that his head of the EPA will be a climate change denier will be incredibly damaging to the education of America’s children and to the environment of the planet. But I have no idea how you would quantify that position, let alone sufficiently justify it.

It seems to me like developing a system for quantifying harm of misinformation may well be a valuable pursuit.

Perhaps such a metric would motivate companies like Facebook to take action on the spread of these fabrications?

Solutions?

I don’t like whingeing. And I hate people who sit on the sidelines criticising everything without ever offering potential solutions or viable suggestions. But I haven’t yet solved this problem. I guess in some ways, the problem has only just really revealed itself. We all knew it was there, just below the surface. But now it is out in the open.

The misinformation age really is here, and it is far more popular than anyone ever expected.

One Quick Suggestion for Facebook Though

Facebook has come under a lot of fire for making this situation possible. By allowing fabricated “news” to be spread unchecked throughout its platform, it is at least partially responsible for the spread of that misinformation. A lot of people are calling Facebook out because of this.

It seems like it wouldn’t be too hard for Facebook to implement a fact checking system for non-controversial issues. Pope Francis didn’t endorse Trump – there is no controversy there. Facebook should flag that article (and add the source website to a database of “doubtful news sources”) and either prevent people from sharing it, or highlight it as Untrue on the news feed. Hell, they could even start highlighting Satirical pieces while they are at it. How hard is it to group satirical news websites together and always tag them as Satirical, just to save people from misunderstanding the joke?
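As a rough sketch of how simple the non-controversial part could be, here is a minimal, hypothetical example of domain-based flagging in Python. The domain lists and labels are invented for illustration; a real system would draw on a maintained database of doubtful and satirical sources rather than hard-coded sets.

```python
from urllib.parse import urlparse

# Hypothetical, hard-coded lists for illustration only. A real system would
# maintain these as curated, regularly updated databases of sources.
DOUBTFUL_SOURCES = {"example-fabricated-news.com"}
SATIRICAL_SOURCES = {"example-satire-site.com"}


def classify_link(url: str) -> str | None:
    """Return a label to display next to a shared link, or None if no flag applies."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in DOUBTFUL_SOURCES:
        return "Untrue / doubtful source"
    if domain in SATIRICAL_SOURCES:
        return "Satirical"
    return None


print(classify_link("http://example-fabricated-news.com/pope-endorses-trump"))
# -> "Untrue / doubtful source"
```

The hard part, of course, is curating those lists responsibly, not writing the lookup.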

rbutr has a place in all of this too.

Facebook could easily check links against rbutr’s database and show the rebuttals for an article to users as they are about to share the rebutted post. Or it could show that rebuttals exist for articles directly on the news feed. Or, the lowest-impact option: always include the top rebuttal in that list of “People also shared” articles which comes up after you click on a link. This isn’t as effective as the other options though, because many people just read the headline and click the share button when it resonates with what they believe. They won’t get to see the rebuttal until after they have shared it.
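To make the share-time check concrete, here is a minimal sketch. The endpoint URL and response format are assumptions for illustration only, not rbutr’s actual API.

```python
import requests

# Assumed endpoint and response shape, for illustration only -- this is not
# rbutr's documented API.
REBUTTAL_LOOKUP = "https://rbutr.example/api/rebuttals"


def rebuttals_for(url: str) -> list[str]:
    """Return any rebuttal URLs recorded against the given article."""
    resp = requests.get(REBUTTAL_LOOKUP, params={"url": url}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("rebuttals", [])


def on_share_clicked(article_url: str) -> None:
    """Share-time hook: surface rebuttals before the post goes out."""
    rebuttals = rebuttals_for(article_url)
    if rebuttals:
        print("This article has been rebutted:")
        for rebuttal_url in rebuttals:
            print("  -", rebuttal_url)
        # ...then let the user confirm or cancel the share.
```

Even the lowest-impact option (appending the top rebuttal to “People also shared”) would reuse exactly this lookup; only where the result is displayed changes.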

All of these actions have an impact. And chipping away at the problem is better than hoping it will go away.
