Last Monday, the 15th of May 2017, I was fortunate enough to attend a Workshop on Digital Misinformation organised as part of the 11th International Conference on Web and Social Media. The workshop was a huge success, and incredibly valuable to me and, as far as I can tell, to everyone who attended. I’m still buzzing with excitement a week later.
Following up from the workshop, I have several books’ worth of things to say. I will be working on those articles over the coming weeks (I’m a slow writer), but for now this post is just a quick brain dump of some thoughts and outcomes I want to share, to keep you up to date on what has happened and what I am now working on.
Meeting the Right People
First of all, my main objective was a complete success. I met people from Facebook, Google, Twitter, and Mozilla, and have opened a dialogue with them about our solution to misinformation online. It is just a first step, but it is a very important one.
The next phase of my efforts on the global rebuttal concept is almost entirely dependent on those organisations and others like them (the gatekeepers of all online information) buying into my vision enough to participate in the process. I’m building a solution for the entire web here, and that won’t work unless people encounter it everywhere they go online. So I need participation from the web giants. I don’t need any sort of commitment any time soon, but I do need someone to keep picking up the phone when I call.
No One Is Working On This
My single largest takeaway from the last few weeks is this: no one is working on this.
In preparation for the workshop I read through a number of the ‘Reconnaissance Readings’ documents and watched some panels from other related events, and could already see very clearly the single-minded focus that people working on fake news and misinformation all seem to share. To summarise it quite bluntly: “More good. Less bad.”
Aside from the general acknowledgement that “we should educate people/the youth better”, the only solution anyone in journalism, academia, or the tech sphere is working on with respect to misinformation and fake news is trying to improve how we determine what is or isn’t fake, so that we can suppress the spread of the bad stuff (less bad!) and amplify the true stuff (more good!).
This solution is so obviously and intuitively “the right solution” that, as far as I can tell, no one is even considering other options. There seems to be a complete blind spot to the possibility that this one solution (which honestly comes down to simply trying to control information) might in fact not be the solution at all, and that there might be other options we should be researching too.
I wrote about this before the workshop and have written critically about these trending-towards-censorship approaches in the past. What is different now, though, is that it is clear my lone voice here really is quite alone.
So, more than ever, I feel a duty to get my apparently unique perspective out there.
If I am wrong, then the world of experts should convince me pretty easily. Obviously they’ll have to get through my confirmation bias, but I’m pretty motivated to find out the truth here, since I am investing all of my time and resources into this project.
If I am right, however, then hopefully we can get more money and research time directed at the right targets, rather than spending that time and money bashing our heads against what are actually several-thousand-year-old philosophical questions obfuscated by the modern world’s complexity (i.e. what is true?).
The Scientific Method vs Communicating the Truth
With so many people working on various methods to programmatically discern true from false information, we had better be sure that the ultimate solution to the problem actually requires that sort of capability.
A system which relies entirely on critical analysis, like the one I advocate for, does not require any such knowledge. So perhaps we should be doing more research into whether critiquing everything helps. Does critiquing scientifically factual claims harm the readers of those critiques? Does the benefit of a system which repeatedly demonstrates methods of critical analysis outweigh the costs? There are so many unanswered questions in this space; it is a real shame that no one is even looking there.
It seems to me that the scientific method, scientific skepticism, the principle of falsification, and the Socratic method are all variations on one reliable concept: we don’t know what is true, and we never really can. But we can use a systematic process of constantly challenging every belief in a structured, public way until we slowly get rid of the clearly false beliefs, leaving only good ones behind. They may still be false, but at least we’d be less wrong than we were before.
It works for science. So why does everyone think that it can’t work for public communication of knowledge?
Why must the defenders of science and truth resort to religious-like dogmatic declarations of truth, as if the public can’t possibly reach the correct conclusions on their own? Why does it feel like science defenders are the new Catholic papacy, firmly standing their ground that only they can be trusted to communicate the truth?
Perhaps it is time for a reformation of all information communication?
Perhaps it is time that the Socratic method be set free once again, and authoritarian issuances of truth be challenged all the time, everywhere. Instead of winning by controlling what people believe, we win by teaching people how to think critically.
The Socratic Method – Systematised
To that end, I have tentatively decided to name the next version of the rbutr database Socrates.
I have not written much about this publicly yet, but this next phase has been my focus for many months now.
The plan is to start the system from scratch, coding the database and its rules in collaboration with Facebook, Google, Twitter, Mozilla, and Microsoft in particular. This will ensure that ‘Socrates’ can give them exactly what they need to find it valuable and reliable, while still achieving its simple goal: organising existing (and newly created) online content so that critiques are always available from the pages they critique.
Created as a non-profit organisation along the lines of the W3C, Mozilla, or MediaWiki, the goal is to have all information-delivering platforms (from browsers to social media to small websites) choose to use Socrates to get the best rebuttals for each URL they deliver to internet users, and integrate that content into their display – whatever that may look like.
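To make the integration idea concrete, here is a minimal sketch in Python of the core lookup a platform would perform: canonicalise the URL it is about to display, then fetch any known rebuttals for it. Every name here (`RebuttalIndex`, `rebuttals_for`, and so on) is purely illustrative – this is not a real Socrates API, which does not exist yet.

```python
# Hypothetical sketch only: 'RebuttalIndex' and its methods are illustrative
# names, not the real Socrates API (which is still on the drawing board).
from urllib.parse import urlsplit, urlunsplit


def canonicalize(url: str) -> str:
    """Normalise a URL so trivially different forms of the same page match."""
    parts = urlsplit(url.strip().lower())
    netloc = parts.netloc[4:] if parts.netloc.startswith("www.") else parts.netloc
    path = parts.path.rstrip("/") or "/"
    # Treat http/https as the same page, and drop any #fragment.
    return urlunsplit(("https", netloc, path, parts.query, ""))


class RebuttalIndex:
    """Maps a claim URL to the URLs of pages that critique it."""

    def __init__(self) -> None:
        self._index: dict[str, list[str]] = {}

    def add_rebuttal(self, claim_url: str, rebuttal_url: str) -> None:
        self._index.setdefault(canonicalize(claim_url), []).append(rebuttal_url)

    def rebuttals_for(self, url: str) -> list[str]:
        """What a platform would call for every URL it is about to display."""
        return list(self._index.get(canonicalize(url), []))
```

A platform integrating this would call `rebuttals_for()` on each link before rendering it; `http://www.example.com/claim/` and `https://example.com/claim` canonicalise to the same record, so a rebuttal registered against one is found via the other.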
This is not a small goal I have set for myself, but it is achievable. And that’s all I need to make it happen. After all, if it is the only option that will work as the long-term solution against the spread of misinformation, then someone has to do it.
The next major goal is fundraising. I think we can build the new version of the platform for around $150,000. That, plus organisation registration, management, and other admin costs of around $100,000, gives me a goal of $250,000 to take this vision to a fully functional alpha, ready to test with partner platforms – likely all deliverable within one year of funding.
Virtually everything I am doing now is to secure that funding.
My immediate steps consist of formalising my roadmap, drawing up a White Paper description of what Socrates is and how it will work, and writing a number of articles arguing my positions on this issue which I will publish to the misinformation fighting crowds.
I will continue to work with our academic partners to design and conduct better experiments to challenge my assumptions and validate target outcomes, and I will push for more academics to do the same.
Follow Our Progress
Email me if you’d like to join us in our Slack channels, where we all work on making this stuff happen.
Thanks for reading!