Review of The Promise and Peril of Real-Time Corrections to Political Misperceptions by Garrett and Weeks

Research presented at the ACM Computer Supported Cooperative Work conference (CSCW 2013) shows that apps which highlight incorrect information and provide real-time corrections may actually be less effective than time-delayed corrections at changing the minds of people who are predisposed to agree with the incorrect information.

Here I will review the paper and then reflect on its findings from the perspective of our efforts at rbutr to create an alternative system of ‘error correction’: a semantic linkage between claim-rebuttal webpage pairs.

Summary of the Paper

Methods Used

An experiment was set up to test whether real-time corrections of inaccurate information on the internet help to correct the false beliefs being spread. To achieve this, the authors recruited a diverse sample of 574 participants and polled them about their familiarity with, and attitudes toward, five key political issues. The participants were then split into three groups:

  1. shown an article containing erroneous information, then asked to complete an image comparison task for three minutes, then shown the correcting information (delayed correction)
  2. shown the same article with the erroneous information highlighted, a warning that a third party had identified false information in the article, and the full text of the correcting information displayed at the bottom of the page (real-time correction)
  3. shown the article with the erroneous information and nothing else (no correction)

The participants then completed a brief concluding questionnaire to assess their final position on the relevant issue, allowing the researchers to compare how effective the correcting information had been under each of the three conditions.

Results

Figure: Results from Garrett and Weeks’ Promise and Peril study.

As expected, people shown a correction reported the reality of the situation more accurately than people shown no correction at all, and people shown the correction while viewing the content were more accurate than people who saw the correction after a delay. However, when the results were analysed according to how participants felt about the subject matter before seeing the information, the immediate correction appeared to reinforce existing beliefs more than it actually corrected false perceptions.

People who were already in favour of the correcting information answered very accurately, while people who were against it answered almost as poorly as the ‘no correction’ group. Participants who were neutral on the subject matter showed a slight improvement in accuracy over the delayed-correction group.

This suggests that people hold on to their prior beliefs more firmly when they encounter embedded corrections than they do with delayed corrections. Since the objective is to move people away from false beliefs and toward correct ones, it would appear that delayed correction may be more beneficial than real-time correction, or at least that real-time corrections need a different delivery approach.

Discussion

The concern that apps like Dispute Finder (on which the study was essentially modelled) and Hypothes.is may fail to actually change the minds of people who use them leads the authors to explore some ideas which may help future real-time correction services persuade people more effectively. The first recommendation is to focus on the source of the correction, since people trust some sources more than others, particularly sources that are ideologically similar to themselves. The second recommendation is to frame corrections in ways the recipient is more likely to respond to. The third and final recommendation is to have the correction follow a self-affirming experience, which has been shown to reduce bias in other studies.

Reflecting on the Results from rbutr’s Perspective

The implications of this paper may matter for rbutr’s future, so we need to consider its results and suggestions carefully. In doing so, however, it is also important to understand the limits of how this research applies to rbutr specifically. To start with, the research was done before the authors had any awareness of rbutr’s existence, so there is no reference to it. The inline highlighting and embedded corrections tested most closely resemble the workings of Dispute Finder (no longer available), and potentially Hypothes.is (not yet released), while rbutr has no inline highlighting and makes no claim to ‘correct’ information. As such, rbutr is already largely outside the scope of this paper, even though the end goals are much the same.

First Things First

The research conducted by Garrett and Weeks is really framed around an end-game perspective. That is, the idea of a diverse group of people consuming corrections in a meaningful way presupposes that the application has already succeeded in reaching a large and varied audience. It is worth noting that many attempts have been made to create a successful real-time correction service, including an attempt by Google itself, and none has succeeded yet. This research could be viewed as a test of whether all that work is even worthwhile, though the authors point out that they are not trying to discourage such efforts, but rather to help design better systems.

From a practical perspective, though, anyone working on this problem must necessarily start with the first problem: how do you reach critical mass? Without the people, it does not matter one bit how successfully you can change minds, because no one is reading the corrections anyway! If you do manage to achieve critical mass, however, you can experiment (split test, poll, research) with your ample user base and hopefully figure out how to deliver the corrections better.

That said, I don’t want to appear disparaging of this research at all; it is a crucial first step in setting the standards for when one of the real-time correction efforts finally achieves that critical mass.

The Problems with Growing a Correction Service Community

Although this paper is framed around the end game and bypasses the struggle to reach it, the method of delivering corrections may actually have a significant impact on early-stage growth. It is possible that an app which changes people’s minds very effectively is deeply unappealing to someone evaluating whether to install the app in the first place. Confirmation bias acts not only on how we interpret new information but also on how we seek it out, and if an app promises to change our beliefs, many people may avoid it simply because they don’t want their beliefs changed. And even if they do install it, when the app starts to contradict their firmly held beliefs, they are prone to dismiss the app itself as biased, behaviour already evident all over the internet (Conservapedia’s response to Wikipedia, for example, or even references 22 and 53 within this paper).

Garrett and Weeks also differentiate between verifiable facts (like Obama’s birthplace or whether vaccines are safe) and contestable ‘facts’ (aspects of competing political ideologies, for example). What this reasonable stance fails to take into account, though, is that a large percentage of the population refuses to accept that verifiable facts exist. They are ardent followers of the concept of ‘my truth’ vs ‘your truth’, and any attempt to share ‘The Truth’ with these people will almost certainly alienate them. If you alienate one of the key target markets that you hope will be persuaded by your corrections, then you have lost before you have begun.

Take No Position

rbutr has found a way around all of these problems by simply not taking a stance on any issue. rbutr makes no claims about the truth of any argument or rebuttal in our system, and allows users to add their own rebuttals and counter-rebuttals as they wish. This avoids alienating people, because it isn’t rbutr asserting the truth or claiming that they are wrong, but some author on some website that they probably don’t like anyway. rbutr gets to be the neutral mediator between two or more websites which may be completely biased, aggressive, condescending or arrogant, and none of that reflects negatively on rbutr itself, no matter how resistant the reader is to the message of the rebuttal. Not that we hope to attract aggressive and condescending rebuttals, but the internet is certainly full of them and there is no doubt they will be used. Our hope is that the more productive rebuttals, those with friendly and inclusive arguments, will be the ones most often voted to the top for any given claim, and thus the ones most commonly seen.
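That last point implies some form of community ranking of rebuttals for each claim. As a rough illustration only, here is a minimal sketch in TypeScript of rebuttals being ordered by votes; the type names and fields are hypothetical and are not taken from rbutr’s actual data model.

```typescript
// Hypothetical sketch: not rbutr's actual data model.
interface Rebuttal {
  url: string;   // page arguing against the claim
  votes: number; // community votes received by this rebuttal
}

interface Claim {
  url: string;           // page making the claim
  rebuttals: Rebuttal[]; // user-submitted rebuttals to that page
}

// Order a claim's rebuttals by votes, so the most productive responses
// are the ones a reader encounters first.
function rankedRebuttals(claim: Claim): Rebuttal[] {
  return [...claim.rebuttals].sort((a, b) => b.votes - a.votes);
}
```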

But before we get into the effectiveness of our system, why would anyone install rbutr? For the same reason that most people would uninstall a real-time correction service: because they have firmly held beliefs which they want to see defended. rbutr is a system which allows them to defend their beliefs. People who care passionately about a specific subject will feel motivated to use rbutr to reach those who disagree with them. They will install rbutr to rebut claims they view as wrong.

With the app installed, they will also find themselves exposed to rebuttals on every other subject, subjects in which they are less firmly entrenched and thus more open to changing their beliefs.

Misinforming People

Of course, the concern everyone immediately raises once they understand rbutr’s system is that rbutr can be used to spread misinformation just as easily as it can be used to correct it. To some extent that is true; rbutr will link rebuttals to good and bad content in exactly the same way. But is there really anything to worry about? I have addressed this concern many times in the past, and have added the basic response to rbutr’s FAQ page, which itself lists three other responses to the concern.

In light of this research, though, it is interesting to reflect on how rbutr differs from the standard ‘correction’ approach, which assumes a factually authoritative role for the application, with access to a truth its users may not have. There are few settings where this authoritative approach works: university lectures, for example, where people are open to new information, usually already broadly in agreement with the subject matter, and otherwise generally uninformed and ready to be filled in on the ‘facts’. But that is very different from changing the minds of people who already hold a belief.

Outside of educational institutions, people are rarely receptive to being ‘corrected’. So how do people change their minds? Of course they have to do it themselves, but what leads them there is, I believe, more often than not a process of ‘discussion’. The individual has to choose to engage with the subject matter. From there, they need to hear arguments from a different perspective to their own, but they also need to feel that their own position is represented and adequately responded to. Without the give-and-take of a discussion, they will feel ostracised and ignored, their beliefs and reasons dismissed as worthless.

For example, a discussion between a geocentrist and a modern, qualified astronomer is not invalidated by the fact that the geocentrist is arguing from a position of error. That error is a necessary component of the discussion taking place at all, and the chance of the geocentrist somehow convincing the astronomer that the Earth is the centre of the universe is incredibly slim. Nor would the astronomer worry about undecided bystanders listening in on the conversation and becoming geocentrists, because surely the astronomer’s experience, knowledge and arguments would be more compelling than those of the geocentrist, who has neither evidence nor experience.

This common, real-world sort of discussion is what rbutr in effect recreates by allowing websites to be connected to one another as claim-rebuttal pairs: a claim can be linked to a rebuttal, and that rebuttal can in turn be treated as a claim and linked to a further rebuttal, creating a virtual discussion between the authors. Visitors who land on any of the claim pages take the place of the bystander, and while they may be presented with ‘incorrect’ arguments as part of the process, the ‘correct’ side has ample opportunity to counter those errors in its rebuttal, reducing the risk of the undecided bystander falling for the erroneous arguments.
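To make that structure a little more concrete, here is a minimal sketch, again with hypothetical names, of how claim-rebuttal links could be chained into a readable thread. It illustrates the idea described above rather than rbutr’s actual implementation.

```typescript
// Hypothetical sketch: illustrates claim -> rebuttal chaining, not rbutr's code.
type RebuttalLink = {
  claimUrl: string;    // page making a claim
  rebuttalUrl: string; // page rebutting that claim
};

// Follow links from a starting claim, treating each rebuttal as the next claim,
// to reconstruct the "virtual discussion" a bystander would read through.
function discussionThread(startUrl: string, links: RebuttalLink[], maxDepth = 10): string[] {
  const thread: string[] = [startUrl];
  let current = startUrl;
  for (let i = 0; i < maxDepth; i++) {
    const next = links.find((link) => link.claimUrl === current);
    if (!next) break;             // no recorded rebuttal; the thread ends here
    thread.push(next.rebuttalUrl);
    current = next.rebuttalUrl;   // the rebuttal now stands as a claim in its own right
  }
  return thread;
}
```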

Resisting Correction

Having outlined this process, we can now set the central concern of Garrett and Weeks’ paper against rbutr’s system. rbutr doesn’t correct the information on the page the reader is viewing; it provides an entry point into a discussion about the subject matter, allowing multiple opposing perspectives to be heard in sequence. This hopefully lowers the defences which many people seem to raise when being corrected, and it also repeats the process of showing the correcting perspective, interspersed with arguments the user finds agreeable. This taps into two things mentioned in the discussion section of the paper: 1. that the research only looks at a single exposure to a correction, and a stream of factual corrections could have a more pronounced effect, and 2. that self-affirming content "such as news stories that reinforce their political values" shown before corrections could lower defences. With rbutr showing alternately agreeable and corrective arguments, users seem more likely to be persuaded toward the correct information after a couple of steps.

Suggestions Followed

In spite of all the differences between rbutr and the standard annotation-based real-time correction model, we have already applied a small change to rbutr inspired by the results and suggestions in this paper. Based on the ideas that delaying the correction may reduce resistance to it, and that self-affirming, positive experiences immediately prior to correction may also help, we have implemented a three-second interruption page which is displayed before the user is taken to a rebuttal. At this stage the interrupt page simply shows a positive heading and a picture of a cute baby animal, in an attempt to create a (brief) positive experience which may also help the user focus (ref).
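For illustration only, here is a sketch of how such an interrupt page might behave in the browser; the function name, heading text, image path and timing mechanism are assumptions, not rbutr’s actual code.

```typescript
// Hypothetical sketch: one way a three-second interrupt page could work.
// The heading, image path and delay are illustrative, not rbutr's actual assets.
function showInterruptPage(rebuttalUrl: string): void {
  document.body.innerHTML = `
    <h1>Off to see another perspective!</h1>
    <img src="/images/cute-baby-animal.jpg" alt="A cute baby animal" />
  `;
  // After a brief positive pause, continue on to the rebuttal.
  window.setTimeout(() => {
    window.location.href = rebuttalUrl;
  }, 3000); // three seconds, matching the delay described above
}
```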

Of course, the benefits of this interruption page for changing people’s minds are still far from certain, but by introducing it now we are doing two things. Firstly, we are making the page normal for our users early on, rather than springing it on them at a later date. And secondly, we are tapping into a huge social phenomenon which may attract both more users and more activity within our system.

With the success of the ‘LOLcats’ concept, looking at pictures of kittens and other cute baby animals has become one of the standard joke ‘purposes’ of the internet. There is no denying that the activity is incredibly popular and sought out by many internet users. By incorporating such a popular cultural phenomenon into our system, rbutr may attract more users, entice more activity from them, draw more media attention for the juxtaposition of cute animal pictures with serious debate, and perhaps even make users more open to contrary evidence.

So this small change could help address the concerns raised in this paper, while also tackling the main question every real-time correction service needs to worry about: how do we get enough users to be of any value?

Conclusion

People are resistant to correcting information, but worse than that, they are resistant to seeking correction. Finding ways to attract a wide variety of people to actually use the application is the first step; changing their minds is the second. Hopefully rbutr has struck upon the right method of achieving exactly that, by not ostracising groups of people who disagree with the scientific consensus, and by tapping into popular social phenomena in the interest of facilitating genuine inter-website discussion.

 

Complete Citation

Garrett, R. K., & Weeks, B. E. (2013, February 23–27). The Promise and Peril of Real-Time Corrections to Political Misperceptions. In Proceedings of the 2013 ACM Conference on Computer Supported Cooperative Work (CSCW 2013), San Antonio, TX. doi: 10.1145/2441776.2441895
You can access the paper here: The Promise and Peril of Real-Time Corrections to Political Misperceptions, by R. Kelly Garrett and Brian E. Weeks.
You can access their slides from the CSCW’13 conference.
There is a shorter version of the research posted in the blog Follow the Crowd.
More information about the team behind the research is on their website: Misperceptions in an Internet Era.

 
