If something is said often enough it must be true! — Availability Bias or the Illusory-Truth Effect

There is no place like home. There is no place like home. There is no place like home. There is no place like home.

While repeating the closing line of The Wizard of Oz may have worked well enough for Dorothy, it doesn't work in the same way for us. Or does it? Sometimes it appears that people treat claims as true, or at least more valid, when they hear them regularly. One can easily find evidence of this just by looking around on the internet. Through the ease of publication and promulgation in the modern era of social media, it is relatively easy for inaccuracies, misnomers and blatant lies to spread like wildfire. But why is it that even when they are obviously false, or roundly corrected, many people still believe them to be true? Well, it seems there is some truth to the old adage 'if it is said often enough it becomes true.'
Welcome to Cognitive Bias Wednesday — today we're looking at the Availability Bias, or the Illusory-Truth Effect.

Although hearsay, scuttlebutt and old wives' tales may account for some of the repeated-claim evidence, it appears that the cognitive rabbit hole goes a bit deeper than this. In 1977 Hasher et al. ran a study looking at how repetition of information affected its believability. [1] Surprisingly, they found that participants not only responded more confidently to the repeated information, the usual symptom of a Remember-Know task, but also rated its validity higher than that of the novel information. The abstract of the piece highlights their findings succinctly:

Subjects rated how certain they were that each of 60 statements was true or false. The statements were sampled from areas of knowledge including politics, sports, and the arts, and were plausible but unlikely to be specifically known by most college students. Subjects gave ratings on three successive occasions at 2-week intervals. Embedded in the list were a critical set of statements that were either repeated across the sessions or were not repeated. For both true and false statements, there was a significant increase in the validity judgments for the repeated statements and no change in the validity judgments for the nonrepeated statements. Frequency of occurrence is apparently a criterion used to establish the referential validity of plausible statements. [2]

The greatest surprise here is that final sentence: 'frequency of occurrence is a criterion … [for the] validity of plausible statements.' In other words, they found that the more seemingly plausible material was repeated, the more it was believed to be factual. To translate this into the modern social media era, take the seemingly ridiculous, but vaguely plausible, claims regarding 'Chemtrails', fluoridated water or the ideology of ISIS. While the claims have little to no factual basis, if repeated often enough they begin to attain an air of social plausibility. The facts surrounding the matters at hand have not changed one iota, but the more a claim is shared and re-shared across people's Facebook walls and Twitter feeds, the more it is believed and repeated as mantra.

Indeed, the more these claims get repeated and shared, the more likely they are to cause an availability cascade. [3] An availability cascade is effectively the result of a particular 'factoid' or 'unfactoid' going viral and gaining significant social plausibility through the availability bias. The degree of sensationalism and clickbait present in our modern news media is just one example of this type of cascade.

Furthermore, if you are in an academic field like I am, don't get all high and mighty about not falling prey to the availability bias. We have our own two special instances of it: the NAA and FUTON biases. The NAA bias represents the 'No Abstract Available' condition, where articles receive fewer citations and less engagement if their abstract is not publicly available. The FUTON bias is the reverse, and finds that where material is available as 'Full Text On the Net', i.e. open-access publishing or similar, the article is engaged with at a higher rate. As one piece in The Lancet observed, this leads to 'concentrat[ing] on research published in journals that are available as full text on the internet, and ignor[ing] relevant studies that are not available in full text.' [4]

What does this mean, then? Well, simply put, it's a question of signal-to-noise ratio. [5] If articles that propose some vague theory that sounds plausible, but goes against the academic evidence, are left to fester and be shared around, then they gain a veneer of plausibility. One such category of articles in my current field (theology) is the run of 'Jesus myth' pieces that come out every Christmas with predictable regularity, such as this one from last year: http://theconversation.com/weighing-up-the-evidence-for-the-historical-jesus-35319

To maintain an appropriate SNR there need to be appropriate responses to such articles, such as this one from John Dickson: http://www.abc.net.au/religion/articles/2014/12/24/4154120.htm Or take the claims of the various health-related articles shared regularly around Facebook; these too need robust counter-claims. Unfortunately, the 'live and let live' or 'sweep it under the rug and let it die' approaches only allow the viewpoints to fester, and with enough availability (shares and re-shares) they become plausible in the public sphere and consciousness.
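
For anyone without the telecoms background, it's worth spelling the term out: signal-to-noise ratio is simply the power of the signal you want divided by the power of the background noise, usually quoted in decibels. The standard textbook definition (nothing here is specific to this post's argument, just the formula the metaphor borrows):

```latex
% Signal-to-noise ratio: wanted signal power over background noise power
\mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}},
\qquad
\mathrm{SNR_{dB}} = 10 \log_{10}\!\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\right)
```

In the analogy, corrections and well-sourced responses are the signal, and the viral inaccuracies are the noise: staying silent does nothing to reduce the noise, it just leaves the signal weaker.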

So, in short: even though simply repeating things ad infinitum, or just yelling them louder, should not work in the public square, it unfortunately does affect opinion and plausibility. As annoying, distasteful and time-consuming as it may be, inaccurate claims need to be refuted, and refuted in a medium that gives the rebuttal the same public availability. To simply ignore them reduces the signal-to-noise ratio and reinforces the availability bias.

Comment below and let me know where you have seen the availability bias at work, and which other biases you would like me to look at. For those who have asked, the Dunning-Kruger effect is coming up soon.

Chris

Notes:

  1. Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16, 107–112.
  2. Emphasis mine.
  3. Kuran, T., & Sunstein, C. R. (1999). Availability cascades and risk regulation. Stanford Law Review, 51(4). Available at SSRN: http://ssrn.com/abstract=138144
  4. Wentz, R. (2002). Visibility of research: FUTON bias. The Lancet, 360(9341), 1256.

    As a side note, this is one reason why many of the articles I refer to are behind paywalls. I deliberately choose non-OA research, so perhaps I'm exhibiting the reverse FUTON bias.

  5. Yes! Finally, some vague reference to my telecoms & radio background.