The Streisand Threshold

The Streisand Threshold: Choosing one’s battles

There is a tremendous amount of pseudoscience and other misinformation circulating, seemingly unimpeded, on the internet. Consequently, it can be a daunting task for those of us who fight against it to know which targets are an effective use of our time and effort. There are various considerations that might inform one’s decision about where (or toward whom) to direct one’s science advocacy or skeptical outreach:

One’s level of personal interest and aptitude in a topic usually plays a big role in that decision. Responsible bloggers or podcasters will generally refrain from expounding confidently on topics about which they lack sufficient background knowledge to give a fair and accurate account of the state of the field.

It may depend in part on whether the blogosphere already appears saturated with articles about a particular topic. That’s not to say that there’s anything inherently wrong with doing one’s own take on a topic that many others have written about. Even if one has nothing new to bring to the topic, it can still be good practice, and it allows for some insight into the standard of quality set by other writers and what it takes to match it. Sometimes something as simple as a different presentation style or a different audience can get the message through in a novel manner. However, one might prefer to find under-filled niches over re-inventing the wheel.

It may also depend in part on the extent to which one perceives the myth (and/or its proponent) to be dangerous. That’s something that depends partly on the content of the message one is considering countering, how prevalent it is, and the harm that could be caused if the public accepted the misinformation as fact.

The Streisand Effect

For cases in which a myth or charlatan has not yet achieved any substantial reach, we must consider whether or not shining the light of science and reason on it might backfire by exposing it to a larger audience than it previously had.

This phenomenon is known as the Streisand effect. It is named after a 2003 incident in which aerial photos of the Malibu home of the famous singer Barbra Streisand were published online as part of a coastal photography collection. Streisand sued to have the pictures taken down, which ended up publicizing their existence to a far greater extent than would likely have been the case had she said nothing at all.

Streisand Estate

Her case is far from the only one in which an attempt to suppress something backfired spectacularly.

An unflattering Super Bowl halftime show photo that Beyoncé’s publicist wanted removed from the web.

With this in mind, it’s reasonable to wonder how likely it is that debunkers might unintentionally boost the popularity of the claims they are countering (and/or their proponents) by the act of publicly refuting them.

The Converse of the Streisand Effect

On the other hand, there have also been instances in which popular misconceptions were essentially ignored by experts in hopes that they’d fade into obscurity, which then resulted in specious ideas gradually gaining more and more momentum. This is essentially what happened with the rise of the anti-GMO movement. Many scientists figured that as more and more research results were published, the evidence would speak for itself, and those preliminary fears would eventually be alleviated. Instead, the anti-GMO movement just kept growing until there emerged an enormous gap between science and public perception on the topic of genetically engineered food safety (larger than for any other publicly controversial scientific topic, according to Pew reports). It wasn’t until the anti-GMO movement had a good 15-year head start (give or take) and a well-established disinformation campaign on the internet that an appreciable number of scientists and science advocates really started to fight back. Some of the myths had already become so firmly cemented in people’s minds that replacing them with more accurate information has been an uphill battle. Certain talking points seem to be recycled perpetually, regardless of how many times they’ve been debunked, thus making science outreach feel to many like a Sisyphean task.

Whether or not you (the reader) like Genetically Engineered crops is not relevant to the central point of this section, but for anyone interested, I’ve taken on many of the aforementioned myths here, here, here, here, here, here, here, here, here, and here.

Rather, the point is that there existed (and still exists) a huge gap between science and public perception on the topic of Genetically Engineered foods, and ignoring it didn’t make it go away. It’s hard to say what might or might not have happened if there had been a greater push-back against the rising anti-GMO movement in the 90s and early 2000s, but it’s clear that ignoring it didn’t help. This is just one example, but it shows that there exists a flip side to the Streisand effect, and that it should therefore not always be a deterrent to countering misinformation.

Don’t Cry Wolfe

In late 2015 through 2016, many public figures involved in science outreach (myself included) ran a campaign called Don’t Cry Wolfe. Its purpose was to expose the dangerous misinformation of an inexplicably popular public figure by the name of David “Avocado” Wolfe, and encourage people not to share his posts or boost his reach which, even at that time, was already considerable. During the campaign, I recall some commenters on my FB page raising the question of whether the adage that “any publicity is good publicity” applied here. Without knowing what it was called, they were expressing concern that the campaign might result in the Streisand effect. Due to insufficient data, it’s difficult to determine what the net result of the Don’t Cry Wolfe campaign truly was, but there were no superficially obvious signs that it backfired.

Image from the #DontCryWolfe campaign.

Don’t Cry Wolfe was also an interesting case in that Wolfe’s style of roping in followers with seemingly innocuous posts and then hitting them over the head with pseudoscience had created a situation in which even people who would normally avoid dangerous woo were following Wolfe’s page and sharing his (less batshit crazy) posts, thus unwittingly increasing his audience and reach even more. Consequently, a lot of people in the online skeptics’ community who simply hadn’t noticed what Wolfe was all about immediately unfollowed him once they caught wind of the Don’t Cry Wolfe campaign.

More importantly, although there were probably multiple reasons it didn’t backfire spectacularly, I suspect the main reason was that he already had nearly 6 million followers at the time, whereas most participants in the campaign had only on the order of tens to hundreds of thousands. There was no keeping the proverbial cat in the bag (Wolfe in the bag?) by ignoring him at that point. Despite the number of science advocates involved, it was just not realistic that we would unwittingly lure in more new Wolfe fans than we dissuaded, especially given the skeptical disposition of our audiences. We knew ignoring him wouldn’t work, and we had nothing to lose by trying.

The Streisand Threshold

We’ve seen examples of both the Streisand effect and its converse at different ends of the spectrum. This raises the question of whether there exists a threshold somewhere between the extremes representing a demarcation between cases in which the Streisand effect does or doesn’t apply. Although I can’t say for certain, I suspect such a threshold does exist, even if its boundary is fuzzy.

Based on the above examples, I would guess that it depends largely on the discrepancies in the reach of the exposure and the exposed. Barbra Streisand and Beyoncé Knowles are extremely famous individuals, so when they or their publicists attempt to take down some unflattering picture or piece of information, it draws massive public attention. On the other hand, someone like David Avocado Wolfe was already reaching so many people that there was no real risk in some medium-sized science pages calling him out publicly, and there was no chance in hell he was going to wither away and disappear if we continued to ignore him.

The Tale of Nutritarian Nancy, PhD, BS, WTF (or whatever)

In 2015, there was a small FB page run by a “holistic practitioner” who called herself Nutritarian Nancy, PhD, who had apparently acquired a fluff degree from an online holistic nutrition degree mill, and who was making a lot of bogus fearmongering health claims. Imagine a less successful version of Vani Hari (the Food Babe).

Some science advocates didn’t appreciate her spreading misinformation, and resented her propping herself up with what they took to be illegitimate accolades. So, they shared her posts in groups and would swarm her comments sections, sometimes with reasoned rebuttals and sometimes (unfortunately) with plain old angry rants. I recall being concerned that bombarding her page might boost her reach and render her much more dangerous than her little page ever could have become without that engagement boost.

But it never happened. She eventually changed her page’s name to Natural Nancy, and skeptics and science advocates basically just grew bored with her and stopped paying attention to her. I’m told she eventually changed her page name again after that, but her following never exceeded roughly 4,500 followers. Considering how easy it is to lure people in with pseudoscience (compared to skepticism and science advocacy), it’s fair to consider her efforts a failure for pseudoscience and fearmongering, and a win for science and skepticism.

The Take Home Message

I think the bottom line here is that the Streisand effect isn’t a major problem for participants in anti-pseudoscience outreach unless the reach of the debunker is significantly greater than the reach of the idea and/or person being debunked. We should nevertheless be cautious in borderline cases, because we’ve seen the Streisand effect in action, and we don’t currently know the exact popularity ratios at which it might occur. There may also be other relevant variables we are not aware of, or that are less easily measured. This will necessarily involve some guesswork, but we know that there can be consequences to letting a misinformation vector grow too big before pushing back.


Incommensurability, The Correspondence Principle, and the “Scientists Were Wrong Before” Gambit

Introduction

One of the intrinsic features of the scientific process is that it leads to modifications to previously accepted knowledge over time. Those modifications come in many forms. They may involve simply tacking on new discoveries to an existing body of accepted knowledge without really contradicting prevailing theoretical frameworks. They may necessitate making subtle refinements or adjustments to existing theories to account for newer data. They may involve the reformulation of the way in which certain things are categorized within a particular field so that the groupings make more sense logically, and/or are more practical to use. In rare cases, scientific theories are replaced entirely, and new data can even lead to an overhaul of the entire conceptual framework in terms of which work within a particular discipline is performed. In his famous book, The Structure of Scientific Revolutions, the physicist, historian, and philosopher of science Thomas Kuhn referred to such an event as a “paradigm shift” [1],[2]. This tendency is a result of efforts to accommodate new information and cultivate as accurate a representation of the world as possible.

The “scientists have been wrong before” argument

However, sometimes opponents of one or more areas of mainstream science attempt to recast this self-correcting characteristic of science as a weakness rather than a strength. Anti-GMO activists, anti-vaxxers, young earth creationists, climate science contrarians, AIDS deniers and many other subscribers to unscientific viewpoints have used this as a talking point. The argument is essentially that the fact that scientists revise and sometimes even eliminate old ideas indicates that scientific knowledge is too unreliable to take seriously. They reframe the act of refinement over time as a form of waffling. Based on this, they conclude that whatever widely accepted scientific conclusions they don’t like should therefore be rejected.

Why the “Scientists Have Been Wrong Before” Gambit Exists

The main function of the “scientists have been wrong before” gambit is to serve as a post hoc rationalization for embracing ideas that are neither empirically supportable nor rationally defensible, and/or rejecting ones that are. Pseudoscience proponents want to focus on perceived errors in science in order to downplay the successful track record of the scientific method. In doing so, they fail to account for the why and the how of scientific transitions. This is also ironic and hypocritical, because pseudoscience has no track record worth speaking of at all. Scientific theories are updated when other scientists better meet their burden of proof, and when doing so serves the goal of better understanding the universe. In contrast, the aforementioned gambit is a self-serving attempt to sidestep the contrarian’s burden of proof in order to resist change.

The argument is disingenuous for a number of reasons, not least of which is that it ignores the ways in which scientific knowledge typically changes over time. Previous observations place constraints on the specific ways in which scientific explanations can change in response to newer evidence. Old facts don’t just magically go away. In order to serve their purpose, reformulations of scientific theories have to account for both the old facts and the new. Otherwise, the change would not be an actual improvement on the older explanation, which presumably accounted for at least the older data, even if not the newer.

Facts, Laws, and Theories

Before further unpacking this point, I should clarify my use of terminology: in this context, I’m essentially using the term fact to denote repeatedly observed data points, which are independent of the explanations proposed for their existence. Scientific laws are essentially persistent data trends which specify a mathematically predictable relationship between two or more quantities. Scientific theories, on the other hand, are well-supported explanations for why some aspect of the natural world is the way it is and/or how exactly it works. They are consistent with the currently available evidence and make testable predictions that are corroborated by a substantial body of repeatable evidence. In short, facts and laws describe; theories explain.

For example, evolution is both a fact and a scientific theory. This is because the fact that populations evolve and the modern scientific theory of evolution (which describes how it occurs) are separate but related concepts. Evolution is formally defined as a statistically significant change of allele frequency in a population over time. (An allele is just genetics jargon for a variant of a particular gene.) That is descent with modification. It happens all the time. We witness it constantly. It’s not hypothetical. It’s not speculation. It’s an empirical fact.
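To make the definition concrete, here is a minimal sketch of my own (not something from the genetics literature per se, though the underlying Wright-Fisher model is a standard textbook toy model of neutral genetic drift). The function name and parameters are illustrative, not from any particular library. Each generation, allele frequency changes purely through random sampling, with no selection involved at all:

```python
import random

def wright_fisher(p0, n_individuals, generations, seed=42):
    """Track one allele's frequency under neutral genetic drift.

    Each generation, the 2N allele copies of the offspring are drawn
    by random sampling from the parent generation's frequency --
    no selection, just statistical noise changing allele frequency.
    """
    rng = random.Random(seed)
    two_n = 2 * n_individuals  # diploid: 2 allele copies per individual
    p = p0
    history = [p]
    for _ in range(generations):
        # Count how many of the 2N offspring copies inherit the allele.
        copies = sum(1 for _ in range(two_n) if rng.random() < p)
        p = copies / two_n
        history.append(p)
    return history

# Frequency starts at 0.5 and wanders; in a small population it will
# eventually drift toward fixation (1.0) or loss (0.0).
traj = wright_fisher(p0=0.5, n_individuals=50, generations=200)
```

Every run of this sampling process is evolution in the formal sense above: allele frequencies changing over generations, before any question of natural selection even arises.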

The theory of evolution, on the other hand, is an elaborate explanatory framework which outlines how evolution occurs. This includes the mechanisms of natural selection, genetic drift, gene flow, mutation (and much more), and it makes many testable predictions about a wide range of biological phenomena. In science, a theory provides more information than facts or laws, because it connects them in ways that permit the generation of new knowledge. I’ll say it again: facts and laws describe; theories explain.

The Correspondence Principle

It’s true that scientific ideas can be wrong or incomplete and that scientific theories can change with new evidence. However, the argument that this justifies rejecting well-supported scientific theories just because one doesn’t like their conclusions ignores the constraints that prior experimental results place on the ways in which scientific knowledge can realistically change in the future. People advancing the “scientists have been wrong” gambit are typically vague and imprecise in their usage of the term “wrong.” It is often implied that wrong is being used in the sense of “totally factually wrong,” rather than merely incomplete, which is inconsistent both with scientific epistemology and with the history of science.

It’s at odds with scientific epistemology because knowledge in science is generally conceived of in a fallibilistic and/or probabilistic manner rather than in a binary one [12]. It’s at odds with the history of science because it is not generally the case that the data used to support a theoretical claim turn out to be entirely mistaken, but rather that the theory is being replaced by a more complete one which, in many cases, simply looks different. Sure, theories can be expanded, and the meaning and implications of experimental data can be conceptually reframed, but new theories can’t directly contradict those aspects of the old one whose predictions corresponded with experimental data. Unless it can be shown that all prior data consistent with the predictions of the older theory were either fraudulent or due to systematically faulty measurements, this is simply not a viable option.

Another way to put it is that old facts don’t go away so much as their explanations can change in light of newly discovered ones.

This is reflected in what is called the correspondence principle [8].

A Paraphrasing of Bohr’s conception of the Correspondence Principle

Although originally associated with Niels Bohr and the reconciliation of quantum theory with classical mechanics, it illustrates a concept which applies in all areas of science. Essentially, the correspondence principle says that any modifications made to classical mechanics in order to account for the behavior of matter in the microscopic and submicroscopic realms must agree with the repeatedly verified calculations of classical physics when extended to macroscopic scales [9]. However, the overarching concept of older (yet well-supported) scientific theories becoming limiting cases of newer and broader ones is inextricable from the advancement of scientific knowledge more generally.
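As a concrete illustration (a standard textbook result I’m adding here, not a derivation from the original discussion): in hydrogen, the frequency of light emitted in a transition between adjacent energy levels approaches the classical orbital frequency of the electron as the quantum number grows large:

```latex
% Rydberg formula for the (n+1) -> n transition in hydrogen
\nu_{n+1 \to n}
  = R_H c \left( \frac{1}{n^2} - \frac{1}{(n+1)^2} \right)
  \approx \frac{2 R_H c}{n^3}
  \qquad (n \gg 1)
```

That large-$n$ limit coincides with the classical orbital frequency of an electron in the $n$-th Bohr orbit: the quantum result merges smoothly into the classical one at large scales rather than contradicting it.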

This is why there exist certain facts that will probably never be totally refuted, even if the theories which explain and account for them are subsequently refined and/or placed within the broader context of newer and more comprehensive explanatory frameworks. This is necessarily the case because any candidate for a new scientific theory which proves inferior to the old framework at accounting for the empirical data would be a step backward (not forward) in terms of the degree to which our leading scientific theories map onto the real-world phenomena they purport to represent.

As Isaac Asimov put it:

“John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together” [16].

The Story of Gravity

Another one of my favorite examples of this is gravity. Our understanding of gravity has undergone multiple changes over the centuries, but none of those updates ever overturned the empirical observation that massive bodies reliably undergo an apparent acceleration towards other massive bodies in a mathematically predictable relationship. Aristotle was wrong about the mass of an object determining the rate at which it fell, and explained it in teleological terms, whereby certain objects were thought to have more “earth-like” properties, so that it was in their nature to belong on the ground [10]. But he didn’t dispute the basic observation that objects fell. Isaac Newton, who developed the inverse square law relationship for gravity, did not develop a theory for why matter behaved this way. He merely described it [11]. Rather than being satisfied with spooky action at a distance, the prolific French physicist, astronomer, and mathematician Pierre-Simon, Marquis de Laplace conceptualized gravity in terms of classical field theory, whereby each point in space corresponded to a different value of a gravitational field, such that the field itself was thought of as the thing acting locally on a massive object [5].

The modern theory of gravity (Einstein’s General Relativity) explains it by positing a four-dimensional space-time manifold capable of degrees of curvature surrounding massive bodies. In this theory, space-time tells matter how to move, and matter tells space-time how to curve [6]. Like the theory of evolution, general relativity has made many testable and falsifiable predictions that have been borne out. Moreover, we know that GR cannot be the end of the story either, because the rest of the fundamental forces of physics are better described by quantum field theory (QFT), a formulation to which certain features of GR have notoriously not been amenable [7].

However, not one of these refinements contradicted the basic observations of massive bodies undergoing apparent accelerations in the presence of other massive bodies. Mathematically, it can be shown that Laplace’s formulation was consistent with Newton’s; the difference was in how it was conceptualized. Similarly, in situations involving relatively small masses and velocities, solving the Einstein Field Equations yields predictions that agree with Newton’s and Laplace’s out to several decimal places of precision. And although we don’t yet know for sure what form a successful reconciliation of GR and QFT will ultimately take, we know that it can’t directly contradict the successful predictions that GR and QFT have already made. This exemplifies the point that there exist constraints on the particular ways in which scientific theories can change.
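As a sketch of what this agreement in the appropriate limit looks like (these are standard weak-field results, not worked out in the text above): expanding the metric around flat space-time for small masses and low velocities recovers Newtonian gravity directly:

```latex
% Weak-field, slow-motion limit of general relativity
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right)
\quad \Rightarrow \quad
\frac{d^2 \mathbf{x}}{dt^2} = -\nabla \Phi ,
\qquad
\nabla^2 \Phi = 4\pi G \rho
```

Poisson’s equation $\nabla^2 \Phi = 4\pi G \rho$ is precisely gravity in the field-theoretic form Laplace envisioned, and its point-mass solution $\Phi = -GM/r$ reproduces Newton’s inverse square law. Each older formulation survives as a limiting case of the newer one.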

Parsimony and Planetary Motion

I should note that concurrent to the progression of our scientific knowledge of gravity were changes in our understanding of planetary motion, because it demonstrates how the expansion of predictive power is not the only criterion governing theoretical transitions in science. More specifically, the Copernican model of the solar system didn’t actually produce calculations of superior predictive accuracy to the best Geocentric models of its time. Tycho Brahe’s formulation of Ptolemaic astronomy was more accurate. Although Brahe ultimately rejected Heliocentrism, Copernicus’s arguments intrigued him because his model seemed less mathematically superfluous than the system of epicycles required to make Geocentrism work, yet it yielded results that were more or less in the same ballpark [13]. In other words, what stood out about Copernicus’s model was that, even though it wasn’t quite accurate, it accounted for a lot with a little. It was more parsimonious.

Many of the arguments against the Copernican model had more to do with Aristotelian physics than with the discrepancies in the resulting calculations, some of which were themselves a consequence of Copernicus’s assumption that orbits had to be circular, which was due in part to the philosophical notion that circles were the perfect shape. These problems were of course later resolved by the work of Johannes Kepler and Galileo Galilei; the former used Brahe’s own data to deduce that planets moved in elliptical orbits and swept out equal areas in equal times, whereas the latter formulated the law of inertia and overturned much of the Aristotelian physics upon which many arguments against the Copernican view were based [14]. In combination, Kepler and Galileo laid down much of the groundwork from which Isaac Newton would revolutionize science just a generation later.
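For reference, Kepler’s three laws can be stated compactly. These are the standard formulations (the third is written in the form Newton later derived, with the gravitational parameter $GM$ made explicit), added here for illustration rather than quoted from the text:

```latex
% 1. Elliptical orbits with the sun at one focus (eccentricity e):
r(\theta) = \frac{a(1 - e^2)}{1 + e\cos\theta}
% 2. Equal areas in equal times (conservation of angular momentum L):
\frac{dA}{dt} = \frac{L}{2m} = \text{const}
% 3. Harmonic law relating period T and semi-major axis a:
T^2 = \frac{4\pi^2}{GM}\,a^3
```

Newton showed that all three follow from a single inverse square central force, which is the sense in which his work unified Kepler’s kinematics with Galileo’s dynamics a generation later.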

The moral of the story, however, is that there are times when parsimony directs the trajectory of further scientific inquiry. It’s not always directed by expanding predictive power. A certain amount of theorizing in science involves what can essentially be understood as a form of data compression. Ultimately, the consistency of theory with empirical reality is the end game, but if a concept can explain more facts more simply and/or with fewer assumptions, then it may be preferred over its leading competitor. It’s certainly preferable to lists of disparate facts lacking any common underlying principles, because science isn’t just about describing empirical phenomena, but about discovering and understanding the rules by which they arise.

This touches on the principle of Occam’s Razor which, insofar as it applies to science, can be roughly paraphrased as the idea that one ought not to multiply theoretical entities beyond what is needed in order to explain the data [15]. Put another way, the more ad hoc assumptions one’s hypothesis requires in order to work, the more likely it is that at least one of them is mistaken.

Or as Newton put it,

“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes” [11].

Occam’s Razor is not a rule in science so much as it is a heuristic that sometimes proves useful. Ultimately, our ideas must agree with nature’s results first and foremost. Deference to the empirical world is always paramount, and the universe is under no obligation to meet our arbitrary standards of simplicity or aesthetic preferences, but some prospective theories are better than others at compressing our understanding into more cogent sets of concepts.

Incommensurability

In addition to introducing the idea of paradigm shifts in scientific advancement, Kuhn’s The Structure of Scientific Revolutions (TSoSR) also introduced the concept of incommensurability to describe the relationship between newer and older scientific paradigms. Initially, he introduced this as an umbrella term for any and all conceptual, observational, and/or methodological discrepancies between paradigms, as well as semantic differences in the use of specialized terminology. Kuhn’s own conception of incommensurability evolved considerably in the years following the publication of TSoSR, eventually restricting its applicability to problems with the translation of certain terminology common to both paradigms due to semantic differences arising from the transition to a new conceptual framework [3].

However, the basic idea was essentially that the methods, concepts, and modes of communication involved in disparate scientific paradigms are different enough that anyone from one paradigm attempting to communicate with someone from another would necessarily be speaking at cross-purposes, because they lack a common measure. Even the observations themselves are thought to be too theory-laden for concepts and problems to be adequately translated across the theoretical boundaries of the pre- and post-phases of a scientific revolution. Kuhn himself even used the analogy from Gestalt psychology known as a Gestalt shift [4]. Here’s an example:

Hill, W. E. “My Wife and My Mother-in-Law.” Puck 16, 11, Nov. 1915

Do you see a young woman looking away, or an old woman looking down and to your left? Can you switch back and forth between perspectives? The meaning of any reference to the “nose” of the figure depends on whether one is speaking within the young woman or old woman paradigm. The placement and thickness of the lines does not change during gestalt shifts. What changes is the way in which their meaning is understood.

Analogously, the precise meaning of scientific statements depends on the theoretical framework in terms of which they are being made. The empirical facts that the theories seek to explain have not gone away (though newly obtained data may very well be forcing the change). What changes significantly is the way in which the meaning of the data is conceptualized, and the way in which new questions are framed.

Incommensurability as an attack on the scientific method

Some opportunists might seek to co-opt this notion of incommensurability to attack the epistemological integrity of the scientific process itself by exaggerating the degree to which new paradigms invalidate previous scientific knowledge, and by downplaying their regions of predictive overlap. However, such attacks would necessarily be weakened by having to account for the constraints the correspondence principle places on which aspects of a scientific theory can change and/or be invalidated by a paradigm shift. To conflate a conceptual change in science with the invalidation of all facets of an older theory is to implicitly presuppose an anti-realist relationship between theory and the empirical phenomena to which it refers.

This is circular reasoning.

The unstated assumption is that no meaningful correspondence relationship exists between scientific concepts and the aspects of the empirical world they purport to represent, and that changes in how terms are used and how problems are conceptualized therefore preclude the preservation of the facts and predictions an earlier model got right. As we saw in the earlier examples of the correspondence principle in action, this is demonstrably false. Many facts and predictions of older theories and paradigms are necessarily carried over to and/or modified to be incorporated into newer ones.

Concluding Summary

Scientific knowledge changes over time, but it does so in the net direction of increasing accuracy. This is one of the strengths of the scientific method: not one of its weaknesses. Most attempts to reframe this as a weakness (invariably via the use of specious mental acrobatics) ignore the constraints necessarily placed on the ways in which scientific theories can change or be wrong.

Many important revolutions in science involve conceptual changes which do not contradict all of the facts and predictions of the older theory, but rather reframe them, restrict them to limiting cases, or expand them to more general ones.

The preservation of certain facts and predictions which are carried over from older theories to newer ones (because the older ones also got them right) can be understood in terms of the correspondence principle.

The validity of the concept of incommensurability between temporally adjacent scientific paradigms is restricted to terminological, conceptual, and sometimes methodological differences between pre and post scientific revolution phases, but does not in any way contradict the correspondence principle.

The fact that scientific ideas can be wrong in principle does not mean that the particular ones the contrarian using this gambit dislikes will be among the discarded, nor that the ways in which they could conceivably be wrong would vindicate the contrarian’s desired conclusion.

Consequently, citing the observation that “scientists have been wrong before” is never a rationally defensible basis with which to justify rejection of scientific ideas which are currently well-supported by the weight of the evidence; only bringing new evidence of comparable quality can do that. If the contrarian is not currently in the process of gathering and publishing the evidence that would supposedly revolutionize some area of science, then they are placing their bet on an underdog based on faith in a future outcome over which they have no influence, and which they have no rational basis to expect. This is no more reasonable than believing one is going to win the lottery based on the observation that other people have won the lottery before, without even bothering to buy a ticket.

You don’t know which aspects of our current knowledge will turn out to be incorrect, nor which will be preserved. That’s why the maximally rational approach is always to calibrate one’s position to the weight of currently available scientific evidence, and then simply leave room for change in the event that newer evidence arises which justifies doing so.



The One True Argument™

Anyone who has spent much time addressing myths, misconceptions, and anti-science arguments has probably had the experience of a contrarian taking issue with his or her rebuttal to some common talking point on the grounds that it’s not the “real” issue people have with the topic at hand. It does occasionally happen that a skeptic spends an inordinate amount of time refuting an argument that literally nobody has put forward for a position, but I’m referring specifically to situations in which the rebuttal addresses claims or arguments that some people have actually made, but that the contrarian implies either haven’t been made or shouldn’t be addressed, because they are supposedly not the “real” argument. This is a form of the No True Scotsman logical fallacy, and it is a common tactic of people who reject well-supported scientific ideas for one reason or another. In some cases this may be due to the individual’s lack of exposure to the argument being addressed rather than an act of subterfuge, but it is problematic regardless of whether the interlocutor is sincere.

The dilemma is that there are usually many arguments for (and variations of) a particular position, so it’s not usually possible to respond to every permutation of every argument that has ever been made against a particular idea (scientific or otherwise). The aforementioned tactic takes advantage of this by implying that the skeptic is attacking a strawman, on the grounds that what they refuted was not the “real” main argument for the contrarian’s position. In comment sections on my page, I’ve referred to this as The One True Argument™ fallacy. It’s a deceptive way for the contrarian to move the goalposts while deflecting blame back onto the other person by accusing them of misrepresentation. The argument being addressed has been successfully refuted, but instead of acknowledging that, the interlocutor introduces a brand new argument (often just as flawed as the one that was just deconstructed), and accuses the person debunking it of either not understanding or not addressing The One True Argument™.

Some brands of science denial have elevated this to an integrative art form. If argument set A is refuted, they will cite argument set B as The One True Argument™, but if argument set B is refuted, they will cite either argument set A or argument set C as The One True Argument™. If argument sets A, B, and C are all refuted in a row, they’ll either bring out argument set D, or they will accuse the skeptic of relying on verbosity, attempting to characterize detailed rebuttals as some sort of vice or symptom of a weak argument (even though the skeptic is merely responding to the claimant’s arguments). I really wish I were making this up, but these are all techniques I’ve seen science deniers use in debates on social media or on their own blogs. Of course, the volume of the rebuttal cannot be helped, thanks to what has come to be known as Brandolini’s Law, a.k.a. Brandolini’s Bullshit Asymmetry Principle (coined by Alberto Brandolini), which states that the amount of energy necessary to refute bullshit is an order of magnitude bigger than that needed to produce it.

The argumentation tactics of sophisticated science deniers and other pseudoscience proponents (or even the less sophisticated ones) could probably fill an entire book, but this is one that I haven’t seen many people address, and it comes up fairly often.

For example, many opponents of genetically engineered food crops claim that they are unsafe to eat and that they are not tested. Often when someone takes the time to show that they are actually some of the most tested foods in the entire food supply, and that the weight of evidence from decades of research by scientists all across the world has converged on an international scientific consensus that the commercially available GE crops are at least as safe and nutritious as their closest conventional counterparts, the opponents will downplay it as not being the “real” issue. In some cases they will appeal to conspiracy theories or poorly done outlier studies that have been rejected by the scientific community, but in other instances they will invoke The One True Argument™ fallacy. They will claim that nobody is saying that GMOs are unsafe to eat, and that the problem is the overuse of pesticides that GMOs encourage, or that the patents and terminator seeds allegedly permit corporations to sue farmers for accidental cross-contamination and monopolize the food supply by prohibiting seed saving.

Of course, these arguments are similarly flawed. GMOs have actually helped reduce pesticide use (particularly insecticides), not increase it [1],[2],[3], and have coincided with a trend toward using much less toxic and environmentally persistent herbicides [4]. Plant patents have been common in non-GMO seeds since the Plant Patent Act of 1930, terminator seeds were never brought to market, the popularity of seed saving had already greatly diminished several decades before the first GE crops, and there are still no documented cases of non-GMO farmers being sued by GMO seed companies for accidental cross-contamination.

However, although the follow-up arguments are similarly flawed, the fact is that many organizations absolutely are claiming that genetically engineered food crops are unsafe. I’m not going to give free traffic to promoters of pseudoscience if I can help it, but one need only plug in the search terms “gmo + poison” or “gmo + unsafe” to see a plethora of less-than-reputable websites claiming precisely that. The point is that it’s dishonest to pretend that the person rebutting such claims isn’t addressing the “real” contention, because there is no one single contention, and the notion that the foods are unsafe is a very common one.

Another example occurred just the other day on my page. I posted a graphic depicting some data showing how effective vaccines have been at mitigating certain infectious diseases. A commentator responded as shown here:

I responded thusly:

Putting aside the fact that information on vaccine ingredients is easy to obtain (they are laid out in vaccine package inserts), and the fact that increasing life expectancy and population numbers suggest that, if there is any nefarious plot to depopulate the planet, the perpetrators have been spectacularly unsuccessful so far, the point is that this exemplifies The One True Argument™ tactic.

Another common example is when scientists meticulously lay out the arguments and evidence for how we know that global warming and/or climate change are occurring. There are many common contrarian responses to this, some of which employ The One True Argument™ fallacy, such as when the contrarian claims that nobody actually rejects the claim that the change is occurring, but rather they doubt that human actions have played any significant role in it.

Of course, the follow up claim is similarly flawed, since we know that climate changes not by magic but rather when acted upon by physical causes (called forcings), none of which are capable of accounting for the current trend without the inclusion of anthropogenically caused increases in atmospheric concentrations of greenhouse gases such as CO2. This is because most of the other important forcings have either not changed much in the last few decades, or have been moving in the opposite direction of the trend (cooling rather than warming). I’ve explained how solar cycles, continental arrangement, albedo, Milankovitch cycles, volcanism, and meteorite impacts can affect the climate with hundreds of citations from credible scientific journals here, here, here, here, here, here, here, here, here, here, here, and here.

In this instance, although it has become more common than in the past for climate science contrarians to accept the conclusion that climate has been changing but reject human causation, there are still plenty who argue that the warming trend itself is a grand hoax, and that NASA, NOAA (and virtually every other scientific organization on the planet) have deliberately manipulated the data to make money. If you doubt this, all you need to do is enter “global warming + hoax + fudged data” into your favorite search engine to see an endless list of webmasters making this claim. In fact, in one study, the position that “it’s not happening” at all was the single most common one expressed in op-ed pieces by climate science contrarians between 2007 and 2010 [10]. Their abundance even increased towards the end of that time period, so it’s flat out untrue that the push-back against the science has centered only on human causation and/or the eventual severity of the problem.

The truth is that there was never anything nefarious going on with the temperature data adjustments. Similar adjustments are performed on data in most scientific fields. They were completely legitimate and scientifically justified. There have even been additional studies in which the assumptions and reasoning behind the ways in which the data was adjusted were scrutinized and compared against data from reference networks, and the same procedures produced readings that were MORE accurate than the raw, unadjusted data, not less [5],[6],[7],[8],[9]. This is nicely explained here, but I digress; the main point here is not just that the follow-up arguments tend to be similarly flawed, but rather that this technique could in principle be used to move the goalposts ad infinitum.
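To see why an adjustment can make a record more accurate rather than less, consider a toy sketch of the kind of step-change correction involved. This is an illustration only, not the actual NOAA/NCEI pairwise-homogenization algorithm; the 40-year record, the breakpoint location, and the 0.5 °C instrument bias are all made up for the example:

```python
# Toy illustration of station-record homogenization (NOT the real NOAA
# algorithm): a hypothetical station move leaves the first 20 years of a
# 40-year record reading 0.5 °C too cold. Comparing the record against a
# stable reference series isolates the spurious step, and correcting for
# it brings the record closer to the truth than the raw data was.

true_temps = [10.0 + 0.02 * yr for yr in range(40)]    # true local temps, °C
reference  = list(true_temps)                          # nearby homogeneous stations
raw        = [t - 0.5 if yr < 20 else t                # pre-move sensor biased cold
              for yr, t in enumerate(true_temps)]

# Average difference from the reference before vs. after the breakpoint:
diff_before = sum(raw[yr] - reference[yr] for yr in range(20)) / 20
diff_after  = sum(raw[yr] - reference[yr] for yr in range(20, 40)) / 20
step = diff_after - diff_before                        # estimated bias (~0.5 °C)

adjusted = [raw[yr] + step if yr < 20 else raw[yr] for yr in range(40)]

err_raw = sum(abs(r - t) for r, t in zip(raw, true_temps))
err_adj = sum(abs(a - t) for a, t in zip(adjusted, true_temps))
assert err_adj < err_raw   # the adjusted record is closer to the truth
```

The key point the sketch captures is that the adjustment is not arbitrary: it is estimated from comparison with independent reference data, which is exactly the kind of validation the studies cited above performed.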

It’s easy to see that this also forces a strategic decision on the part of the skeptic or science advocate. Do you nail them down on their use of this tactic? Do you respond to the follow-up argument they’ve presented as the “real” issue? Do you do both? If so, are there any strategic disadvantages to doing both? Would it make the response excessively long? If so, does that matter? And how much can it be compressed through improved concision without sacrificing accuracy and/or important details? Disingenuous argumentative tactics like these put the contrarian’s opponent in a position where he or she has to make these kinds of strategic decisions rather than simply focusing on the veracity of specific claims.

As I alluded to earlier, this is not a free license to construct actual strawmen of other people’s positions and ignore their explanations when they attempt to clarify their arguments and conclusions; people do that too, and that’s no good either. But The One True Argument™ fallacy refers specifically to cases in which a refutation of a common argument is mischaracterized as a strawman as a means of introducing a different argument, while construing it as the skeptic’s fault for addressing the argument they addressed instead of some other one. It’s dishonest, it’s based on bad reasoning, you shouldn’t use it, and you should point it out when others do.

References:

[1] Brookes, G., & Barfoot, P. (2017). Environmental impacts of genetically modified (GM) crop use 1996–2015: impacts on pesticide use and carbon emissions. GM Crops & Food (just-accepted).

[2] Klümper, W., & Qaim, M. (2014). A meta-analysis of the impacts of genetically modified crops. PLoS ONE, 9(11), e111629.

[3] National Academies of Sciences, Engineering, and Medicine. (2017). Genetically Engineered Crops: Experiences and Prospects. National Academies Press (pp. 117-119).

[4] Kniss, A. R. (2017). Long-term trends in the intensity and relative toxicity of herbicide use. Nature Communications, 8, 14865.

[5] Jones, P. D., & Moberg, A. (2003). Hemispheric and large-scale surface air temperature variations: An extensive revision and an update to 2001. Journal of Climate, 16(2), 206-223.

[6] Brohan, P., Kennedy, J. J., Harris, I., Tett, S. F., & Jones, P. D. (2006). Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. Journal of Geophysical Research: Atmospheres, 111(D12).

[7] Jones, P. D., Lister, D. H., Osborn, T. J., Harpham, C., Salmon, M., & Morice, C. P. (2012). Hemispheric and large-scale land-surface air temperature variations: An extensive revision and an update to 2010. Journal of Geophysical Research: Atmospheres, 117(D5).

[8] Hausfather, Z., Menne, M. J., Williams, C. N., Masters, T., Broberg, R., & Jones, D. (2013). Quantifying the effect of urbanization on US Historical Climatology Network temperature records. Journal of Geophysical Research: Atmospheres, 118(2), 481-494.

[9] Hausfather, Z., Cowtan, K., Menne, M. J., & Williams, C. N. (2016). Evaluating the impact of US Historical Climatology Network homogenization using the US Climate Reference Network. Geophysical Research Letters.

[10] Elsasser, S. W., & Dunlap, R. E. (2013). Leading voices in the denier choir: Conservative columnists’ dismissal of global warming and denigration of climate science. American Behavioral Scientist, 57(6), 754-776.
