The Streisand Threshold: Choosing one’s battles

A tremendous amount of pseudoscience and other misinformation circulates, seemingly unimpeded, on the internet. Consequently, it can be daunting for those of us who fight it to know which targets are an effective use of our time and effort. Several considerations might inform one’s decision about where (or toward whom) to direct one’s science advocacy or skeptical outreach:

One’s level of personal interest and aptitude in a topic usually plays a big role in that decision. Responsible bloggers or podcasters will generally refrain from expounding confidently on topics about which they lack sufficient background knowledge to give a fair and accurate account of the state of the field.

It may depend in part on whether the blogosphere already appears saturated with articles about a particular topic. That’s not to say there’s anything inherently wrong with writing one’s own take on a topic many others have covered. Even if one has nothing new to add, it can still be good practice, and it offers some insight into the standard of quality set by other writers and what it takes to match it. Sometimes something as simple as a different presentation style or a different audience can get the message through in a novel way. Still, one might prefer finding under-filled niches to re-inventing the wheel.

It may also depend in part on the extent to which one perceives the myth (and/or its proponent) to be dangerous. That’s something that depends partly on the content of the message one is considering countering, how prevalent it is, and the harm that could be caused if the public accepted the misinformation as fact.

The Streisand Effect

For cases in which a myth or charlatan has not yet achieved any substantial reach, we must consider whether or not shining the light of science and reason on it might backfire by exposing it to a larger audience than it previously had.

This phenomenon is known as the Streisand effect. It is named after a 2003 incident in which aerial photos of the Malibu home of the singer Barbra Streisand were published online. Streisand sued to have the pictures taken down, which ended up publicizing their existence to a far greater extent than would likely have been the case had she said nothing at all.

Streisand Estate

Her case is far from the only one in which an attempt to repress something backfired spectacularly.

An unflattering Super Bowl halftime show photo that Beyoncé’s publicist wanted removed from the web.

With this in mind, it’s reasonable to wonder how likely it is that debunkers might unintentionally boost the popularity of the claims they are countering (and/or their proponents) by the act of publicly refuting them.

The Converse of the Streisand Effect

On the other hand, there have also been instances in which popular misconceptions were essentially ignored by experts in the hope that they’d fade into obscurity, and the specious ideas instead gradually gained more and more momentum. This is essentially what happened with the rise of the anti-GMO movement. Many scientists figured that as more and more research results were published, the evidence would speak for itself, and the preliminary fears would eventually be alleviated. Instead, the anti-GMO movement just kept growing until there emerged an enormous gap between science and public perception on the safety of genetically engineered food (larger than for any other publicly controversial scientific topic, according to Pew reports). It wasn’t until the anti-GMO movement had a good 15-year head start (give or take) and a well-established disinformation campaign on the internet that an appreciable number of scientists and science advocates really started to fight back. Some of the myths had already become so firmly cemented in people’s minds that replacing them with more accurate information has been an uphill battle. Certain talking points seem to be recycled perpetually, regardless of how many times they’ve been debunked, making science outreach feel to many like a Sisyphean task.

Whether or not you (the reader) like Genetically Engineered crops is not relevant to the central point of this section, but for anyone interested, I’ve taken on many of the aforementioned myths here, here, here, here, here, here, here, here, here, and here.

Rather, the point is that there existed (and still exists) a huge gap between science and public perception on the topic of genetically engineered foods, and ignoring it didn’t make it go away. It’s hard to say what might or might not have happened if there had been greater pushback against the rising anti-GMO movement in the ’90s and early 2000s, but it’s clear that ignoring it didn’t help. This is just one example, but it shows that there is a flip side to the Streisand effect, and that the effect should therefore not always be a deterrent to countering misinformation.

Don’t Cry Wolfe

In late 2015 through 2016, many public figures involved in science outreach (myself included) ran a campaign called Don’t Cry Wolfe. Its purpose was to expose the dangerous misinformation of an inexplicably popular public figure by the name of David “Avocado” Wolfe, and encourage people not to share his posts or boost his reach which, even at that time, was already considerable. During the campaign, I recall some commenters on my FB page raising the question of whether the adage that “any publicity is good publicity” applied here. Without knowing what it was called, they were expressing concern that the campaign might result in the Streisand effect. Due to insufficient data, it’s difficult to determine what the net result of the Don’t Cry Wolfe campaign truly was, but there were no superficially obvious signs that it backfired.

Image from the #DontCryWolfe campaign.

Don’t Cry Wolfe was also an interesting case in that Wolfe’s style of roping in followers with seemingly innocuous posts and then hitting them over the head with pseudoscience had created a situation in which even people who would normally avoid dangerous woo were following Wolfe’s page and sharing his (less batshit crazy) posts, thus unwittingly increasing his audience and reach even more. Consequently, a lot of people in the online skeptics’ community who simply hadn’t noticed what Wolfe was all about immediately unfollowed him once they caught wind of the Don’t Cry Wolfe campaign.

More importantly, although there were probably multiple reasons the campaign didn’t backfire spectacularly, I suspect the main one was that Wolfe already had nearly 6 million followers at the time, whereas most participants in the campaign had audiences on the order of tens to hundreds of thousands. There was no keeping the proverbial cat in the bag (Wolfe in the bag?) by ignoring him at that point. Despite the number of science advocates involved, it was just not realistic that we would unwittingly lure in more new Wolfe fans than we dissuaded, especially given the skeptical disposition of our audiences. We knew ignoring him wouldn’t work, and we had nothing to lose by trying.

The Streisand Threshold

We’ve seen examples of both the Streisand effect and its converse at opposite ends of the spectrum. This raises the question of whether there exists a threshold somewhere between the extremes: a demarcation between cases in which the Streisand effect does or doesn’t apply. I can’t say for certain, but I suspect such a threshold exists, even if its boundary is fuzzy.

Based on the above examples, I would guess that it depends largely on the discrepancy in reach between the debunker and the debunked. Barbra Streisand and Beyoncé Knowles are extremely famous, so when they or their publicists attempt to take down an unflattering picture or piece of information, it draws massive public attention. On the other hand, someone like David Avocado Wolfe was already reaching so many people that there was no real risk in some medium-sized science pages calling him out publicly, and no chance in hell that he would simply wither away and disappear if we continued to ignore him.

The Tale of Nutritarian Nancy, PhD, BS, WTF (or whatever)

In 2015, there was a small FB page run by a “holistic practitioner” who called herself Nutritarian Nancy, PhD. She had apparently acquired some fluff credential from an online holistic-nutrition degree mill, and she was making a lot of bogus, fear-mongering health claims. Imagine a less successful version of Vani Hari (the Food Babe).

Some science advocates didn’t appreciate her spreading misinformation, and resented her propping herself up with what they took to be illegitimate accolades. So, they shared her posts in groups and would swarm her comments sections, sometimes with reasoned rebuttals and sometimes (unfortunately) with plain old angry rants. I recall being concerned that bombarding her page might boost her reach and render her much more dangerous than her little page ever could have become without that engagement boost.

But it never happened. She eventually changed her page’s name to Natural Nancy, and skeptics and science advocates basically grew bored with her and stopped paying attention. I’m told she eventually changed her page name again after that, but her following never exceeded about 4,500 followers. Considering how easy it is to lure people in with pseudoscience (compared to skepticism and science advocacy), it’s fair to call her efforts a failure for pseudoscience and fear mongering, and a win for science and skepticism.

The Take Home Message

I think the bottom line is that the Streisand effect isn’t a major problem for participants in anti-pseudoscience outreach unless the reach of the debunker is significantly greater than the reach of the idea and/or person being debunked. We should nevertheless be cautious in borderline cases: we’ve seen the Streisand effect in action, we don’t currently know the exact popularity ratios at which it kicks in, and there may be other relevant variables we aren’t aware of or that are less easily measured. Borderline cases will necessarily involve some guesswork, but we know there can be consequences to letting a misinformation vector grow too big before pushing back.


Incommensurability, The Correspondence Principle, and the “Scientists Were Wrong Before” Gambit


One of the intrinsic features of the scientific process is that it leads to modifications of previously accepted knowledge over time. Those modifications come in many forms. They may involve simply tacking new discoveries onto an existing body of accepted knowledge without really contradicting prevailing theoretical frameworks. They may necessitate subtle refinements or adjustments to existing theories to account for newer data. They may involve reformulating the way certain things are categorized within a particular field so that the groupings make more sense logically and/or are more practical to use. In rare cases, scientific theories are replaced entirely, and new data can even lead to an overhaul of the entire conceptual framework in terms of which work within a discipline is performed. In his famous book The Structure of Scientific Revolutions, the physicist, historian, and philosopher of science Thomas Kuhn referred to such an event as a “paradigm shift” [1], [2]. This tendency is a result of efforts to accommodate new information and cultivate as accurate a representation of the world as possible.

The “scientists have been wrong before” argument

However, sometimes opponents of one or more areas of mainstream science attempt to recast this self-correcting characteristic of science as a weakness rather than a strength. Anti-GMO activists, anti-vaxxers, young earth creationists, climate science contrarians, AIDS deniers and many other subscribers to unscientific viewpoints have used this as a talking point. The argument is essentially that the fact that scientists revise and sometimes even eliminate old ideas indicates that scientific knowledge is too unreliable to take seriously. They reframe the act of refinement over time as a form of waffling. Based on this, they conclude that whatever widely accepted scientific conclusions they don’t like should therefore be rejected.

Why the “Scientists Have Been Wrong Before” Gambit Exists

The main function of the “scientists have been wrong before” gambit is to serve as a post hoc rationalization for embracing ideas that are neither empirically supportable nor rationally defensible, and/or for rejecting ones that are. Pseudoscience proponents focus on perceived errors in science in order to downplay the successful track record of the scientific method. In doing so, they fail to account for the why and the how of scientific transitions. This is also ironic and hypocritical, because pseudoscience has no track record worth speaking of at all. Scientific theories are updated when other scientists better meet their burden of proof, and when doing so serves the goal of better understanding the universe. In contrast, the aforementioned gambit is a self-serving attempt to sidestep the contrarian’s burden of proof in order to resist change.

The argument is disingenuous for a number of reasons, not least of which is that it ignores the ways in which scientific knowledge typically changes over time. Previous observations place constraints on the specific ways in which scientific explanations can change in response to newer evidence. Old facts don’t just magically go away. In order to serve their purpose, reformulations of scientific theories have to account for both the old facts and the new. Otherwise, the change would not be an actual improvement on the older explanation, which presumably already accounted for at least the older data.

Facts, Laws, and Theories

Before further unpacking this point, I should clarify my terminology: in this context, I’m using the term fact to denote repeatedly observed data points. These are independent of the explanations proposed for their existence. Scientific laws are essentially persistent data trends, which specify a mathematically predictable relationship between two or more quantities. Scientific theories, on the other hand, are well-supported explanations for why some aspect of the natural world is the way it is and/or how exactly it works. They are consistent with the currently available evidence and make testable predictions that are corroborated by a substantial body of repeatable evidence. In short, facts and laws describe; theories explain.

For example, evolution is both a fact and a scientific theory. This is because the fact that populations evolve and the modern scientific theory of evolution (which describes how it occurs) are separate but related concepts. Evolution is formally defined as a statistically significant change in allele frequency in a population over time (an allele is just genetics jargon for a variant of a particular gene). That is descent with modification. It happens all the time. We witness it constantly. It’s not hypothetical. It’s not speculation. It’s an empirical fact.

The theory of evolution, on the other hand, is an elaborate explanatory framework which outlines how evolution occurs. This includes the mechanisms of natural selection, genetic drift, gene flow, mutation (and much more), and it makes many testable predictions about a wide range of biological phenomena. In science, a theory provides more information than facts or laws, because it connects them in ways that permit the generation of new knowledge. I’ll say it again: facts and laws describe; theories explain.
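Since evolution is defined as allele-frequency change, even a toy program can illustrate it. The sketch below is a minimal Wright–Fisher simulation (a standard idealization from population genetics, not anything specific to this essay; the function name and parameters are my own, purely illustrative). It shows frequencies changing by random sampling alone, with no selection involved:

```python
import random

def drift(p0=0.5, pop_size=200, generations=100, seed=42):
    """Toy Wright-Fisher model: allele frequency changing across
    generations from random sampling alone (genetic drift)."""
    random.seed(seed)
    p = p0
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is drawn
        # at random from the current generation's allele pool.
        copies = sum(1 for _ in range(2 * pop_size) if random.random() < p)
        p = copies / (2 * pop_size)
    return p

print(drift())  # final allele frequency after 100 generations of drift
```

In a small population the frequency wanders noticeably from its starting value in just a few generations, which is the point: allele frequencies changing over time is an observable, countable phenomenon, separate from any theory about why it happens.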

The Correspondence Principle

It’s true that scientific ideas can be wrong or incomplete and that scientific theories can change with new evidence. However, the argument that this justifies rejecting well-supported scientific theories just because one doesn’t like their conclusions ignores the constraints that prior experimental results place on the ways in which scientific knowledge can realistically change in the future. People advancing the “scientists have been wrong before” gambit are typically vague and imprecise in their usage of the term “wrong.” It is often implied that wrong means “totally factually wrong,” rather than merely incomplete, which is inconsistent both with scientific epistemology and with the history of science. It’s at odds with scientific epistemology because knowledge in science is generally conceived of in a fallibilistic and/or probabilistic manner rather than a binary one [12]. It’s at odds with the history of science because it is not generally the case that the data used to support a theoretical claim turn out to be entirely mistaken; rather, the theory is replaced by a more complete one which, in many cases, simply looks different. Sure, theories can be expanded, and the meaning and implications of experimental data can be conceptually reframed, but new theories can’t directly contradict the aspects of the old one whose predictions corresponded with experimental data. Unless it can be shown that all prior data consistent with the predictions of the older theory were either fraudulent or due to systematically faulty measurements, this is simply not a viable option.

Another way to put it is that old facts don’t go away so much as their explanations can change in light of newly discovered ones.

This is reflected in what is called the correspondence principle [8].

A Paraphrasing of Bohr’s conception of the Correspondence Principle

Although originally associated with Niels Bohr and the reconciliation of quantum theory with classical mechanics, it illustrates a concept that applies in all areas of science. Essentially, the correspondence principle says that any modifications made to classical mechanics in order to account for the behavior of matter in the microscopic and submicroscopic realms must agree with the repeatedly verified calculations of classical physics when extended to macroscopic scales [9]. More broadly, the overarching concept of older (yet well-supported) scientific theories becoming limiting cases of newer, broader ones is inextricable from the advancement of scientific knowledge in general.
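Bohr’s version of the principle can even be checked numerically. The sketch below is a standard textbook exercise (the function names are mine, purely illustrative): it compares the frequency of the photon emitted in a hydrogen transition from level n+1 to n with the classical orbital frequency of the electron in the nth Bohr orbit. As n grows, the ratio approaches 1, i.e., the quantum prediction merges into the classical one:

```python
# Bohr's correspondence principle, numerically: for large quantum number n,
# the (n+1 -> n) hydrogen transition frequency approaches the classical
# orbital frequency of the electron.

RYDBERG_HZ = 3.2898e15  # Rydberg frequency (R_inf * c) in Hz, approximate

def transition_freq(n: int) -> float:
    """Photon frequency of the (n+1 -> n) transition (Rydberg formula)."""
    return RYDBERG_HZ * (1.0 / n**2 - 1.0 / (n + 1) ** 2)

def orbital_freq(n: int) -> float:
    """Classical orbital frequency of the electron in the nth Bohr orbit."""
    return 2.0 * RYDBERG_HZ / n**3

for n in (1, 10, 100, 1000):
    print(n, transition_freq(n) / orbital_freq(n))
# The ratio climbs from exactly 0.375 at n=1 toward 1 as n grows.
```

The quantum description never contradicts the classical one in the regime where the classical one was already verified; it recovers it as a limiting case.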

This is why certain facts will probably never be totally refuted, even if the theories which explain and account for them are subsequently refined and/or placed within the broader context of newer, more comprehensive explanatory frameworks. This is necessarily the case because any candidate for a new scientific theory that proves inferior to the old framework at accounting for the empirical data would be a step backward (not forward) in terms of how well our leading scientific theories map onto the real-world phenomena they purport to represent.

As Isaac Asimov put it:

“John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together” [16].

The Story of Gravity

Another of my favorite examples of this is gravity. Our understanding of gravity has undergone multiple changes over the centuries, but none of those updates ever overturned the empirical observation that massive bodies reliably undergo an apparent acceleration toward other massive bodies in a mathematically predictable way. Aristotle was wrong that an object’s mass determines the rate at which it falls, and he explained falling in teleological terms, whereby certain objects were thought to have more “earth-like” properties, such that it was in their nature to belong on the ground [10]. But he didn’t dispute the basic observation that objects fall. Isaac Newton, who developed the inverse-square law of gravity, did not develop a theory of why matter behaved this way; he merely described it [11]. Rather than being satisfied with spooky action at a distance, the prolific French physicist, astronomer, and mathematician Pierre-Simon, Marquis de Laplace conceptualized gravity in terms of classical field theory, whereby each point in space corresponded to a value of a gravitational field, such that the field itself was thought of as the thing acting locally on a massive object [5].

The modern theory of gravity (Einstein’s General Relativity) explains it by positing a four-dimensional space-time manifold capable of degrees of curvature surrounding massive bodies. In this theory, space-time tells matter how to move, and matter tells space-time how to curve [6]. Like the theory of evolution, General Relativity has made many testable, falsifiable predictions that have been borne out. Moreover, we know that GR cannot be the end of the story either, because the rest of the fundamental forces of physics are better described by quantum field theory (QFT), a formulation to which certain features of GR have notoriously not been amenable [7].

However, not one of these refinements contradicted the basic observations of massive bodies undergoing apparent accelerations in the presence of other massive bodies. Mathematically, it can be shown that Laplace’s formulation was consistent with Newton’s; the difference was in how it was conceptualized. Similarly, in situations involving relatively small masses and velocities, solving the Einstein Field Equations yields predictions that agree with Newton’s and Laplace’s out to several decimal places of precision. And although we don’t yet know for sure what form a successful reconciliation of GR and QFT will ultimately take, we know that it can’t directly contradict the successful predictions that GR and QFT have already made. This exemplifies the point that there exist constraints on the particular ways in which scientific theories can change.
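The scale of that agreement is easy to estimate. The dimensionless quantity GM/(rc²) indicates how large general-relativistic corrections to Newtonian gravity are at distance r from a mass M; where it is tiny, the two theories’ predictions coincide to many digits. A rough back-of-the-envelope sketch (standard physical constants; the function name is mine, purely illustrative):

```python
# Weak-field parameter GM/(r c^2): where this is << 1, General Relativity's
# corrections to Newtonian gravity are negligible, which is why Newton's
# theory survives as a limiting case rather than being discarded outright.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def weak_field_parameter(mass_kg: float, r_m: float) -> float:
    """Dimensionless GM/(r c^2) at distance r from a body of the given mass."""
    return G * mass_kg / (r_m * C**2)

# Earth's surface and the Sun's surface, using approximate values:
earth = weak_field_parameter(5.972e24, 6.371e6)  # roughly 7e-10
sun = weak_field_parameter(1.989e30, 6.957e8)    # roughly 2e-6
print(earth, sun)
```

At around one part in a billion at Earth’s surface, there is simply no room for a successor theory to contradict Newton’s well-verified predictions there; it can only refine them.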

Parsimony and Planetary Motion

I should note that, concurrent with the progression of our scientific knowledge of gravity, there were changes in our understanding of planetary motion, and these demonstrate that expanded predictive power is not the only criterion governing theoretical transitions in science. More specifically, the Copernican model of the solar system didn’t actually produce calculations of superior predictive accuracy to the best Geocentric models; Tycho Brahe’s formulation of Ptolemaic astronomy was more accurate. Although Brahe ultimately rejected Heliocentrism, Copernicus’s arguments intrigued him, because his model seemed less mathematically superfluous than the system of epicycles required to make Geocentrism work, yet it yielded results that were more or less in the same ballpark [13]. In other words, what stood out about Copernicus’s model was that, even though it wasn’t quite accurate, it accounted for a lot with a little. It was more parsimonious.

Many of the arguments against the Copernican model had more to do with Aristotelian physics than with the discrepancies in the resulting calculations, some of which were themselves a consequence of Copernicus’s assumption that orbits had to be circular, which was due in part to the philosophical notion that circles were the perfect shape. These problems were of course later resolved by the work of Johannes Kepler and Galileo Galilei; the former used Brahe’s own data to deduce that planets moved in elliptical orbits and swept out equal areas in equal times, whereas the latter formulated the law of inertia and overturned much of the Aristotelian physics upon which many arguments against the Copernican view were based [14]. In combination, Kepler and Galileo laid down much of the groundwork from which Isaac Newton would revolutionize science just a generation later.

The moral of the story, however, is that there are times when parsimony directs the trajectory of further scientific inquiry. It’s not always directed by expanding predictive power. A certain amount of theorizing in science involves what can essentially be understood as a form of data compression. Ultimately, the consistency of theory with empirical reality is the end game, but if a concept can explain more facts more simply and/or with fewer assumptions, then it may be preferred over its leading competitor. It’s certainly preferable to lists of disparate facts lacking any common underlying principles, because science isn’t just about describing empirical phenomena, but about discovering and understanding the rules by which they arise.

This touches on the principle of Occam’s Razor which, insofar as it applies to science, can be roughly paraphrased as the idea that one ought not to multiply theoretical entities beyond that which is needed in order to explain the data [15]. Putting it another way, the more ad hoc assumptions one’s hypothesis requires in order to work, the more likely it is that at least one of them is mistaken.

Or as Newton put it,

“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes” [11].


Occam’s Razor is not a rule in science so much as it is a heuristic that sometimes proves useful. Ultimately, our ideas must agree with nature’s results first and foremost. Deference to the empirical world is always paramount, and the universe is under no obligation to meet our arbitrary standards of simplicity or aesthetic preferences, but some prospective theories are better than others at compressing our understanding into more cogent sets of concepts.


In addition to introducing the idea of paradigm shifts in scientific advancement, Kuhn’s The Structure of Scientific Revolutions (TSoSR) also introduced the concept of incommensurability to describe the relationship between newer and older scientific paradigms. He initially used it as an umbrella term for any and all conceptual, observational, and/or methodological discrepancies between paradigms, as well as semantic differences in the use of specialized terminology. Kuhn’s own conception of incommensurability evolved considerably in the years following the publication of TSoSR, eventually restricting its applicability to problems with the translation of certain terminology common to both paradigms, due to semantic differences arising from the transition to a new conceptual framework [3].

However, the basic idea was essentially that the methods, concepts, and modes of communication involved in disparate scientific paradigms are different enough that someone from one paradigm attempting to communicate with someone from another would necessarily be speaking at cross-purposes, because they lack a common measure. Even the observations themselves are thought to be too theory-laden for concepts and problems to be adequately translated across the theoretical boundaries of the pre- and post-revolution phases. Kuhn himself used an analogy from Gestalt psychology known as a Gestalt shift [4]. Here’s an example:

Hill, W. E. “My Wife and My Mother-in-Law.” Puck 16, 11, Nov. 1915

Do you see a young woman looking away, or an old woman looking down and to your left? Can you switch back and forth between perspectives? The meaning of any reference to the “nose” of the figure depends on whether one is speaking within the young woman or old woman paradigm. The placement and thickness of the lines does not change during gestalt shifts. What changes is the way in which their meaning is understood.

Analogously, the precise meaning of scientific statements depends on the theoretical framework in terms of which they are being made. The empirical facts that the theories seek to explain have not gone away (though newly obtained data may very well be forcing the change). What changes significantly is the way in which the meaning of the data is conceptualized, and the way in which new questions are framed.

Incommensurability as an attack on the scientific method

Some opportunists might seek to co-opt this notion of incommensurability to attack the epistemological integrity of the scientific process itself, by exaggerating the degree to which new paradigms invalidate previous scientific knowledge and downplaying their regions of predictive overlap. However, such attacks are weakened by having to account for the constraints the correspondence principle places on which aspects of a scientific theory can change and/or be invalidated by a paradigm shift. To conflate a conceptual change in science with the invalidation of all facets of an older theory is to implicitly presuppose an anti-realist relationship between theory and the empirical phenomena to which it refers.

This is circular reasoning.

The unstated assumption is that no meaningful correspondence exists between scientific concepts and the aspects of the empirical world they purport to represent, and therefore that changes in how terms are used and how problems are conceptualized preclude the preservation of the facts and predictions an earlier model got right. As we saw in the earlier examples of the correspondence principle in action, this is demonstrably false. Many facts and predictions of older theories and paradigms are necessarily carried over to, and/or modified to be incorporated into, newer ones.

Concluding Summary

Scientific knowledge changes over time, but it does so in the net direction of increasing accuracy. This is one of the strengths of the scientific method, not one of its weaknesses. Most attempts to reframe it as a weakness (invariably via specious mental acrobatics) ignore the constraints necessarily placed on the ways in which scientific theories can change or be wrong.

Many important revolutions in science involve conceptual changes which do not contradict all of the facts and predictions of the older theory, but rather reframe them, restrict them to limiting cases, or expand them to more general ones.

The preservation of certain facts and predictions which are carried over from older theories to newer ones (because the older ones also got them right) can be understood in terms of the correspondence principle.

The validity of the concept of incommensurability between temporally adjacent scientific paradigms is restricted to terminological, conceptual, and sometimes methodological differences between pre and post scientific revolution phases, but does not in any way contradict the correspondence principle.

The fact that scientific ideas can be wrong in principle does not mean that the particular ones the contrarian dislikes will be among the discarded, nor that the ways in which they could conceivably be wrong would vindicate the contrarian’s desired conclusion.

Consequently, citing the observation that “scientists have been wrong before” is never a rationally defensible basis for rejecting scientific ideas that are currently well supported by the weight of the evidence; only bringing new evidence of comparable quality can do that. If the contrarian is not currently gathering and publishing the evidence that would supposedly revolutionize some area of science, then they are betting on an underdog based on faith in a future outcome over which they have no influence and which they have no rational basis to expect. This is no more reasonable than believing one is going to win the lottery because other people have won the lottery before, and then not even bothering to buy a ticket.

You don’t know what aspects of our current knowledge will turn out to be incorrect, nor which will be preserved. That’s why the maximally rational position is always to calibrate one’s position to the weight of currently available scientific evidence, and then simply leave room for change in the event that newer evidence arises which justifies doing so.




Glyphosate and the Gut Microbiome: Another Bad Argument Annihilated


Glyphosate is a broad spectrum herbicide that was first introduced by the Monsanto company in the 1970s under the brand name Roundup. The already popular product grew even more popular among farmers upon the introduction of various commodity crops which were genetically engineered to resist the herbicide while it killed the surrounding weeds with which the crops would otherwise compete for water and nutrients. Glyphosate went off patent back in the year 2000, and since then many manufacturers have cashed in on its popularity [1]. Although it is of unusually low toxicity, glyphosate receives a level of scrutiny and vehemence of criticism that is disproportionate to its actual established risks [2],[3],[4]. This is attributable in part to its ubiquity in modern conventional farming, but it’s likely even more attributable to its association with Monsanto, against which a large and well-organized counter-movement has emerged [5].

Consequently, many different arguments have been formulated and circulated among this counter-movement and beyond. The purpose of this piece is to address one of those arguments in particular. More specifically, on numerous occasions I have heard glyphosate critics argue that glyphosate should be opposed because it might alter the human gut microbiome. In a post on his Facebook page, The Mad Virologist discussed a recently published study on the effects of glyphosate on gut microorganisms, which inspired me to unpack the microbiome argument against glyphosate and explain what’s wrong with it.


Glyphosate binds to and inhibits the action of an enzyme known as EPSP synthase, which plants need in order to make three important aromatic amino acids (phenylalanine, tyrosine, and tryptophan) via what’s known as the shikimic acid pathway, a pathway found in plants, bacteria, fungi, algae, and some protozoan parasites [6],[7].

Image c/o Zucko et al 2010 [37].

Glyphosate does this by acting as what’s called an uncompetitive inhibitor. That means it can bind only to the enzyme-substrate complex – the substrate being shikimate-3-phosphate in this case – and cannot bind the enzyme when the substrate is unbound [8],[9]. Upon binding to the enzyme-substrate complex, glyphosate prevents the complex from forming its product, 5-enolpyruvylshikimate-3-phosphate (EPSP). Normally the complex would form EPSP by reacting with another molecule called phosphoenolpyruvate (PEP), but sufficient concentrations of glyphosate reduce the number of units of the enzyme-substrate complex available to form their product. The shikimic acid pathway doesn’t exist in humans or other mammals; we can’t make those amino acids at all, so we get them directly from our food. Plants, on the other hand, need to synthesize those amino acids in order to grow and make proteins, so if they are unable to do so, they can’t grow, and therefore they die.
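The kinetic signature of uncompetitive inhibition can be sketched numerically. This is the generic textbook rate law, not parameters fitted to EPSP synthase; all values below are arbitrary illustrative numbers.

```python
def uncompetitive_rate(s, i, vmax=1.0, km=0.5, ki=0.1):
    """Michaelis-Menten rate in the presence of an uncompetitive inhibitor.

    Because the inhibitor binds only the enzyme-substrate complex, the
    (1 + [I]/Ki) factor multiplies the [S] term: apparent Vmax and
    apparent Km both drop by the same factor.  Units are arbitrary;
    these are illustrative constants, not measured EPSP synthase values.
    """
    return vmax * s / (km + s * (1 + i / ki))

# With no inhibitor the reaction proceeds at the uninhibited rate;
# adding inhibitor slows formation of the product (EPSP in the text).
v_free = uncompetitive_rate(s=2.0, i=0.0)
v_inhibited = uncompetitive_rate(s=2.0, i=0.5)
assert v_inhibited < v_free
```

Note that, unlike a competitive inhibitor, raising the substrate concentration does not rescue the reaction rate here, since the inhibitor binds the complex itself.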

Additionally, mammals such as ourselves have lived in co-evolutionary association with myriad microorganisms whose aggregate is referred to as the microbiome. The roles of the microbiome in human health and the effects resulting from changes in its composition are active areas of scientific investigation [33].


However, our collective knowledge of the relationship between the microbiome and human health is still in its infancy. Consequently, the topic is an easy target for exploitation by proponents of pseudoscience who would leverage it as a promotional tool for their own agendas, and/or extrapolate to claims which overstep what the current body of scientific literature actually supports [34],[35],[36].

The Gut Microbiome Argument Against Glyphosate

Keeping that in mind, the reasoning underlying the gut microbiome argument against glyphosate can be summarized as follows:

1. The makeup of a person’s gut microbiome is relevant to human health in ways which are only recently starting to be elucidated.

2. Bacteria possess the shikimic acid pathway and can use it to synthesize aromatic amino acids.

3. Glyphosate inhibits a key enzyme used in the shikimic acid pathway.

4. Therefore, glyphosate might be altering people’s microbiome in detrimental ways.

Simple enough?


Is This Argument Biologically Plausible?

The problem is that this argument flies in the face of one of the basic principles of microbiology: microbes don’t bother synthesizing nutrients that are already abundant in their environment. As I’ve explained on several occasions when this has come up in my Facebook comment sections, bacteria shouldn’t need to synthesize aromatic amino acids when they are literally bathing in them in the gut, so this argument against glyphosate is implausible and grasping at straws. A recent study tested this more formally (in vivo) [10].

How did I know in advance that this was extremely unlikely to be a major issue before this research? It was because I knew that gut bacteria live in… Wait for it… the GUT!!! Where aromatic amino acids are abundant. That means they will continue to grow if the final product of a given biosynthetic pathway is supplemented to them – which is what we are doing by supplementing them with aromatic amino acids through the food we eat – even in the presence of something that inhibits that specific pathway.

This principle is the basis for experiments that allow scientists to functionally characterize which genes’ enzymes act on which substrates in a given biochemical pathway (called functional complementation analysis), and has been in common use for the last century or so as a method for ascertaining the specific steps of various metabolic pathways, and/or the genes which code for the enzymes which catalyze each reaction [11],[12],[13],[14].

For an example of how this supplementation technique has been used, consider the elucidation of the arginine synthesis pathway in the fungus N. crassa by Srb and Horowitz (1944) [15]. The authors used radiation to induce mutations in the cells, and then performed a genetic screen to isolate those with mutations relevant to the arginine synthesis pathway. This was accomplished by growing colonies of mutants in a medium which included arginine, and then in one which lacked arginine. Cells which grew in an arginine-containing medium but not without it were deemed incapable of synthesizing their own arginine, and were subsequently grown under four different conditions:

  1. In a medium lacking ornithine, citrulline, and arginine.
  2. The same medium as 1, except supplemented with ornithine only (no citrulline or arginine).
  3. The same medium as 1, except supplemented with citrulline only (no ornithine or arginine).
  4. The same medium as 1, except supplemented with arginine only (no ornithine or citrulline).

The results were as follows: 

Image c/o Biological Science 4th ed [16].

This implied that there were three types of mutants. Some had mutations preventing them from producing functional copies of the enzyme responsible for catalyzing the reaction to produce ornithine from its precursor, some for the enzyme responsible for catalyzing the reaction to produce citrulline from ornithine, and some for the enzyme responsible for catalyzing the reaction to produce arginine from citrulline. This is a simple textbook example, but the point here is that supplementing cells with the end product of a metabolic pathway negates the need for the cell to synthesize it itself through that pathway. This particular example used bread mold, but the same principle applies to bacteria.
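The logic behind that growth table can be restated in a few lines: a mutant grows on minimal medium plus a supplement only if the supplement enters the pathway at or after the mutant’s blocked reaction. This is a toy restatement of the reasoning, with step indices standing in for the real enzymes.

```python
# Linear pathway: precursor -> ornithine -> citrulline -> arginine.
# A mutant blocked at reaction k cannot carry out step k, but any
# supplied compound downstream of the block (or arginine itself)
# rescues growth by bypassing the broken enzyme.
PATHWAY = ["ornithine", "citrulline", "arginine"]

def grows(blocked_step, supplement):
    """True if a mutant blocked at reaction `blocked_step` (0-2) grows
    on minimal medium plus `supplement` (None or a pathway compound)."""
    if supplement is None:
        return False  # the screened mutants can't make arginine unaided
    # The supplement rescues growth iff it lies at/after the blocked step.
    return PATHWAY.index(supplement) >= blocked_step

# Mutants unable to convert ornithine -> citrulline (blocked_step=1)
# grow with citrulline or arginine, but not with ornithine alone.
assert grows(1, "citrulline") and grows(1, "arginine")
assert not grows(1, "ornithine")
```

Which supplements rescue which mutant class is exactly what lets the experimenter order the intermediates in the pathway.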

Moreover, the shikimic acid pathway is also metabolically expensive, so bacteria are unlikely to be actively using it in the presence of abundant aromatic amino acids (i.e. phenylalanine, tyrosine, and tryptophan), especially when they are in competition with other microbes [17]. So, unless the person (the host) is literally starving to death, it is far more likely that gut bacteria are obtaining those amino acids the easy way: by simply absorbing them from their environment.

If the host actually is literally starving to death or suffering from severe malnutrition, then they have far bigger and more urgent problems to worry about than their gut microbiome. Starvation and severe malnutrition themselves cause harm [18]. Consequently, parsing out and identifying harm to the host attributable to malnutrition and distinguishing it from harm to the host due to glyphosate-induced alterations to the microbiome would be problematic, especially considering that any hypothetical problems caused by the latter would be avoided by mitigating or preventing the former.

None of this is new or controversial, which is part of the reason why researchers never bothered with a full blown in vivo experiment until recently on the effects of glyphosate on the microbiome. It is also the reason why the results of the recent study should not be surprising.

Earlier Studies

Earlier studies on glyphosate’s effects on bacteria were either full of methodological problems, and/or not set up in such a way as to test the question of how it affects the microbiome in vivo, where aromatic amino acids are abundant. I’ll start with the lowest-hanging fruit before dealing with more credible studies, whose strengths and limitations are more subtle.

Samsel and Seneff

Computer scientist Stephanie Seneff is an anti-vaccine, anti-GMO, and anti-glyphosate activist who claims that GMO foods cause concussions and suggests that glyphosate in vaccines has contributed to school shootings and the Boston Bombing [19],[20]. Seriously, you can’t even make this shit up, but I digress. She and her co-author, a retired consultant by the name of Anthony Samsel, published a series of papers in a predatory pay-to-play journal (Entropy) implicating glyphosate in a whole host of conditions (including celiac disease, MS, Parkinson’s, cancer, and autism), many of which involved convoluted non-sequitur arguments based on glyphosate’s alleged effects on the microbiome [21],[22]. Eric from Skeptoid has meticulously broken down the plethora of flaws and red flags in that work, which would take way too long to reiterate here [23]. To get an idea of just how terrible it is, Thoughtscapism points out that one of their papers has actually been used as an example of how to spot bogus science journals: a little factoid I found far too hilarious to omit [24],[22].

Other Earlier Studies

This 1986 study showed significant growth inhibition, but only at glyphosate concentrations on the order of one millimolar or more, which is thousands of times the amount realistically occurring in the gut from food [25]. To put this into perspective, legumes are the food crop with the highest allowed glyphosate residue limit in the US (5.0 ppm) [26]. 5.0 ppm = 5.0 mg of glyph/kg of legumes, and glyphosate has a molar mass of 169.07 g/mol.

So, if we estimate that an average full stomach is roughly 1 L in volume while assuming homogeneous distribution, then we get that millimolar concentrations in the gut would involve (1 L)*(10^-3 mol of glyph/L)*(169.07 g of glyph/mol of glyph)*(10^3 mg/g) = 169.07 mg of glyphosate.

If we then assume the maximum permissible amount of glyphosate on the food crop with the highest maximum allowable residue limit, we can calculate the mass of legumes required to reach millimolar concentrations in the gut by dividing the mass of glyphosate required to achieve millimolar concentrations by the mass of glyphosate per unit mass of legumes at the maximum allowable residue limit.

When we do that, we find that it would require ingesting about 33.8 kg of legumes (or about 74.5 lbs).

i.e. (169.07 mg glyph)/(5.0 mg of glyph/kg of legumes) = 33.8 kg of legumes.
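The back-of-envelope arithmetic above can be reproduced in a few lines. The inputs (1 L gut volume, homogeneous mixing, 100% absorption, the 5.0 ppm residue limit) are the same simplifying assumptions used in the text, not measured values.

```python
MOLAR_MASS_GLYPHOSATE = 169.07   # g/mol
GUT_VOLUME_L = 1.0               # assumed volume of a full stomach
TARGET_CONC_M = 1e-3             # 1 mM, the lowest inhibitory level in vitro
RESIDUE_LIMIT_MG_PER_KG = 5.0    # US maximum residue limit for legumes (5.0 ppm)

# Mass of glyphosate needed to reach 1 mM in 1 L, converted to milligrams.
mass_mg = GUT_VOLUME_L * TARGET_CONC_M * MOLAR_MASS_GLYPHOSATE * 1e3

# Kilograms of legumes, dosed at the residue limit, carrying that mass.
legumes_kg = mass_mg / RESIDUE_LIMIT_MG_PER_KG

print(f"{mass_mg:.2f} mg of glyphosate -> {legumes_kg:.1f} kg of legumes")
# Roughly 169.07 mg and ~33.8 kg, matching the estimate in the text.
```

Relaxing the 100% absorption assumption, as discussed next, only pushes the required quantity of food higher.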

This of course assumes 100% absorption, which, as neuroscientist/geneticist/toxicologist Alison Bernstein (aka Mommy, PhD) explains here, is not actually the case. So, the actual amount of legumes required to reach such concentrations in the gut may be many times higher than my sample estimate.

As Thoughtscapism points out, even at those extreme doses, the bacteria were not killed, but rather grew at a slower rate, and even that effect was partially mitigated when the researchers supplemented the bacteria with aromatic amino acids to simulate conditions likely to occur in the gut [27]. This 2010 study suffered from similar limitations [28].

Similarly, the following study showed a significant reduction in colony forming units (CFU) in vitro, but the concentrations were again on the order of a millimolar (and up to 29.5 mM), and no aromatic amino acids were supplemented to any of the test groups, which again means that it cannot be extrapolated to the gut microbiome where aromatic amino acids are abundant [29].

The Danish Study

In the new study, researchers from Denmark mapped the microbiome of Sprague Dawley rats using next-generation sequencing techniques both before and after exposure to high doses of glyphosate and of a commercial glyphosate formulation [10]. The researchers found that even doses 50 times the European Acceptable Daily Intake value (ADI = 0.5 mg/kg of body mass) had limited effects on microbiome composition over the course of two weeks, and that glyphosate’s effects on prototrophic bacterial growth were highly dependent on the availability of aromatic amino acids in the intestinal environment. If you are thinking that two weeks isn’t very long, consider that the average generation time for bacteria is roughly on the order of 20-30 minutes (or often even less). That means that two weeks represents something on the order of (2 wks)*(7 days/wk)*(24 hrs/day)*(2-3 generations/hr) = 672–1,008 generations. Given the life expectancy of Sprague Dawley rats relative to humans, this duration is also comparable to roughly a year and a half in the life of a human [30].
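As a quick check on the generation arithmetic (using the 20–30 minute doubling times cited above as the assumed range):

```python
HOURS_IN_TWO_WEEKS = 2 * 7 * 24  # 336 hours

def generations(doubling_minutes):
    """Bacterial generations elapsed over two weeks at a given doubling time."""
    return HOURS_IN_TWO_WEEKS * 60 / doubling_minutes

# 30- and 20-minute doubling times bracket the range in the text:
low = generations(30)    # 672 generations
high = generations(20)   # 1,008 generations
print(f"{low:.0f}-{high:.0f} generations in two weeks")
```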

What this means is that anyone continuing to promote the wrongheaded argument that glyphosate can affect health by altering the composition of the microbiome will have to hypothesize a completely new mechanism by which this is supposed to occur (preferably a biologically plausible one). This is because the reasoning behind this argument is based on the premise that glyphosate-induced inhibition of the shikimic acid pathway in gut microorganisms should prevent them from growing due to their (wrongly) assumed dependence on it for the synthesis of aromatic amino acids. This hypothesis predicts that hundreds of generations of bacteria should not be permitted to grow normally if this effect is occurring to any meaningful degree. The evidence falsifies this prediction.


The claim that glyphosate harms human health via disruption of the microbiome was never a biologically plausible one, because it only makes sense when the system is not being viewed as a whole. Ironically, glyphosate and GE food opponents like to say that they take a holistic approach, but this is not a holistic argument, because it ignores the environment in which the microbiome exists.

We know that organisms don’t bother synthesizing compounds they can already get from their environment. Knocking out one step of a biochemical pathway and growing microorganisms on different media with various substrates is a tried and true classical method for identifying which substrates are involved in a given pathway and/or the enzymes which catalyze their reactions. We also know that the human gut contains abundant aromatic amino acids alleviating the need for resident microorganisms to synthesize them. Running out of them is not a concern because they are replenished multiple times per day. The exception to this would be cases of starvation or malnutrition, in which case malnutrition would be the problem to address: not glyphosate. Despite this, in vivo research has been done, and reaffirms exactly what theoretical predictions would imply. Gut microorganisms grew and replicated for hundreds of generations, thus contradicting the predictions of the hypothesis under discussion.

In order to continue to argue that glyphosate had some other negative effect on the microbiome which would be undetectable within the first several hundred or more generations, a contrarian would have to either postulate a different mechanism by which this could be rendered into a testable scientific hypothesis, or appeal to vague and unspecified unknowns.

In the former case, this would constitute an abandonment of the original argument in place of a new hypothesis leading to predictions distinct from those of the hypothesis under discussion. Essentially, this would mean conceding (either explicitly or implicitly) that the original claim was false (or at least not supported), and then moving the goalpost to a new claim based on a different mechanism.

In the latter case, such vague and half-baked speculation could be applied just as easily to virtually anything. It makes no specific postulates and thus makes no testable predictions, and is therefore unscientific. It is what we sometimes refer to as “not even wrong” [31].

– Cred Hulk

For more on glyphosate and common myths about it, Thoughtscapism has put together the most comprehensive piece I’ve ever seen on the subject for a general audience [32].


[1] Glyphosate | History of glyphosate. (2017). Retrieved 10 December 2017, from

[2] (2017). Retrieved 10 December 2017, from

[3] Hulk, C. (2015). Glyphosate toxicity: Looking past the hyperbole, and sorting through the facts. The Credible Hulk. Retrieved 10 December 2017, from

[4] Scientific evidence that Roundup is dangerous has been mounting. (2017). Greenpeace International. Retrieved 10 December 2017, from

[5] Millions march against GM crops. (2013). The Guardian. Retrieved 10 December 2017, from

[6] Glyphosate | Glyphosate: mechanism of action. (2017). Retrieved 10 December 2017, from

[7] Starcevic, A., Akthar, S., Dunlap, W. C., Shick, J. M., Hranueli, D., Cullum, J., & Long, P. F. (2008). Enzymes of the shikimic acid pathway encoded in the genome of a basal metazoan, Nematostella vectensis, have microbial origins. Proceedings of the National Academy of Sciences, 105(7), 2533-2537.

[8] Sammons, R. D., Gruys, K. J., Anderson, K. S., Johnson, K. A., & Sikorski, J. A. (1995). Reevaluating glyphosate as a transition-state inhibitor of EPSP synthase: Identification of an EPSP synthase·EPSP·glyphosate ternary complex. Biochemistry, 34(19), 6433-6440.

[9] Alibhai, M. F., & Stallings, W. C. (2001). Closing down on glyphosate inhibition—with a new structure for drug discovery. Proceedings of the National Academy of Sciences, 98(6), 2944-2946.

[10] Nielsen, L. N., Roager, H. M., Frandsen, H. L., Gosewinkel, U., Bester, K., Licht, T. R., … & Bahl, M. I. (2018). Glyphosate has limited short-term effects on commensal bacterial community composition in the gut environment due to sufficient aromatic amino acid levels. Environmental Pollution, 233, 364-376.

[11] Hudson, A. O., Harkness, T. C., & Savka, M. A. (2016). Functional Complementation Analysis (FCA): A laboratory exercise designed and implemented to supplement the teaching of biochemical pathways. JoVE (Journal of Visualized Experiments), (112), e53850.

[12] Sohaskey, C. D., & Wayne, L. G. (2003). Role of narK2X and narGHJI in hypoxic upregulation of nitrate reduction by Mycobacterium tuberculosis. Journal of Bacteriology, 185(24), 7247-7256.

[13] Smits, T. H., Balada, S. B., Witholt, B., & van Beilen, J. B. (2002). Functional analysis of alkane hydroxylases from gram-negative and gram-positive bacteria. Journal of Bacteriology, 184(6), 1733-1742.

[14] Salcedo, E., Cortese, J. F., Plowe, C. V., Sims, P. F., & Hyde, J. E. (2001). A bifunctional dihydrofolate synthetase–folylpolyglutamate synthetase in Plasmodium falciparum identified by functional complementation in yeast and bacteria. Molecular and Biochemical Parasitology, 112(2), 239-252.

[15] Srb, A., & Horowitz, N. H. (1944). The ornithine cycle in Neurospora and its genetic control. Journal of Biological Chemistry, 154(1), 129-139.

[16] Freeman, S. (2017). Biological Science (6th ed.). Harlow, Essex, England: Pearson Education.

[17] Hibbing, M. E., Fuqua, C., Parsek, M. R., & Peterson, S. B. (2010). Bacterial competition: surviving and thriving in the microbial jungle. Nature Reviews Microbiology, 8(1), 15-25.

[18] Correia, M. I. T., & Waitzberg, D. L. (2003). The impact of malnutrition on morbidity, mortality, length of hospital stay and costs evaluated through a multivariate model analysis. Clinical Nutrition, 22(3), 235-239.

[19] Seneff Claims GMOs Cause Concussions. (2015). Science-Based Medicine. Retrieved 10 December 2017, from

[20] Who is Stephanie Seneff?. (2017). VAXOPEDIA. Retrieved 10 December 2017, from

[21] Anthony Samsel. (n.d.). LinkedIn [Profile page]. Retrieved 10 December 2017, from

[22] A guide to detecting bogus scientific journals. (2015). Sci-Phy. Retrieved 10 December 2017, from

[23] Roundup and Gut Bacteria. (2013). Skeptoid. Retrieved 10 December 2017, from

[24] 2.-3. Glyphosate and Health Effects A-Z. (2016). Thoughtscapism. Retrieved 10 December 2017, from

[25] Fischer, R. S., Berry, A., Gaines, C. G., & Jensen, R. A. (1986). Comparative action of glyphosate as a trigger of energy drain in eubacteria. Journal of Bacteriology, 168(3), 1147-1154.

[26] (2017). Retrieved 10 December 2017, from

[27] 4. Does Glyphosate Harm Gut Bacteria? (2016). Thoughtscapism. Retrieved 10 December 2017, from

[28] Ahemad, M., & Khan, M. S. (2011). Toxicological effects of selective herbicides on plant growth promoting activities of phosphate solubilizing Klebsiella sp. strain PS19. Current Microbiology, 62(2), 532-538.

[29] Shehata, A. A., Schrödl, W., Aldin, A. A., Hafez, H. M., & Krüger, M. (2013). The effect of glyphosate on potential pathogens and beneficial members of poultry microbiota in vitro. Current Microbiology, 66(4), 350-358.

[30] Andreollo, N. A., Santos, E. F. D., Araújo, M. R., & Lopes, L. R. (2012). Rat’s age versus human’s age: what is the relationship? ABCD. Arquivos Brasileiros de Cirurgia Digestiva (São Paulo), 25(1), 49-51.

[31] Burkeman, O. (2005). Briefing: Not even wrong. The Guardian. Retrieved 10 December 2017, from

[32] 17 Questions About Glyphosate. (2016). Thoughtscapism. Retrieved 10 December 2017, from

[33] Wang, Y., & Kasper, L. H. (2014). The role of microbiome in central nervous system disorders. Brain, Behavior, and Immunity, 38, 1-12.

[34] Germ theory denialism and the magical mystical microbiome. (2015). Respectful Insolence. Retrieved 10 December 2017, from

[35] Forbes Welcome. (2017). Retrieved 10 December 2017, from

[36] Gut Check. Probiotics and Metabiome. (2015). Science-Based Medicine. Retrieved 10 December 2017, from

[37] Zucko, J., Dunlap, W. C., Shick, J. M., Cullum, J., Cercelet, F., Amin, B., … & Long, P. F. (2010). Global genome analysis of the shikimic acid pathway reveals greater gene loss in host-associated than in free-living bacteria. BMC Genomics, 11(1), 628.


Genetically Engineering Foods Involves Greater Precision and Lower Risk of Unintentional Changes Than Traditional Breeding Methods


There exists an international scientific consensus that existing genetically engineered foods are at least as safe as their closest corresponding non-GMO counterparts [1]. This consensus draws on decades of research and thousands of studies. Despite this, there remains a broad gap between the science and public perception of the topic. According to PEW reports, this gap is wider than on any other topic for which a strong scientific consensus exists [2].

This is significant because we know several other scientific topics have remained highly controversial in the court of public opinion for decades after achieving mainstream acceptance among experts. Young Earth Creationists have been trying out various strategies for roughly a century to undermine the teaching of evolution in public schools in the US [3]. Rejection of anthropogenic climate change is widespread in the US, and even extends to the POTUS himself, despite the weight of the evidence and the resulting scientific consensus to the contrary [4],[5],[6],[7]. Although its prevalence has waxed and waned over time, vaccine opposition has been a near-constant presence ever since the discovery of the cowpox vaccine [8],[9]. So, for there to exist an even bigger gap between science and public perception on GE foods than on any of these other topics is no trivial matter.

One of the most common reasons given for trepidation with respect to GE foods is the idea that it’s a more radical way of altering our foods than more traditional methods of artificial selection, and that we therefore can’t know whether or not GE foods are bad until they’ve been around for several more human generations. This view relies on an argument from ignorance logical fallacy, and presents a double standard with no biologically plausible justification. One fact that proponents of this view rarely acknowledge is that it also leads to the generation of testable hypotheses, the results of which actually suggest the exact opposite. Allow me to explain.

The Scientific Consensus

In previous posts, I’ve pointed out that what we mean when we speak of the international scientific consensus on GE food safety consists of two major claims:

  • All currently approved GE crops have been tested on a case-by-case basis and the weight of the evidence suggests they are at least as safe as their closest non-GE counterparts.
  • Nothing about the process makes unpredicted dangers any more intrinsically likely with modern molecular GE techniques than with other methods of altering an organism’s genome.

I’ve already outlined the evidence for the first point in previous posts [1]. However, the second point directly bears upon the aforementioned double standards and arguments from ignorance comprising many anti-GMO fears and talking points. The purpose of this article is to explain how we know this to be the case.

The Genetic Code is a Universal Language

Before delving into what I believe to be the most intelligently formulated versions of this set of concerns over GMO food crops, I want to clear up some even more pervasive misconceptions that are based almost exclusively on malformed understandings of basic genetics and the evolutionary relationships between species. See, most applications of transgenesis involve the insertion of no more than one to three new genes. And although it is generally well accepted that humans have been altering our food for thousands of years, those who are distrustful of biotechnology often argue that that doesn’t count, on the grounds that cross-breeding doesn’t entail the insertion of a gene from a totally different species, phylum, or even kingdom of life. The argument usually takes the form of “selective cross-breeding is not remotely the same as putting a fish gene into a tomato!”

Derek from Veritasium

As of this writing, there are no GM plants on the market whose transgenes were derived from animals, but that is tangential to the even bigger flaw in this line of thinking. The argument is based on the assumption that the number and/or identity of the genes altered by a process makes less of a difference than the source of a single gene.

However, this is an incorrect way of looking at it, because at least for life on earth, all species are related by common descent, the genetic code is a nearly universal language, and most genes are not species-proprietary [10],[11],[12],[13],[14],[15],[16]. There are very few minor exceptions to the genetic code throughout earth’s biosphere [17]. I can cover DNA composition and the genetic code in more detail in another post if there’s sufficient demand. But for now, suffice it to say that there is a well-known and well-understood relationship between certain combinations of nucleotide bases in protein-coding genes and the amino acids to which they correspond when their mRNA transcripts are translated into proteins [15]. Moreover, all species share certain genes with many other species, and although there can be differences in the ways a gene is expressed in two different species (or even two different cell types within the same organism), those differences occur in ways that can be measured and understood, and they don’t involve outright violations of the genetic code itself [18]. That’s just not how it works.

Often the most noticeable differences between two species have more to do with differential expression and regulation of genes they share in common than with dramatic differences in which genes they possess [18]. And although the passing of genes across species boundaries (horizontal gene transfer) is primarily associated with prokaryotes (bacteria and archaea) in nature, it is not entirely unheard of in eukaryotes [19],[20].

As an interesting point of fact, the genome of cultivated sweet potatoes has been shown to contain agrobacterium T-DNAs, demonstrating them to be a natural transgenic food crop [21]. This is literally the exact same method used in the deliberate engineering of several transgenic crops today: the agrobacterium method [22]. So, even the claim that transgenesis never occurs in nature is false. Whether or not it occurs naturally obviously has no bearing on its safety, but the point is that many of these anti-GMO arguments are flawed on multiple different levels. The term “fractal wrongness” comes to mind.

Unintended Changes are not Impossible

It is important to point out that unintended off-target changes to the genome can and sometimes do occur with all known genetic modification techniques, including both transgenesis as well as more traditional methods not considered “GMO” by usual colloquial standards. This is neither unexpected nor necessarily problematic, but it is also true that a small portion of these off-target changes may turn out to confer undesirable traits in the resulting organism [23]. However, not only are such instances less likely to make it into the food supply with GE than with non-GE for regulatory reasons, but strong evidence suggests that they also occur less frequently with transgenesis than with most techniques considered non-GE.

How do we know this?

This can be tested in a number of ways, including compositional, genomic, transcriptomic, proteomic, and metabolomic comparisons between a GE line and a closely related non-GE counterpart. I will explain each of these in turn, but understand that they are not mutually exclusive.

Compositional Equivalence

One of the most common ways to test this is to perform compositional comparisons between a GE crop and a nearly isogenic non-GE counterpart line. In many instances, one or more additional non-GE commercial reference lines are included as well. This usually involves replicated field trials placed throughout a growing region, from which tissue samples are taken and analyzed for an array of several dozen compounds.

Subsequently, statistical methods are employed to determine whether any significant compositional differences exist between the GE and non-GE lines. If so, the analytes in question are then compared to the range of levels considered normal for the particular crop. If the levels fall outside the normal range of natural variation, scientists then have to determine the biological relevance of the differences within the context of how the crop is typically produced and consumed. Guidelines for these types of compositional field trials have been prescribed by the EFSA [24]. Notice that this does not require the transgenic lines to be precisely identical to their non-GE counterparts; some variation is expected. The issue is whether or not the differences actually matter [58].
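That two-step logic can be sketched as follows. This is a deliberately simplified illustration with made-up data and thresholds, not the actual EFSA statistical protocol (which uses formal equivalence testing):

```python
# Simplified sketch of the two-step screening logic described above:
# 1) flag analytes that differ noticeably between GE and control lines,
# 2) for flagged analytes, check whether the GE levels still fall within the
#    range of natural variation seen in commercial reference lines.
# The data, the crude "difference vs. spread" test, and the cutoff of 2 are
# all invented for illustration; real trials use formal equivalence statistics.
from statistics import mean, stdev

def flag_analyte(ge, control, natural_range, cutoff=2.0):
    """Return a verdict for one analyte given replicate measurements."""
    diff = abs(mean(ge) - mean(control))
    spread = (stdev(ge) + stdev(control)) / 2  # crude pooled spread
    if spread == 0 or diff / spread < cutoff:
        return "no significant difference"
    lo, hi = natural_range
    if lo <= mean(ge) <= hi:
        return "differs, but within natural variation"
    return "outside natural range: assess biological relevance"

# hypothetical protein content (% dry weight) from replicated field plots
ge_line      = [10.1, 10.4, 10.2, 10.3]
control_line = [10.2, 10.3, 10.1, 10.4]
print(flag_analyte(ge_line, control_line, natural_range=(8.0, 14.0)))
```

The point of the sketch is the decision structure: a statistically detectable difference only matters if it also falls outside what the crop naturally does.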


Genomic and Transcriptomic Comparisons

Another way of assessing the prevalence of off-target changes from transgenesis is to compare various genetic markers between related transgenic and non-transgenic lines. Generally speaking, whole-genome sequencing is rarely used for this purpose. Part of the reason is that, until recently, whole-genome sequencing was cost-prohibitive. Another reason is that genome sequencing alone doesn’t tell us much about how the expression rates of different genes might be affected by the modification process. Of the thousands (or even tens of thousands) of genes possessed by a given multicellular organism, only a small fraction are expressed in any particular cell at any given time [18].

In fact, the phenomenon of differential gene expression between cell and tissue types is central to the existence and development of complex multicellular life. Cells in different tissues can have exactly the same nuclear DNA yet be vastly different in appearance and function, simply because they express different subsets of those genes and/or express them at different rates [18].

This is why genome sequencing alone doesn’t tell us the whole story with respect to examining the changes made (whether intended or not) to a particular cultivar, regardless of the breeding method used to accomplish it. For that, it is often more informative to analyze and compare the total set of RNA molecules transcribed in a given set of cells (the transcriptome), and/or the proteins into which some of those RNA molecules are translated (the proteome). More on that in a moment.


Recall that when DNA in eukaryotic cells is transcribed, the initial product is an RNA strand called a primary transcript. In the case of protein-coding genes, these primary transcripts are called pre-mRNAs (pre-messenger RNAs), and they typically undergo processing to produce mature mRNA transcripts, which are later translated into proteins that perform particular functions within the cell [25]. Not all DNA is transcribed to RNA, not all RNA is translated into protein, and not all protein-coding genes are expressed in all cell types [18].

As if that weren’t already enough reason to conclude that the genome itself doesn’t convey the entire expressive picture, some RNA must undergo splicing and tagging in the cell before becoming a mature mRNA transcript ready to be translated into protein, and some transcripts even have alternative splicing patterns. These splice jobs are performed by protein-RNA macromolecular complexes known as small nuclear ribonucleoproteins, or simply snRNPs (pronounced “snurps”). Their purpose is to splice out sections of the primary transcript called introns, which are not expressed in the final product, leaving only the regions called exons, which are expressed [25].
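The intron/exon bookkeeping itself is simple to sketch, even though real splice-site recognition by snRNPs is vastly more involved. The sequence and coordinates below are invented for illustration; the intron begins with GU and ends with AG, as most spliceosomal introns do:

```python
# Toy sketch of the splicing step described above: excising an intron from a
# primary transcript so that only the exons remain in the mature mRNA.
# The sequence and exon coordinates are invented for illustration.

def splice(pre_mrna, exons):
    """Join exon regions, given as (start, end) half-open intervals."""
    return "".join(pre_mrna[start:end] for start, end in exons)

#            exon1   intron (GU...AG)  exon2
pre_mrna = "AUGGCC" + "GUAAGUCCAG" + "UUUGGCUAA"
exons = [(0, 6), (16, 25)]  # keep both exons, drop the intron between them

print(splice(pre_mrna, exons))  # AUGGCCUUUGGCUAA
```

Alternative splicing would correspond to handing the same `pre_mrna` a different `exons` list, yielding a different mature transcript from one and the same gene.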

Much as the aggregate of an organism’s genes is called its genome, the set of all RNA transcripts within a given cell or tissue of a given organism at a given time is called its transcriptome. Analyzing the transcriptomes of GE organisms and comparing them to those of closely-related organisms altered via other methods gives us an idea of how those modifications are really affecting gene expression.
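As a toy illustration of what such a transcriptome comparison amounts to (hypothetical gene names and expression values; real studies use replicated measurements and formal statistics rather than a bare fold-change cutoff):

```python
# Illustrative sketch of a transcriptome comparison: count the genes whose
# expression differs "substantially" (here, more than 2-fold; an arbitrary
# cutoff) between a modified line and its near-isogenic comparator.
# Gene names and expression values are hypothetical.
import math

def differentially_expressed(line_a, line_b, fold_cutoff=2.0):
    """Return genes whose |log2 fold change| exceeds log2(fold_cutoff)."""
    cutoff = math.log2(fold_cutoff)
    return sorted(
        gene for gene in line_a
        if abs(math.log2(line_a[gene] / line_b[gene])) > cutoff
    )

transgenic   = {"geneA": 100, "geneB": 210, "geneC": 95, "transgene": 500}
conventional = {"geneA": 105, "geneB": 100, "geneC": 90, "transgene": 1}

print(differentially_expressed(transgenic, conventional))
# ['geneB', 'transgene']
```

Comparing the lengths of such lists across breeding methods is, in miniature, how studies conclude that one method perturbs gene expression more or less than another.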

Other “Omics” Comparisons

If genomics involves analyzing all the genes in an organism’s genome, and transcriptomics involves analyzing all the RNA transcripts of a particular cell at a particular time, then you can probably infer that proteomics refers to the analysis of all the proteins expressed in a particular cell at a particular time [26]. Rather than looking at proteins individually, proteomics examines all of the proteins present at once. Once the individual proteins have been identified, researchers can study how they change over time, how they interact, and how they vary between different cells. Because proteins facilitate so many cellular processes, the proteome serves as a functional representation of the genome.

Similarly, metabolomics is the study of all the metabolites present in a cell, tissue, or organism [27]. These metabolites include the small-molecule end products and intermediates of various cellular processes, as well as hormones and some signaling molecules. In human metabolomics, molecules may be sub-classified as either endogenous or exogenous depending on whether or not they’re produced by the organism itself, or as xenometabolites in the case of foreign substances such as drugs [28],[29]. For plants, the terms primary and secondary metabolites are often used [30]. The former refers to metabolites involved in normal growth, development, and/or reproduction; the latter are not directly involved in those processes, but in many cases serve an ecological function of interest.

Like the transcriptome and proteome, the metabolome is dynamic; it changes over time. Consequently, transcriptomics, proteomics, and metabolomics are all important tools comprising the field known as functional genomics. A genome sequence is ostensibly just a parts list, whereas these tools permit insight into what the genome is actually doing. When it comes to applying these techniques to compare the rates of off-target changes across different methods of modification, one of the conclusions of Metzdorff et al. (2006) was that correct interpretation of the ‘omics’ data requires information about the range of natural variability of the crop plants under examination [31].

GE and Unintended Off-Target Changes: What Does the Evidence Actually Suggest?

Transcriptomic, proteomic, and metabolomic analyses, along with compositional and genomic comparisons, permit us to compare the rates of unintended off-target changes in GE organisms vs those arrived at via other breeding methods. Indeed, such measures have been used to test exactly that question, and the evidence suggests that transgenesis involves LESS change. That doesn’t mean a harmful off-target change is literally physically impossible, but it does make one less likely than with non-GMO methods. Here are some examples:

This study analyzed the transcriptome of drought-tolerant transgenic Arabidopsis thaliana plants via the use of microarrays. With the exception of the transgene itself (ABF3), which codes for a regulatory transcription factor involved in triggering the expression of other genes associated with drought response, the transcriptome revealed no significant differences in gene expression between the transgenic lines and the controls [32].

Here’s one which found that transgenesis resulted in fewer off-target changes to the transcriptome of wheat grain than did conventional cross-breeding [33].

The following meta-analysis incorporated studies with multiple “omics” comparisons. Ricroch et al. found that genetic engineering had less of an impact on plant gene expression and composition than conventional plant breeding. They also found that environmental factors such as field location, sampling time, and agricultural practices had more impact on outcomes than transgenesis did [34].

This metabolomic analysis of transgenic wheat did reveal some subtle differences in amino acid profile, as well as in maltose and sucrose concentrations, relative to conventional parental controls. However, these differences were well within the expected range of natural variation and were dwarfed by differences attributable to location, soil, weather, etc. [35].

The following excerpt is from a paper looking back on 20 years and over 80 studies of compositional comparison research [36]:

“It is concluded that suspect unintended compositional effects that could be caused by genetic modification have not materialized on the basis of this substantial literature.”

The authors also concluded the following:

“Our assessment is that there appears to be overwhelming evidence that transgenesis is less disruptive of crop composition compared with traditional breeding, which itself has a tremendous history of safety.”

The following proteomic study compared transgenic insect-resistant Bt cotton with conventional cotton using an enzyme-linked immunosorbent assay (ELISA). The authors found some subtle differences in the expression and/or interaction of 35 proteins, most of which were involved in photosynthetic pathways and/or carbohydrate transport and metabolism, along with some chaperone proteins involved in post-translational modification. However, there was no increase in toxic or allergenic proteins, and the authors concluded that the differences were minor enough not to constitute a sharp change to the proteome of transgenic cotton leaves [37].

This one found that gene expression differences between insect-resistant maize and conventional maize were best explained by natural variation and environmental factors [38].

Transgenesis has also been compared with marker-assisted backcross breeding (MAB), a form of selective backcrossing in which certain genetic markers are used to ascertain whether a plant possesses the desired trait, so that breeders need not wait until the plant is fully developed [52]. This speeds up the process. Using transcriptome comparisons between transgenic and MAB rice cultivars, the authors found that the transgenic rice had a whopping 40% fewer changes in gene expression, relative to the controls, than the rice arrived at via marker-assisted backcrossing [53].

Here is one in which comprehensive metabolomic comparisons were performed between transgenic potatoes and closely related cultivars. Aside from the metabolites that were already anticipated, the authors’ mass-spectrometric and chromatographic analyses found no significant metabolomic differences between the transgenic and conventional lines [39]. These are but a handful of the studies of this type that are out there. Here are some more: [54],[55],[56],[57].


Mutagenesis

Mutagenesis is a common method of modification in which ionizing radiation or mutagenic chemicals are used to increase the mutation rate of seeds so that desirable traits which randomly emerge can be artificially selected for in subsequent generations [40]. By US standards, plants developed in this way aren’t legally considered “GMOs,” and unlike genetically engineered seeds, they can be used in organic farming.

Given that the procedure’s entire purpose is to speed up mutations, it’s reasonable to wonder how their rates of off-target changes compare to those attributable to transgenic techniques.

The following study found that transgenesis resulted in an entire order of magnitude fewer off-target structural variations to the genome relative to mutagenesis. Using a TILLING microarray, the authors also found that, although rare, structural variants in transgenic varieties typically occurred directly adjacent to the points of transgene insertion, or occasionally at unlinked loci on different chromosomes [41]. (Note: I had initially included a section describing the premises of DNA microarrays and RNA-seq, but omitted it because it would have made this article far too long.)

“On average, the number of genes affected by structural variations in transgenic plants was one order of magnitude less than that of fast neutron mutants and two orders of magnitude less than the rates observed between cultivars.”

Another study found that transcriptome alteration was greater with mutagenic breeding than with transgenesis [51].

Despite this, seeds arrived at via mutagenesis undergo no safety evaluation or substantial equivalence testing whatsoever prior to commercialization. A coherent justification for this regulatory double standard has not been forthcoming.

Harmful and/or Undesired Results From Conventional Breeding

Regardless of whether they were genetically engineered, all foods contain substances that could be potentially hazardous in sufficiently high amounts. Unlike GE, however, non-GE methods of modification actually have resulted in foods with harmful unintended changes making it into the food supply.

For example, the kiwi fruit, which was brought about by conventional breeding, has been implicated in severe anaphylaxis in predisposed individuals [42].

Similarly, the conventionally bred Lenape potato was found in the late 1960s to contain dangerously elevated solanine levels, and was subsequently removed from the market [43],[44]. Ironically, a transgenic version of the Lenape potato was later shown in experiments to express substantially lower solanine levels [45].

More recently, a similar case arose with the established heritage potato variety Magnum Bonum in Sweden, which was likewise removed from the market [46].

Elevated psoralen levels in celery plants are another example in which high levels of a toxic compound resulted from traditional breeding [47]. These chemicals confer natural resistance to insect predation and make the plant more aesthetically appealing to consumers, but they are also irritants that can become toxic in higher amounts [48].

Cucurbitacins are a class of biochemical compounds produced by some plants as a defense against herbivores [49]. There have been cases in which conventionally bred zucchini squash varieties have produced such compounds in potentially dangerous quantities [50].

To be clear, there do also exist cases in which GE resulted in unintended off-target changes. However, as the comprehensive 2016 literature review by the National Academy of Sciences puts it:

“Because GE crops are regulated to a greater degree than are conventionally bred, non-GE crops, it is more likely that traits with potentially hazardous characteristics will not pass early developmental phases. For the same reason, it is also more likely that unintentional, potentially hazardous changes will be noticed before commercialization either by the breeding institution or by governmental regulatory agencies.”

The bottom line is that we humans have been modifying our food by one method or another for thousands of years. I am well aware that contrarians to the mainstream scientific position like to over-emphasize that GE is different, and strictly speaking, it is trivially true that GE is different, since every method of genetic alteration is, by definition, different from every other method: wide-cross hybridization is different from transgenesis, which is different from protoplast fusion, which is different from miRNA gene silencing, which is different from mutagenesis, which is different from polyploidy, and so on.

However, that argument is ultimately a red herring, because it ignores that the ways in which GE is different are its strengths, not its weaknesses. Overwhelming scientific evidence indicates that commercially available GE foods are at least as safe as their conventional counterparts, and that the process itself does not increase the likelihood of harmful unintended off-target changes relative to traditional methods of modification. The latter conclusion is supported by multiple converging lines of genomic, transcriptomic, proteomic, and metabolomic evidence, as well as by compositional equivalence studies. Moreover, because GE foods are far more stringently regulated, any potentially hazardous unintended changes are far more likely to be caught prior to commercialization than with non-GE foods.

Furthermore, vague pie-in-the-sky anti-GE arguments invoking potentially harmful future unknowns could be applied to virtually any technology, young or old. Such arguments fail to consider the potential consequences of rejecting the technology, and they tend to be incapable of generating testable hypotheses, which renders them unfalsifiable and therefore unscientific. In the instances where such arguments have been formulated well enough to generate testable hypotheses (such as the prediction that GE should result in greater changes to composition and gene expression), they have been demonstrated to be false.


– Credible Hulk


[1] Hulk, C. (2015). The International Scientific Consensus On Genetically Engineered Food Safety. The Credible Hulk. Retrieved 23 October 2017.

[2] Funk, C., & Rainie, L. (2015). Public and Scientists’ Views on Science and Society. Pew Research Center: Internet, Science & Tech. Retrieved 23 October 2017.

[3] Caudill, E. (2013). Intelligently designed : How creationists built the campaign against evolution. Urbana, Chicago, and Springfield: University of Illinois Press.

[4] Collomb, J. (2014). The Ideology of Climate Change Denial in the United States. European Journal Of American Studies, 9(1).

[5] Rubin, J. (2017). Opinion: Trump’s climate-change denial rattles U.S. businesses. Washington Post. Retrieved 23 October 2017.

[6] Cook, J., Oreskes, N., Doran, P. T., Anderegg, W. R. L., Verheggen, B., Maibach, E. W., . . . Rice, K. (2016). Consensus on consensus: A synthesis of consensus estimates on human-caused global warming. Environmental Research Letters, 11(4). doi:10.1088/1748-9326/11/4/048002

[7] Ruishalme, I. (2013). Is There a Consensus about Climate Change? Thoughtscapism. Retrieved 23 October 2017.

[8] Tafuri, Silvio & Martinelli, D & Prato, R & Germinario, C. (2011). [From the struggle for freedom to the denial of evidence: history of the anti-vaccination movements in Europe]. Annali di igiene : medicina preventiva e di comunità. 23. 93-9.

[9] History of Anti-vaccination Movements. History of Vaccines. (2017). Retrieved 24 October 2017.

[10] Theobald, D. (2010). A formal test of the theory of universal common ancestry. Nature, 465(7295), 219-222.

[11] Steel, M., & Penny, D. (2010). Origins of life: Common ancestry put to the test. Nature, 465(7295), 168-169.

[12] Weiss, M., Sousa, F., Mrnjavac, N., Neukirchen, S., Roettger, M., Nelson-Sathi, S., & Martin, W. (2016). The physiology and habitat of the last universal common ancestor. Nature Microbiology, 1(9), 16116.

[13] Watanabe, K., & Suzuki, T. (2008). Universal Genetic Code and its Natural Variations. Encyclopedia Of Life Sciences.

[14] Koonin, E., & Novozhilov, A. (2009). Origin and evolution of the genetic code: The universal enigma. IUBMB Life, 61(2), 99-111.

[15] Isenbarger, T., Carr, C., Johnson, S., Finney, M., Church, G., & Gilbert, W. et al. (2008). The Most Conserved Genome Segments for Life Detection on Earth and Other Planets. Origins Of Life And Evolution Of Biospheres, 38(6), 517-533.

[16] Bejerano, G. (2004). Ultraconserved Elements in the Human Genome. Science, 304(5675), 1321-1325.

[17] Turanov, A., Lobanov, A., Fomenko, D., Morrison, H., Sogin, M., & Klobutcher, L. et al. (2009). Genetic Code Supports Targeted Insertion of Two Amino Acids by One Codon. Science, 323(5911), 259-261.

[18] Gilbert, S., & Barresi, M. (2016). Developmental biology (6th ed.). Sunderland (Mass.): Sinauer.

[19] Fuentes, I., Stegemann, S., Golczyk, H., Karcher, D., & Bock, R. (2014). Horizontal genome transfer as an asexual path to the formation of new species. Nature, 511(7508), 232-235.

[20] Marine Biological Laboratory. (2015, February 3). Sea slug has taken genes from algae it eats, allowing it to photosynthesize like a plant. ScienceDaily. Retrieved October 23, 2017.

[21] Kyndt, T., Quispe, D., Zhai, H., Jarret, R., Ghislain, M., & Liu, Q. et al. (2015). The genome of cultivated sweet potato contains Agrobacterium T-DNAs with expressed genes: An example of a naturally transgenic food crop. Proceedings of the National Academy of Sciences, 112(18), 5844-5849.

[22] Gelvin, S. (2003). Agrobacterium-Mediated Plant Transformation: the Biology behind the “Gene-Jockeying” Tool. Microbiology And Molecular Biology Reviews, 67(1), 16-37.

[23] Committee on Identifying and Assessing Unintended Effects of Genetically Engineered Foods on Human Health. (2004). Safety of Genetically Engineered Foods (p. 49). Washington: National Academies Press.

[24] EFSA Panel on Genetically Modified Organisms (GMO); Scientific Opinion on Guidance for risk assessment of food and feed from genetically modified plants. EFSA Journal 2011;9(5): 2150. [37 pp.] doi:10.2903/j.efsa.2011.2150.

[25] Mithieux, S., & Weiss, A. (2005). Elastin. Fibrous Proteins: Coiled-Coils, Collagen And Elastomers, 437-461.

[26] Nature Education. (2017). Scitable. Retrieved 24 October 2017.

[27] Griffin, J., & Vidal-Puig, A. (2008). Current challenges in metabolomics for diabetes research: a vital functional genomic tool or just a ploy for gaining funding?. Physiological Genomics, 34(1), 1-5.

[28] Nordström, A., O’Maille, G., Qin, C., & Siuzdak, G. (2006). Nonlinear Data Alignment for UPLC−MS and HPLC−MS Based Metabolomics:  Quantitative Analysis of Endogenous and Exogenous Metabolites in Human Serum. Analytical Chemistry, 78(10), 3289-3295.

[29] Crockford, D., Maher, A., Ahmadi, K., Barrett, A., Plumb, R., Wilson, I., & Nicholson, J. (2008). 1H NMR and UPLC-MSE Statistical Heterospectroscopy: Characterization of Drug Metabolites (Xenometabolome) in Epidemiological Studies. Analytical Chemistry, 80(18), 6835-6844.

[30] Bentley, R. (1999). Secondary Metabolite Biosynthesis: The First Century. Critical Reviews In Biotechnology, 19(1), 1-40.

[31] Metzdorff, S., Kok, E., Knuthsen, P., & Pedersen, J. (2006). Evaluation of a non-Targeted “Omic” approach in the safety assessment of genetically modified plants. Plant Biol (stuttg), 8(5), 662-672. doi:10.1055/s-2006-924151

[32] Abdeen, A., Schnell, J., & Miki, B. (2010). Transcriptome analysis reveals absence of unintended effects in drought-tolerant transgenic plants overexpressing the transcription factor ABF3. BMC Genomics, 11(1), 69.

[33] Baudo, M., Lyons, R., Powers, S., Pastori, G., Edwards, K., Holdsworth, M., & Shewry, P. (2006). Transgenesis has less impact on the transcriptome of wheat grain than conventional breeding. Plant Biotechnology Journal, 4(4), 369-380.

[34] Ricroch, A. (2013). Assessment of GE food safety using ‘-omics’ techniques and long-term animal feeding studies. New Biotechnology, 30(4), 349-354.

[35] Herman, R. A., & Price, W. D. (2013). Unintended compositional changes in genetically modified (GM) crops: 20 years of research. Journal of agricultural and food chemistry, 61(48), 11695-11701.

[36] Anderson, J., Michno, J., Kono, T., Stec, A., Campbell, B., Curtin, S., & Stupar, R. (2016). Genomic variation and DNA repair associated with soybean transgenesis: a comparison to cultivars and mutagenized plants. BMC Biotechnology, 16(1).

[37] Wang, L., Wang, X., Jin, X., Jia, R., Huang, Q., Tan, Y., & Guo, A. (2015). Comparative proteomics of Bt-transgenic and non-transgenic cotton leaves. Proteome Science, 13(1).

[38] Coll, A., Nadal, A., Collado, R., Capellades, G., Kubista, M., Messeguer, J., & Pla, M. (2010). Natural variation explains most transcriptomic changes among maize plants of MON810 and comparable non-GM varieties subjected to two N-fertilization farming practices. Plant Molecular Biology, 73(3), 349-362.

[39] Catchpole, G., Beckmann, M., Enot, D., Mondhe, M., Zywicki, B., & Taylor, J. et al. (2005). Hierarchical metabolomics demonstrates substantial compositional similarity between genetically modified and conventional potato crops. Proceedings of the National Academy of Sciences, 102(40), 14458-14462.

[40] Brown, N. (2011). 20. Mutagenesis – PlantBreeding. Retrieved 24 October 2017.

[41] Anderson, J., Michno, J., Kono, T., Stec, A., Campbell, B., Curtin, S., & Stupar, R. (2016). Genomic variation and DNA repair associated with soybean transgenesis: a comparison to cultivars and mutagenized plants. BMC Biotechnology, 16(1).

[42] Kerzl, R., Simonowa, A., Ring, J., Ollert, M., & Mempel, M. (2007). Life-threatening anaphylaxis to kiwi fruit: Protective sublingual allergen immunotherapy effect persists even after discontinuation. The Journal of Allergy and Clinical Immunology, 119(2), 507-508.

[43] Akeley, R., Mills, W., Cunningham, C., & Watts, J. (1968). Lenape: A new potato variety high in solids and chipping quality. American Potato Journal, 45(4), 142-145.

[44] Zitnak, A., & Johnston, G. (1970). Glycoalkaloid content of B5141-6 potatoes. American Potato Journal, 47(7), 256-260.

[45] McCue, K., Allen, P., Rockhold, D., Maccree, M., Belknap, W., & Shephard, L. et al. (2003). Reduction of Total Steroidal Glycoalkaloids in Potato Tubers Using Antisense Constructs of a Gene Encoding A Solanidine Glucosyl Transferase. Acta Horticulturae, (619), 77-86.

[46] Hellenäs, K., Branzell, C., Johnsson, H., & Slanina, P. (1995). High levels of glycoalkaloids in the established Swedish potato variety Magnum Bonum. Journal of the Science of Food and Agriculture, 68(2), 249-255.

[47] Beier, R. (1990). Natural Pesticides and Bioactive Components in Foods. Reviews Of Environmental Contamination And Toxicology, 47-137.

[48] Beier, R., & Oertli, E. (1983). Psoralen and other linear furocoumarins as phytoalexins in celery. Phytochemistry, 22(11), 2595-2597.

[49] Ferguson, J., Fischer, D., & Metcalf, R. (1983). A report of cucurbitacin poisonings in humans. Cucurbit Genetics Cooperative Report, 6, 73-74.

[50] Rymal, K., Chambliss, O., Bond, M., & Smith, D. (1984). Squash containing toxic cucurbitacin compounds occurring in California and Alabama. Journal of Food Protection, 47(4), 270-271. doi:10.4315/0362-028X-47.4.270

[51] Batista, R., Saibo, N., Lourenço, T., & Oliveira, M. M. (2008). Microarray analyses reveal that plant mutagenesis may induce more transcriptomic changes than transgene insertion. Proceedings of the National Academy of Sciences, 105(9), 3640-3645.

[52] Collard, B. C. Y., Jahufer, M. Z. Z., Brouwer, J. B., & Pang, E. C. K. (2005). An introduction to markers, quantitative trait loci (QTL) mapping and marker-assisted selection for crop improvement: the basic concepts. Euphytica, 142(1-2), 169-196.

[53] Gao, L., Cao, Y., Xia, Z., Jiang, G., Liu, G., Zhang, W., & Zhai, W. (2013). Do transgenesis and marker-assisted backcross breeding produce substantially equivalent plants? A comparative study of transgenic and backcross rice carrying bacterial blight resistant gene Xa21. BMC Genomics, 14(1), 738.

[54] Baudo, M. M., Lyons, R., Powers, S., Pastori, G. M., Edwards, K. J., Holdsworth, M. J., & Shewry, P. R. (2006). Transgenesis has less impact on the transcriptome of wheat grain than conventional breeding. Plant Biotechnology Journal, 4(4), 369-380.

[55] Gregersen, P. L., Brinch-Pedersen, H., & Holm, P. B. (2005). A microarray-based comparative analysis of gene expression profiles during grain development in transgenic and wild type wheat. Transgenic Research, 14(6), 887-905.

[56] Coll, A., Nadal, A., Palaudelmas, M., Messeguer, J., Melé, E., Puigdomenech, P., & Pla, M. (2008). Lack of repeatable differential expression patterns between MON810 and comparable commercial varieties of maize. Plant Molecular Biology, 68(1-2), 105-117.

[57] Harrigan, G. G., Lundry, D., Drury, S., Berman, K., Riordan, S. G., Nemeth, M. A., … & Glenn, K. C. (2010). Natural variation in crop composition and the impact of transgenesis. Nature Biotechnology, 28(5), 402-404.


Scientific Consensus isn’t a “Part” of the Scientific Method: it’s a Consequence of it

Although conceptually simple, the term “scientific consensus” is often misused and misunderstood. It can get confused with appeals to popular opinion or erroneously conflated with “consensus” in the colloquial sense of the word. These misunderstandings can lead to things like opinion polls, often dominated by unqualified individuals, being misconstrued as evidence that no scientific consensus exists on some topic for which it clearly does, or that it leans toward a different conclusion than it actually does. In some cases, the very concept invokes resentment or even retaliatory commentary from people whose views are threatened by its implications. The purpose of this article is to clarify the concept that the term scientific consensus is meant to refer to, and to address some of the arguments commonly leveled against it.

Defining Scientific Consensus

Just as the term “theory” has a different meaning in science than in its colloquial usage, the term scientific consensus means something different than “consensus” in the usual colloquial sense. The latter typically refers to a popular opinion, and needn’t necessarily be based on knowledge or evidence. A scientific consensus, on the other hand, is by definition an evidence-based consensus. A convergence of the weight of existing evidence is a prerequisite that distinguishes a knowledge-based scientific consensus from mere agreement. This is critical, because the scientific enterprise is essentially a meritocracy. As a result, it doesn’t matter if a few contrarians on the fringe disagree with the conclusions unless they can marshal evidential justification of comparable weight or explain the existing data better. The weight of the evidence is paramount.

In a nutshell, a consensus in science refers to a convergence of many independent lines of high-quality evidence leading the majority of active scientists in a given field to arrive at the same and/or complementary conclusions. It’s not something any scientist necessarily sets out to join as a goal, but rather something they discover they’re a member of because that’s where their research results led them. The process by which scientific consensus emerges over time can be complicated and tends to vary from case to case, but a consensus is likely to exist whenever scientific knowledge is the best explanation for it. Furthermore, scientific knowledge is the best explanation for a consensus when the following definitional criteria are satisfied:

Consilience of Evidence: The consensus should be based on varied lines of evidence that independently converge on the same conclusion or set of conclusions [1]. The scientists and their results needn’t agree on every minute detail, and the converging data will typically fall within a set of error bars, but they will point to the same general conclusion even if debates persist over the minutiae. This may involve contributions from multiple scientific sub-specialties, each providing different pieces of a broader understanding or set of conclusions.

For instance, the scientific consensus in climate science incorporates evidence and expert knowledge from meteorology, geology, geophysics, geochemistry, atmospheric physics, atmospheric chemistry, community and global ecology, astronomy, planetary science, and even stellar astrophysics. Scientists from different specialties study different aspects of the issue and arrive at results that each form a piece of a puzzle, all consistent with the conclusion that the recent warming trend has been largely the result of human activities [38],[39],[40],[41],[42].

Similarly, the scientific consensus on the individual and social benefits of vaccination combines knowledge and evidence from fields such as microbiology, immunology, virology, epidemiology, systems biology, molecular biology, biochemistry and more. Knowledge comes together from these disparate disciplines to create vaccines that significantly reduce the likelihood of their recipients contracting the diseases they are designed to protect against, diseases whose risks far exceed the minuscule risks of adverse reactions to the vaccines [36],[37],[43].

Social Calibration: The experts involved are mutually committed to employing the same high standards of evidence and formalisms, and have good justifications for those standards [1]. Nobody disputes that carefully collected and reproducible evidence is key in science; the problem is that evidence doesn’t talk. It has to be interpreted by human scientists. The Social Calibration criterion has to do with what the scientific community as a whole accepts as evidence, how they decide what is relevant and significant, and how individual scientists persuade their peers that they are correct.

One of the reasons that certain fringe disciplines are viewed as pseudosciences by mainstream scientists is that they operate under lower and/or inconsistent standards. For example, one of the most important methods of ascertaining the safety and efficacy of a given medical intervention in conventional science-based medicine is the performance of a large randomized, double-blinded, placebo-controlled clinical trial [34]. In contrast, many so-called “alternative” modalities are satisfied to rely on a modality’s ancientness (whether real or merely assumed), weak or non-replicable studies, and/or unverifiable personal testimonies that may or may not reflect how most patients would be affected [32],[33]. In some cases, alternative practitioners persist even after substantial clinical evidence directly contradicts the premises underlying the modality, as is the case with homeopathy [47]. That’s not to say there are no exceptions, but the overarching pattern is that the standards of evidence agreed upon by mainstream medical researchers are different from the standards deemed acceptable in alternative medicine. The agreed-upon standards of evidence in scientific medicine represent what is being referred to here as social calibration. A counter-example would be something like ghost hunting, in which there are no consistent standards as to what is supposed to qualify as evidence for a ghost [48].

Social Diversity: This criterion simply means that the evidence and analyses comprising the scientific consensus should come from varied sources by scientists of varied backgrounds and cultures in order to avoid any systematic bias in the scientific literature [1]. For example, one of the arguments against the international scientific consensus on genetically engineered food safety is based on the misconception that seed companies like Monsanto are the only ones doing the research, or that they dictate who does. That’s not actually true, but if it were, then the evidence would be falling short on this criterion. Instead, the GE food consensus is supported by myriad studies by scientists from different ethnic, cultural and economic backgrounds with varied funding sources from all around the world, and by position statements from the most prestigious scientific organizations on the planet [27],[28],[29],[30]. The Social Diversity criterion ensures that a consensus is not a product of groupthink, politics, financial incentives, ideological motives, or shared cultural values.

Via John Garrett of Skeptical Science for Denial 101

Scientific Consensus ≠ Unanimity

Notice that the aforementioned criteria defining scientific consensus do not preclude normative contestation or outlier viewpoints within the scientific community. That’s actually quite normal and usually fairly benign. Absolute 100% unanimity among experts is not a prerequisite for a consilience of evidence supporting a particular conclusion.

Not All Disagreements are of Equal Merit

However, there are cases in which that normative contestation and those areas of scientific uncertainty become exaggerated by groups with ulterior (unscientific) motivations (financial, political, or ideological) in order to argue against the reliability of extant scientific knowledge and/or to obfuscate public understanding of it. This phenomenon of special interest groups combating scientific consensus is well-documented on topics ranging from the risks of smoking tobacco and anthropogenic global warming to the safety of GMO foods and conventional vaccine schedules [2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15],[16],[17],[19].

With a few exceptions, however, the contrarians in these cases are attempting to cast doubt on the existence, strength, and/or legitimacy of the scientific consensus on a particular topic, and are not necessarily contesting the very concept of scientific consensus itself. Rather than claiming that there can be no such thing as scientific consensus, most of them argue that no such consensus exists with respect to the particular topic on which they disagree, or that it exists but is simply unfounded. They insert false balance, exploiting the everyday uncertainties and tentativeness inherent to all scientific knowledge and amplifying them on topics they wish to portray as more contentious than the evidence actually suggests. This is effective because acceptance that a scientific consensus exists has been shown to function as a “gateway belief” to accepting that a set of propositions is true, and manufacturing the appearance of continued legitimate scientific controversy can obscure public perception of its existence [20],[21].

However, there are some exceptions whereby contrarians attempt to undermine the very concept of scientific consensus itself (i.e. here). As you may have guessed, the claimants in such cases invariably misrepresent what scientific consensus actually is. They try to portray it as analogous to arriving at a conclusion by way of an opinion poll, which is an example of the fallacy of equivocation [18]. They will typically argue that what matters is the scientific evidence, which, although true, ignores the fact that a consilience of evidence is already a non-negotiable prerequisite for scientific consensus [1]. It is therefore nonsensical to speak of evidence and scientific consensus as if the latter were not contingent upon the former, let alone to imply that they are mutually exclusive. Again, this is no more legitimate than arguing against scientific theories by equivocating to a colloquial definition of “theory.”

Based on the criteria described earlier (consilience of evidence, social calibration, and social diversity), it’s easy to see how scientific consensus will unavoidably emerge on any question for which repeated applications of the scientific method by a diverse group of scientists result in a body of evidence whose results lean heavily towards certain conclusions and away from others. Although it is possible to quantitatively analyze the nodes and connections of various scientific citation networks and how they evolve over time with respect to a given topic, the specific pathway by which scientific consensus emerges tends to vary from case to case.

A paper entitled The Temporal Structure of Scientific Consensus Formation, by Shwed and Bearman, explains some methods by which such scientific citation networks can be analyzed to ascertain degrees of scientific consensus and to partition nodes into salient sub-communities [2]. There are both pros and cons to this approach, but such efforts are designed to minimize dependence on the discretion of individual scientists in the detection of scientific consensus. When a new topic of study first arises, different scientists typically end up in different camps which approach key questions differently, explore different initial hunches, and form different citation networks as more and more studies are produced. As a scientific consensus begins to form, the lines of distinction between the various camps begin to dissolve, and members of each start to converge on certain areas of agreement.
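To make the underlying idea concrete, here is a minimal toy sketch of how one can quantify how “camp-like” a citation network is using Newman modularity, the kind of quantity that modularity-based analyses of consensus formation track over time. This is an illustrative simplification, not a reproduction of Shwed and Bearman’s actual pipeline; the papers, edges, and camp assignments below are entirely invented.

```python
# Toy citation network: two hypothetical camps of papers that mostly
# cite within themselves, plus one cross-camp "bridge" citation.
# (Edges treated as undirected for simplicity.)
edges = [
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"), ("a3", "a4"),  # camp A
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"), ("b3", "b4"),  # camp B
    ("a4", "b4"),                                            # lone bridge
]
camps = {f"a{i}": "A" for i in range(1, 5)}
camps.update({f"b{i}": "B" for i in range(1, 5)})

m = len(edges)  # total number of edges
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Newman modularity of the partition: Q = sum_c [ e_c/m - (deg_c / 2m)^2 ],
# where e_c is the number of within-camp edges and deg_c the total degree
# of camp c's nodes.
Q = 0.0
for camp in {"A", "B"}:
    e_c = sum(1 for u, v in edges if camps[u] == camp and camps[v] == camp)
    deg_c = sum(d for node, d in degree.items() if camps[node] == camp)
    Q += e_c / m - (deg_c / (2 * m)) ** 2

# High Q means papers cite mostly within their own camp (contestation).
# As cross-camp citations accumulate and Q falls toward zero, the camps
# are merging -- the network-level signature of a consensus forming.
print(round(Q, 3))  # two tight camps with one bridge give Q ≈ 0.389
```

Tracking this single number across successive time slices of a literature is one way to watch the “lines of distinction between the various camps” dissolve without relying on any individual scientist’s judgment.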

Somewhat counter-intuitively, the authors also discovered that when scientific consensus is achieved, the aforementioned scientific citation networks tend to grow in size, as does the total rate of literature output on the topic. At least, that was the case in the instances they analyzed. The relevance of this observation is that any consensus arrived at on the basis of weak or faulty evidence will tend to dissolve quickly as interest in (and scrutiny of) the topic increases, whereas conclusions based on stronger evidence will tend to open up follow-up questions whose study results are complementary to them. This is another key to understanding why scientific consensus represents not the death of scientific inquiry, but rather a scaffolding on which subsequent inquiry can build and grow. It simply represents what we’ve learned so far about some aspect of the universe. In this way, scientific consensus is not so much a final point of arrival as a launching point for further inquiry to add to and refine our current knowledge. The authors also identified three trajectories along which scientific consensus emerges, which they characterize as “flat, spiral, and cyclic,” but I’ll leave that for readers compelled to read the original paper [2].

If the scientific consensus is wrong on some topic, then the subsequent exploration of additional questions derived from it should reveal that. There’s a reason why we refer to science as a self-correcting enterprise. It’s not just a catch phrase. Efforts to better understand the universe must build upon existing knowledge. Scientific consensus helps shape the discussion and guide resource allocation with respect to tangential and/or follow-up questions within a particular sub-discipline. Without it, we would simply keep spinning our wheels by re-establishing the same conclusions over and over again without ever attempting to build on that foundation and generate new knowledge.

Why Scientific Consensus Matters

Although often colored by personal values, biases, competing motives, and desires, humans generally make decisions based on what they perceive to be true. This is true at both the individual and group levels. Not everyone can be an expert in a scientific discipline, and nobody can be an expert in all of them. Consequently, people routinely have to assess what is likely to be true in areas in which they are not experts, and make decisions based on that assessment. Scientific consensus represents the most reliably accurate knowledge available to human beings on a given topic at any given time. It’s far from infallible, but then again, so is every other epistemological framework available to humans (indeed, even more so). To reject it on the grounds that it is not infallible, in favor of even less reliable approaches to knowledge, would be an example of the Nirvana Fallacy [31]. The best available option, even if imperfect, is nevertheless still the best available option. Scientific consensus also serves as a launching point guiding further scientific study of related questions, and helps facilitate the generation and accumulation of new knowledge.

Detecting Scientific Consensus

I think it’s a fairly safe assumption that analyzing scientific citation networks with sophisticated algorithms and statistical methods is not something the average person is likely to do for every scientific claim they stumble upon. It’s not the be-all and end-all anyway, because it only shows how sub-communities of scientists and their published work converge over time. It does nothing to adjudicate the quality of individual studies within a citation network or the reliability of their conclusions. Nor does it distinguish between cases where a cited work’s findings are being used as supporting evidence and cases where a cited work’s conclusions are being challenged. It’s useful, but it’s not a replacement for actual human experts capable of summarizing the state of affairs in their fields of specialization. Ultimately, becoming an expert in a particular field would be the ideal way to equip oneself to competently assess the current state of the science within it, but that’s not feasible for most people, and nobody is an expert on every topic.

Systematic Reviews as Proxies

Fortunately, however, there are other proxies one can look for to get a ballpark idea of the degree of scientific consensus (or lack thereof) on a particular topic. For example, on many thoroughly studied topics there exist systematic reviews and/or meta-analyses which examine many studies at a time in order to assess what is implied by the weight of the evidence. A systematic review seeks to answer a specific research question by summarizing all the available scientific literature fitting a pre-specified set of eligibility criteria; a meta-analysis seeks to use statistical methods to summarize and analyze the results of such studies. Systematic reviews can vary widely in quality just like other types of studies. You can get an idea of what a good systematic review should entail and how to read one here [22],[23],[35],[44],[45].
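The statistical core of a meta-analysis can be illustrated with a short sketch. The following shows only the simplest fixed-effect inverse-variance pooling; the effect sizes and standard errors are invented for illustration, and real meta-analyses additionally assess heterogeneity, publication bias, and study quality.

```python
import math

# (effect_size, standard_error) from four hypothetical studies.
studies = [(0.42, 0.10), (0.35, 0.15), (0.50, 0.08), (0.30, 0.20)]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se**2 for _, se in studies]

# Weighted average of the effect sizes = pooled estimate.
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate; note it is smaller than that
# of any single study -- this is why pooling sharpens the picture.
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

This is the basic sense in which a meta-analysis “assesses what is implied by the weight of the evidence”: each study contributes in proportion to its precision, and the combined estimate is more precise than any individual study.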

Position Statements as Proxies

Alternatively, many reputable scientific organizations will put together position statements on certain topics, which can be a useful proxy for ascertaining the degree of scientific consensus that exists for a given topic. If the majority of prestigious organizations have arrived at similar conclusions, then the chances are that there is a fairly strong scientific consensus on the subject. Obviously this is an imperfect proxy, because there are also front organizations which masquerade as objective scientific organizations, but which are really serving some other agenda, and because it doesn’t provide a clear view of the evidence upon which their conclusions are based.

Other Proxies

It’s advisable to avoid relying too heavily on petitions or surveys of scientists’ opinions as a proxy for or against the existence of a scientific consensus, particularly on topics that tend to be controversial in public discourse. It’s not that they can’t ever be done in such a way that they could convey useful information, but rather that they’re too easy to screw up, or to be manipulated into conveying misleading information. In fact, that’s a common tactic used by people whose goal it is to obfuscate public understanding by disputing the existence of a scientific consensus on particular topics where it exists. They accomplish this by cobbling together signatures and/or statements from people whose views comport with theirs, but whose qualifications are often tangential to the topic under discussion, and/or whose opinions represent a tiny minority of researchers, and are not well-supported by the weight of the evidence in the peer-reviewed literature.

For example, the debunked Oregon Petition Project was an attempt to obscure the weight of the scientific consensus on human-caused climate change [24]. A document assembled by the Discovery Institute which boasted of 100+ scientists who reject the theory of evolution was humorously rebutted by the National Center for Science Education with Project Steve: a list composed exclusively of scientists named Steve who accept evolution, which nevertheless dwarfed the Discovery Institute’s list [25]. Similarly, anti-GMO campaigners have written things such as the I-SIS letter as an attempt to sow uncertainty and doubt about the mainstream scientific consensus position on the safety of genetically engineered food crops [26],[27]. HIV/AIDS denialists have also attempted similar tactics [46].

One possible exception to this rule of thumb would be a survey which groups the participating scientists according to the degree to which their area of specialty pertains to the subject under discussion, so that one can see whether the percentage of agreement increases the closer the areas of expertise get to the specific topic. Even then it would have to be based on a representative sample of each sub-category of scientists, and I wouldn’t recommend relying on it as anything more than a complementary proxy with which to cross-corroborate other signs of an extant scientific consensus.
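As a rough sketch of what such an expertise-stratified readout might look like in practice (all respondents and numbers below are invented for illustration):

```python
from collections import defaultdict

# (specialty_relevance_rank, agrees_with_consensus) per hypothetical
# respondent; rank 0 = specialty most relevant to the topic.
responses = [
    (0, True), (0, True), (0, True), (0, False),   # rank 0: most relevant
    (1, True), (1, True), (1, False), (1, False),  # rank 1: adjacent field
    (2, True), (2, False), (2, False), (2, False), # rank 2: unrelated field
]

tallies = defaultdict(lambda: [0, 0])  # rank -> [agree_count, total]
for rank, agrees in responses:
    tallies[rank][0] += int(agrees)
    tallies[rank][1] += 1

# Agreement rate per relevance rank, ordered from most to least relevant.
rates = {rank: agree / total for rank, (agree, total) in sorted(tallies.items())}
```

If a genuine consensus exists, the signature to look for is agreement rising monotonically as specialty relevance increases (here the invented data give 0.75, 0.5, 0.25); a petition that lumps all signatories together erases exactly this information.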

Above all, avoid relying on unsourced YouTube conspiracy videos, opinionated people with no relevant scientific education, blogs and other websites that make sensational claims for which they don’t cite credible research, and fake experts whose claims are totally inconsistent with the peer-reviewed scientific literature.

It’s perfectly fine to use a video medium to learn about science; just understand that there is no vetting process whatsoever, so content creators can say whatever they want with impunity. University lectures are usually fine (and recommended), as are tutorial videos such as those from Khan Academy, and any videos which cite credible sources in their descriptions. This should go without saying, but I’m including it for the sake of completeness.


Scientific consensus is not a part of the scientific method so much as it is a consequence of it. It inevitably arises whenever a large body of scientific literature accumulates that points towards similar conclusions. Typically, people who argue otherwise are equivocating, either misunderstanding or deliberately misrepresenting the meaning of the term. Scientific consensus is characterized by the co-existence of a consilience of evidence, social calibration, and social diversity, and although not infallible, it nevertheless represents the best knowledge currently available on a given scientific question at a given time. Furthermore, it is instrumental to the generation and accumulation of new knowledge in that it directs researchers toward complementary follow-up questions whose exploration allows humankind to build upon previous knowledge.


[1] Miller, B. (2013). When is consensus knowledge based? Distinguishing shared knowledge from mere agreement. Synthese, 190(7), 1293-1316.

[2] Shwed, U., & Bearman, P. S. (2010). The temporal structure of scientific consensus formation. American Sociological Review, 75(6), 817-840.

[3] McCright, A. M., & Dunlap, R. E. (2000). Challenging global warming as a social problem: An analysis of the conservative movement’s counter-claims. Social Problems, 47(4), 499-522.

[4] Proctor, R. N. (2012). The history of the discovery of the cigarette–lung cancer link: evidentiary traditions, corporate denial, global toll. Tobacco Control, 21(2), 87-91.

[5] Lopipero, P. A., & Bero, L. A. (2006). Tobacco interests or the public interest: 20 years of industry strategies to undermine airline smoking restrictions. Tobacco Control, 15(4), 323.

[6] Schurman, R. (2004). Fighting Frankenfoods: Industry opportunity structures and the efficacy of the anti-biotech movement in Western Europe. Social Problems, 51(2), 243-268.

[7] Wales, C., & Mythen, G. (2002). Risky discourses: the politics of GM foods. Environmental Politics, 11(2), 121-144.

[8] Anti-GMO Activists Are the Ones Practicing Tobacco Science. (2015). Food and Farm Discussion Lab. Retrieved 4 August 2017.

[9] Boykoff, M. T., & Boykoff, J. M. (2004). Balance as bias: global warming and the US prestige press. Global Environmental Change, 14, 125-136.

[10] Dunlap, R. E., & McCright, A. M. (2008). A widening gap: Republican and Democratic views on climate change.

[11] McCright, A. M., & Dunlap, R. E. (2011). The politicization of climate change and polarization in the American public’s views of global warming, 2001–2010. The Sociological Quarterly, 52, 155-194.

[12] Ruskin, G. GMO Labeling Movement Funded by Anti-Vaxxers. (2017). American Council on Science and Health. Retrieved 4 August 2017.

[13] The Anti-Vaccine And Anti-GMO Movements Are Inextricably Linked And Cause Preventable Suffering. (2017). Retrieved 4 August 2017.

[14] Schick, S. F., & Glantz, S. A. (2007). Old ways, new means: tobacco industry funding of academic and private sector scientists since the Master Settlement Agreement. Tobacco Control, 16(3), 157-164.

[15] Kata, A. (2012). Anti-vaccine activists, Web 2.0, and the postmodern paradigm – an overview of tactics and tropes used online by the anti-vaccination movement. Vaccine, 30(25), 3778.

[16] Offit, P. A. (2015). Deadly Choices: How the Anti-Vaccine Movement Threatens Us All. Basic Books.

[17] Jolley, D., & Douglas, K. M. (2014). The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS ONE, 9(2), e89177.

[18] Equivocation. (2017). Retrieved 5 August 2017.

[19] Something About Pots and Kettles. (2015). The Skeptical Beard. Retrieved 5 August 2017.

[20] van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., & Maibach, E. W. (2015). The scientific consensus on climate change as a gateway belief: Experimental evidence. PLoS ONE, 10(2), e0118489.

[21] Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2013). The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change, 3(4), 399.

[22] Systematic reviews and meta-analyses: a step-by-step guide. (2017). Retrieved 7 August 2017.

[23] Uman, L. S. (2011). Systematic reviews and meta-analyses. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 20(1), 57-59.

[24] 30,000 Scientists Reject Anthropogenic Climate Change? (2016). Retrieved 7 August 2017.

[25] Project Steve | NCSE. (2017). Retrieved 7 August 2017.

[26] Scientists Declare No Consensus on GMO Safety. (2017). Retrieved 7 August 2017.

[27] Hulk, C. (2015). The International Scientific Consensus On Genetically Engineered Food Safety. The Credible Hulk. Retrieved 7 August 2017.

[28] Sanchez, M. A. (2015). Conflict of interests and evidence base for GM crops food/feed safety research. Nature Biotechnology, 33(2), 135-137.

[29] GENERA. (2014). Source shows half of GMO research is independent | Ag. Retrieved 7 August 2017.

[30] (2017). Retrieved 7 August 2017.

[31] Man, F. (2016). The nirvana fallacy: An imperfect solution is often better than no solution. The Logic of Science. Retrieved 8 August 2017.

[32] We Should Abandon the Concept of “Alternative Medicine”. (2015). Science-Based Medicine. Retrieved 8 August 2017.

[33] Hunt, K., & Ernst, E. (2009). Evidence-based practice in British complementary and alternative medicine: double standards? Journal of Health Services Research & Policy, 14(4), 219-223.

[34] Sibbald, B., & Roland, M. (1998). Understanding controlled trials: Why are randomised controlled trials important? BMJ, 316(7126), 201.

[35] Greenhalgh, T. (1997). Papers that summarise other papers (systematic reviews and meta-analyses). BMJ, 315(7109), 672.

[36] Graphic proof that vaccines work (with sources) – Isabella B. (2015). Medium. Retrieved 8 August 2017.

[37] Vaccines work. Period. (2013). Science-Based Medicine. Retrieved 8 August 2017.

[38] Hulk, C. (2017). No, Solar Variations Can’t Account for the Current Global Warming Trend. Here’s Why. The Credible Hulk. Retrieved 8 August 2017.

[39] Cook, J., Oreskes, N., Doran, P. T., Anderegg, W. R., Verheggen, B., Maibach, E. W., … & Nuccitelli, D. (2016). Consensus on consensus: a synthesis of consensus estimates on human-caused global warming. Environmental Research Letters, 11(4), 048002.

[40] Oreskes, N. (2004). The scientific consensus on climate change. Science, 306(5702), 1686.

[41] Doran, P. T., & Zimmerman, M. K. (2009). Examining the scientific consensus on climate change. Eos, Transactions American Geophysical Union, 90(3), 22-23.

[42] Is There a Consensus about Climate Change? (2015). Thoughtscapism. Retrieved 8 August 2017.

[43] Comparison of Effects of Diseases and Vaccines – Canadian Immunization Guide. (2012). Retrieved 8 August 2017.

[44] Shea, B. J., Grimshaw, J. M., Wells, G. A., Boers, M., Andersson, N., Hamel, C., … Bouter, L. M. (2007). Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology, 7, 10.

[45] Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P., … & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Medicine, 6(7), e1000100.

[46] Schüklenk, U. (2004). Professional responsibilities of biomedical scientists in public discourse. Journal of Medical Ethics, 30(1), 53-60.

[47] Hulk, C. (2015). Money For Nothing: Why Homeopathy Is Still Pseudoscientific Nonsense That Does Not Work. The Credible Hulk. Retrieved 13 August 2017.

[48] Ghost-Hunting Mistakes: Science and Pseudoscience in Ghost Investigations – CSI. (2016). Retrieved 13 August 2017.


Why The Asbestos Gambit Fails

People who oppose one or more areas of mainstream science have developed a wide variety of creative ways of rationalizing their rejection of scientific evidence and scientific consensus. Realizing that they cannot rebut a particular scientific idea on the basis of the evidence, some of them will instead resort to attacking the reliability of scientific knowledge more generally. A popular method of doing so is the Asbestos Gambit. The argument is that the story of asbestos implies that areas of strong scientific consensus can’t be trusted. The purpose of this article is to examine the history of asbestos use and the evolution of our knowledge of the health dangers it presents, and to explain why the Asbestos Gambit is a terrible argument on multiple levels.

Asbestos and its Hazards

Asbestos is the generic name for a set of 6 silicate mineral types which have been utilized by human cultures as far back as 5,000 years ago [1]. These 6 types include 5 minerals in the amphibole category: actinolite, amosite (aka brown asbestos), anthophyllite, crocidolite (aka blue asbestos), and tremolite, as well as chrysotile (aka white asbestos), which falls under the serpentine category, and is the form most commonly used in walls, ceilings and floors of homes and businesses in the US [1].

Although some of these types are likely more hazardous than others, all 6 types are currently classified as human carcinogens by the EPA, U.S. Department of Health and Human Services, and the International Agency for Research on Cancer [2],[3],[4],[5]. Mesothelioma in particular is a relatively rare type of malignant pleural cancer associated almost exclusively with asbestos exposure, and asbestos has also been implicated in an increased risk of a chronic inflammatory lung disease called asbestosis [6].

Consequently, domestic usage of asbestos has decreased considerably in most developed countries since the early 1970s [7]. Its use has been banned in countries such as Australia and the UK, where the asbestos-related death tolls were particularly high [30],[31]. The US has no official asbestos ban, but the Clean Air Act of 1970, the Toxic Substances Control Act (TSCA) of 1976, and the Asbestos Hazard Emergency Response Act (AHERA) of 1986 in aggregate provide the EPA with the authority to regulate asbestos use, restrict its use and disposal, and establish inspection and removal standards for asbestos in schools [26],[27],[28]. Here is a list of countries with full asbestos bans in place [29].

What is the Asbestos Gambit?

Unfortunately, the story of asbestos has occasionally been co-opted by ideologues and reframed as an argument against the reliability of scientific knowledge, deployed as an excuse to reject scientific consensus whenever its implications conflict with their personal agenda. The argument is essentially that the story of asbestos implies that science is unreliable and cannot be trusted on the grounds that scientists said asbestos was safe when it actually wasn’t. This is curious, because asbestos is a set of naturally occurring substances whose adverse health effects were unknown prior to the modern scientific enterprise, and were only discovered via the scientific method itself. The idea is a variant of the old “science has been wrong before, therefore we should ignore its conclusions even now” argument: a common trope utilized by promoters of pseudoscience and critics of mainstream science in general.

There are several reasons why the reasoning underlying the Asbestos Gambit is unsound. Even if it were the case that scientists got it wrong with asbestos, the self-correcting nature of science is among its strengths, not its weaknesses. And when scientific knowledge is wrong, the only reason we ever find out is thanks to newer science. That means that the claimant’s major premise, that “science was wrong,” takes for granted something we only know thanks to science, which, according to the claimant’s own conclusion, cannot be relied upon. The argument is practically self-refuting. In this case, however, the history itself is being misrepresented. The argument implies that there was once a strong scientific consensus that asbestos was perfectly safe, and that only subsequently did people realize the science had been wrong. The claimant then argues that this constitutes a good reason to reject well-supported scientific theories. Like many other examples commonly used to advance this argument, it relies on a historically revisionist narrative. The actual history of how scientific knowledge evolved with respect to mesothelioma (and the health risks of asbestos exposure more generally) is long and complicated.

How the Dangers of Asbestos were Discovered

Contrary to popular belief, there is no unambiguous support in the primary source material for the idea that people in the ancient world knew how hazardous asbestos was. It’s possible that this notion arose in hindsight after people began to realize its health effects in the 20th century, but the evidence commonly cited for it is weak and vague at best.

For instance, it is frequently claimed that the Roman naturalist Pliny the Elder reported adverse health effects among slaves who wove asbestos into fabrics, but the evidence for this is extremely weak and has been contested by some scholars on the grounds that the primary sources provide no support for it [15],[35]. Pliny mentions asbestos three times in his 37-volume Natural History, but none of those passages mentions adverse health effects from it [16],[17],[18]. The line “disease of slaves” quoted on many asbestos-related websites actually comes from a passage that never even mentions asbestos [19].

Another often quoted passage references workers using “masks of loose bladder-skin, in order to avoid inhaling the dust, which is highly pernicious” [22],[23]. However, this section (which he got from the works of Dioscorides) was about workers manufacturing products from minium (lead(II,IV) oxide, aka red lead), and makes no mention of asbestos [22],[24]. In fact, if anything, Pliny’s account in Book 36, chapter 31, suggests that he believed asbestos to have healing properties, even going as far as to say that it “effectually counteracts all noxious spells, those wrought by magicians in particular” [17],[20].

Similarly, many of those same internet sources repeat the idea that Strabo, the Greek geographer, reported frequent sickness among slaves working in asbestos mines. However, it is believed that the often referenced passage in Geography in which Strabo says the “air in the mines is both deadly and hard to endure on account of the grievous odor of the ore, so that the workmen are doomed to a quick death” is actually in reference to arsenic mines, not asbestos [21]. This appears to be one of those misconceptions that got repeated so many times that it became part of the folklore.

The earliest case likely to have been mesothelioma was documented back in 1767, but no strong association with asbestos was established until nearly two centuries later [10],[11]. As for asbestosis and other lung complications, some evidence of a connection to pulmonary fibrosis was beginning to emerge among asbestos mine workers as early as the turn of the 20th century, and epidemiological data had correlated “dusty trades” with early mortality by 1918. However, many clinical diagnoses in the early 20th century were confounded by the simultaneous presence of tuberculosis, and it wasn’t until 1928 that the first non-tuberculosis case of asbestosis was unambiguously diagnosed, named, and documented [1],[8],[9],[13].

Compelling preliminary evidence of an association between asbestos and malignant mesothelioma didn’t emerge until the late 1940s and early 1950s, and it wasn’t until the 1960s that a strong scientific consensus started to take shape [10],[12]. A connection to lung cancer was documented in 1955 [14].

Why the Asbestos Gambit Fails

The contrarians who use the story of asbestos to discredit science they don’t like would have us believe that scientists researched it carelessly, hastily and arrogantly proclaimed a scientific consensus that asbestos was harmless, and were later shown to be wrong only after considerable human cost had accumulated. As the history above shows, that is not what happened. In the past, methods and substances whose common usage predated the scientific era were often grandfathered in, so to speak: they were presumed acceptable unless proven otherwise, particularly in the case of naturally occurring substances that had been utilized for millennia. The usage of asbestos was never the result of a robust scientific consensus built on multiple converging lines of evidence for its safety. Rather, it was the scientific enterprise itself that was responsible for people learning that it was unsafe.

This is exemplary of how opponents of various areas of science distort facts to cast doubt on the veracity of science they don’t like. They do this to sow doubt in the public sphere about the reliability of the scientific consensus on topics such as evolution, the safety of genetically engineered food crops, the age of the earth, the efficacy of vaccines, and the reality of anthropogenic global warming. “After all,” they argue, “look how long it took scientists to figure out the hazards of asbestos. How can we trust scientists now?” Yet there was never anything about asbestos safety resembling the formidable body of scientific evidence on which the scientific consensus on each of those topics was built, so the Asbestos Gambit is a complete non sequitur.

Corporate Malfeasance

Another closely related contention is the idea that asbestos companies knowingly kept quiet about the dangers of asbestos, or even actively worked to sow the seeds of doubt in order to delay action and distort public perception of the strength of the science. Strong cases have been made that some asbestos companies dragged their feet while knowing more than they let on, and the argument that they actively tried to downplay the severity of the problem has been the subject of many courtroom battles. In 1989, the EPA issued its Asbestos Ban and Phase-Out Rule to prohibit the manufacture and importation of asbestos in the US, but the rule was overturned in 1991, thanks in no small part to a lawsuit by an asbestos industry organization: Corrosion Proof Fittings v. EPA, 947 F.2d 1201 (5th Cir. 1991) [32],[33],[34].

We’ve seen this sort of behavior by companies before, such as in the case of tobacco companies delaying public acceptance of an emerging scientific consensus on the dangers of smoking, and it is certainly problematic [25]. Any time special interest groups of any kind delay or obfuscate public understanding of scientific issues, it removes people’s ability to make sound decisions by impeding the flow of accurate information.

However, that has little to nothing to do with the state of the science itself. Ironically, this sort of behavior is precisely what the people rejecting the scientific consensus on topics like GMO food safety, vaccine efficacy, and anthropogenic global warming are doing now. Rather than going through the proper channels by publishing newer and better science to challenge the current weight of the evidence, they instead resort to political rhetoric, bad logic, bad science, and sowing public doubt on the state and/or reliability of scientific knowledge. Yet, these are likely to be the same people who will use the Asbestos Gambit and similar arguments to build a manufactroversy in order to persuade people to ignore scientific consensus.

For example, the debunked Oregon Petition Project was an attempt to obscure the weight of the scientific consensus on human-caused climate change. A document assembled by the Discovery Institute, which boasted of 100+ scientists who reject the theory of evolution, was humorously rebutted by the National Center for Science Education with Project Steve: a list composed exclusively of scientists named Steve who accept evolution, which nevertheless dwarfed the Discovery Institute’s list. Similarly, anti-GMO campaigners have written things such as the I-SIS letter as an attempt to sow uncertainty and doubt about the mainstream scientific consensus position on the safety of genetically engineered food crops.

If anything, all of this highlights the importance of learning to tell the difference between science and the subterfuge of ideologues and special interest groups. The asbestos industry never controlled the science, and it was certainly never able to buy off an international scientific consensus. At worst, some companies may have succeeded in delaying policy action and public acceptance of what the scientific evidence was showing. That (again) goes to show how important it is to look at the science itself.


Far from being a justification for rejecting or ignoring well-supported scientific conclusions, the real lessons from the story of asbestos are that just because something is naturally occurring or has been used since the pre-scientific era does not preclude it from being unsafe, and above all, that it’s critical to examine the weight of scientific evidence while learning to filter out the noise of spin doctors and ideologues.

People may differ in their personal value hierarchies, and adjudication on matters of political legislation and public policy always involves normative elements, but such decisions nevertheless can and should at least be informed by scientific knowledge. Matters of brute fact should always be the one consistent region of common ground between groups with competing values and priorities. And when it comes to generating reliable knowledge of the physical world, no system ever devised by humanity can rival the power of the scientific method.

Credible Hulk


[1] Ross, M., & Nolan, R. P. (2003). History of asbestos discovery and use and asbestos-related disease in context with the occurrence of asbestos within ophiolite complexes. Special Papers, Geological Society of America, 447-470.

[2] Silverstein, M. A., Welch, L. S., & Lemen, R. (2009). Developments in asbestos cancer risk assessment. American Journal of Industrial Medicine, 52(11), 850-858.

[3] LaDou, J., Castleman, B., Frank, A., Gochfeld, M., Greenberg, M., Huff, J., … & Soffritti, M. (2010). The case for a global ban on asbestos. Environmental Health Perspectives, 897-901.

[4] US Public Health Service, & US Department of Health and Human Services. (2001). Toxicological profile for asbestos. Atlanta, GA: Agency for Toxic Substances and Disease Registry.

[5] International Agency for Research on Cancer. (1972). IARC Monographs on the Evaluation of Carcinogenic Risk of Chemicals to Man, Vol. 1.

[6] Norbet, C., Joseph, A., Rossi, S. S., Bhalla, S., & Gutierrez, F. R. (2015). Asbestos-related lung disease: a pictorial review. Current Problems in Diagnostic Radiology, 44(4), 371-382.

[7] U.S. Geological Survey. Mineral Commodity Summaries 2006: Asbestos

[8] Luus, K. (2007). Asbestos: mining exposure, health effects and policy implications. McGill Journal of Medicine, 10(2), 121.

[9] Seiler, H. E. (1928). A case of pneumoconiosis: result of the inhalation of asbestos dust. British Medical Journal, 2(3543), 982.

[10] Smith, D. D. (2005). The history of mesothelioma. In Malignant Mesothelioma (pp. 3-20). Springer New York.

[11] Lieutaud, J. (1767). Historia anatomico-medica, etc. Paris, 2, 86.

[12] Wagner, J. C., Sleggs, C. A., & Marchand, P. (1960). Diffuse pleural mesothelioma and asbestos exposure in the North Western Cape Province. British Journal of Industrial Medicine, 17(4), 260-271.

[13] Hoffman, F. L. (1918). Mortality from Respiratory Diseases in Dusty Trades (inorganic Dusts) (No. 231). US Government Printing Office.

[14] Doll, R. (1955). Carcinoma of the lung in asbestos-silicosis. Industr. Med., 12, 81-86.

[15] Maines, R. (2005). Asbestos and Fire: Technological Tradeoffs and the Body at Risk. Rutgers University Press. Retrieved 3 May 2017 from JSTOR.

[16] Bostock, J., & Riley, H. T. (1855). Pliny the Elder, The Natural History, Vol. 2, Book 19, The Nature and Cultivation of Flax, and an Account of Various Garden Plants, Chapter 4, “Linen Made of Asbestos.”

[17] Bostock, J., & Riley, H. T. (1855). Pliny the Elder, The Natural History, Vol. 2, Book 36, The Natural History of Stones, Chapter 31, Ostracites: Four Remedies. Amianthus; Two Remedies.

[18] Bostock, J., & Riley, H. T. (1855). Pliny the Elder, The Natural History, Vol. 2, Book 37, The Natural History of Precious Stones, Chapter 54, Achates; the several varieties of it. Acopos; the remedies derived from it. Alabastritis; the remedies derived from it. Alectoria. Androdamas. Argyrodamas. Antipathes. Arabica. Aromatitis. Asbestos. Aspisatis. Atizoe. Augetis. Amphidanes or Chrysocolla. Aphrodisiaca. Apsyctos. Aegyptilla.

[19] Bostock, J., & Riley, H. T. (1855). Pliny the Elder, The Natural History, Vol. 2, Book 7, Man, His Birth, His Organization, and the Invention of the Arts, Chapter 51, Various Instances of Diseases.

[20] Bianchi, C., & Bianchi, T. (2014). Asbestos between science and myth. A 6,000-year story. La Medicina del Lavoro, 106(2), 83-90.

[21] Strabo. (1924). The Geography of Strabo, Book 12, Chapter 3, Section 40. H. L. Jones (Ed.). Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd.

[22] Bostock, J., & Riley, H. T. (1855). Pliny the Elder, The Natural History, Vol. 2, Book 33, The Natural History of Metals, Chapter 40, “The Various Kinds of Minium.”

[23] Hunter, D. (1969). The Diseases of Occupations (5th Edition).

[24] Dioscorides. (1968). The Greek Herbal of Dioscorides: Illustrated by a Byzantine, AD 512. R. W. T. Gunther (Ed.). Hafner Publishing Company.

[26] Evolution of the Clean Air Act | Overview of the Clean Air Act and Air Pollution | US EPA. (2017). Retrieved 5 May 2017, from

[27] Summary of the Toxic Substances Control Act | Laws & Regulations | US EPA. (2017). Retrieved 5 May 2017, from

[28] Asbestos Laws and Regulations | Asbestos | US EPA. (2017). Retrieved 5 May 2017, from

[29] Current Asbestos Bans and Restrictions. (2017). Retrieved 5 May 2017, from

[30] Australia – Asbestos Use, Mesothelioma & Asbestos Laws. (2017). Mesothelioma Center – Vital Services for Cancer Patients & Families. Retrieved 5 May 2017, from

[31] United Kingdom – Asbestos, Mesothelioma, Laws & Regulations. (2017). Mesothelioma Center – Vital Services for Cancer Patients & Families. Retrieved 5 May 2017, from

[32] Asbestos Ban and Phase-Out Federal Register Notices | Asbestos | US EPA. (2017). Retrieved 5 May 2017, from

[33] (2017). Retrieved 5 May 2017, from

[34] Kazan, S. (2014). The U.S. Asbestos Ban That Wasn’t | California Mesothelioma Asbestos Lawyers Kazan Law. California Mesothelioma Asbestos Lawyers Kazan Law. Retrieved 5 May 2017, from

[35] (2017). Retrieved 5 May 2017, from

Image c/o Home Solutions


The One True Argument™

Anyone who has spent much time addressing myths, misconceptions, and anti-science arguments has probably had the experience of some contrarian taking issue with his or her rebuttal of a common talking point on the grounds that it’s not the “real” issue people have with the topic at hand. It does occasionally happen that a skeptic spends an inordinate amount of time refuting an argument that literally nobody has put forward, but I’m specifically referring to situations in which the rebuttal addresses claims or arguments that some people have actually made, but that the contrarian implies either haven’t been made or shouldn’t be addressed, because, they claim, it’s not the “real” argument. This is a form of the No True Scotsman fallacy, and it is a common tactic of people who reject well-supported scientific ideas for one reason or another. In some cases this may be due to the individual’s lack of exposure to the argument being addressed rather than an act of subterfuge, but it is problematic regardless of whether or not the interlocutor is sincere.

The dilemma is that there are usually many arguments for (and variations of) a particular position, so it’s not usually possible for someone to respond to every possible permutation of every argument that has ever been made against a particular idea (scientific or otherwise). The aforementioned tactic takes advantage of this by implying that the skeptic is attacking a strawman on the grounds that what they refuted was not the “real” main argument for their position. In comment sections on my page, I’ve referred to this as The One True Argument™ fallacy. It’s a deceptive way for the contrarian to move the goalpost while deflecting blame back onto the other person by accusing them of misrepresentation. The argument being addressed has been successfully refuted, but instead of acknowledging that, the interlocutor introduces a brand new argument (often just as flawed as the one that was just deconstructed), and accuses the person debunking it of either not understanding or not addressing The One True Argument™.

Some brands of science denial have brought this to the level of an integrative art form. If argument set A is refuted, they will cite argument set B as The One True Argument™, but if argument set B is refuted, they will cite either argument set A or argument set C as The One True Argument™. If argument sets A, B, and C are all refuted in a row, they’ll either bring out argument set D, or they will accuse the skeptic of relying on verbosity, attempting to characterize detailed rebuttals as some sort of vice or symptom of a weak argument (even though the skeptic is merely responding to the claimant’s arguments). I really wish I were making this up, but these are all techniques I’ve seen science deniers use in debates on social media or on their own blogs. Of course, the volume of the rebuttal cannot be helped, due to what has come to be known as Brandolini’s Law, AKA the Bullshit Asymmetry Principle (coined by Alberto Brandolini), which states that the amount of energy necessary to refute bullshit is an order of magnitude bigger than that needed to produce it.

The argumentation tactics of sophisticated science deniers and other pseudoscience proponents (or even the less sophisticated ones) could probably fill an entire book, but this is one that I haven’t seen many people address, and it comes up fairly often.

For example, many opponents of genetically engineered food crops claim that they are unsafe to eat and that they are not tested. Often, when someone takes the time to show that they are actually some of the most tested foods in the entire food supply, and that the weight of evidence from decades of research by scientists all across the world has converged on an international scientific consensus that the commercially available GE crops are at least as safe and nutritious as their closest conventional counterparts, the opponents will downplay it as not being the “real” issue. In some cases they will appeal to conspiracy theories or poorly done outlier studies that have been rejected by the scientific community, but in other instances they will invoke The One True Argument™ fallacy. They will claim that nobody is saying that GMOs are unsafe to eat, and that the problem is the overuse of pesticides that GMOs encourage, or that patents and terminator seeds allegedly permit corporations to sue farmers for accidental cross-contamination and to monopolize the food supply by prohibiting seed saving.

Of course, these arguments are similarly flawed. GMOs have actually helped reduce pesticide use, not increase it (particularly insecticides) [1],[2],[3], and have coincided with a trend toward using much less toxic and environmentally persistent herbicides [4]. Plant patents have been common in non-GMO seeds since the Plant Patent Act of 1930, terminator seeds were never brought to market, the popularity of seed saving had already greatly diminished several decades before the first GE crops, and there are still no documented cases of non-GMO farmers being sued by GMO seed companies for accidental cross-contamination.

However, although the follow-up arguments are similarly flawed, the fact is that many organizations absolutely are claiming that genetically engineered food crops are unsafe. I’m not going to give free traffic to promoters of pseudoscience if I can help it, but one need only plug in the search terms “gmo + poison” or “gmo + unsafe” to see a plethora of less-than-reputable websites claiming precisely that. The point is that it’s dishonest to pretend that the person rebutting such claims isn’t addressing the “real” contention, because there is no one single contention, and the notion that the foods are unsafe is a very common one.

Another example occurred just the other day on my page. I posted a graphic depicting some data showing how effective vaccines have been at mitigating certain infectious diseases. A commentator responded as shown here:

I responded thusly:

Putting aside the fact that information on vaccine ingredients is easy to obtain (it is laid out in vaccine package inserts), and the fact that increasing life expectancy and population numbers suggest that, if there is any nefarious plot to depopulate the planet, the perpetrators have been spectacularly unsuccessful so far, the point is that this exemplifies The One True Argument™ tactic.

Another common example occurs when scientists meticulously lay out the arguments and evidence for how we know that global warming and/or climate change are occurring. There are many common contrarian responses to this, some of which employ The One True Argument™ fallacy, such as when the contrarian claims that nobody actually rejects the claim that the change is occurring, but rather that they doubt human actions have played any significant role in it.

Of course, the follow up claim is similarly flawed, since we know that climate changes not by magic but rather when acted upon by physical causes (called forcings), none of which are capable of accounting for the current trend without the inclusion of anthropogenically caused increases in atmospheric concentrations of greenhouse gases such as CO2. This is because most of the other important forcings have either not changed much in the last few decades, or have been moving in the opposite direction of the trend (cooling rather than warming). I’ve explained how solar cycles, continental arrangement, albedo, Milankovitch cycles, volcanism, and meteorite impacts can affect the climate with hundreds of citations from credible scientific journals here, here, here, here, here, here, here, here, here, here, here, and here.

In this instance, although it has become more common than in the past for climate science contrarians to accept the conclusion that the climate has been changing while rejecting human causation, there are still plenty who argue that the warming trend itself is a grand hoax, and that NASA, NOAA, and virtually every other scientific organization on the planet have deliberately manipulated the data to make money. If you doubt this, all you need to do is enter “global warming + hoax + fudged data” into your favorite search engine to see an endless list of webmasters making this claim. In fact, in one study, the position that “it’s not happening” at all was the single most common one expressed in op-ed pieces by climate science contrarians between 2007 and 2010 [10]. Its abundance even increased toward the end of that time period, so it’s flat out untrue that the push-back against the science has centered only on human causation and/or the eventual severity of the problem.

The truth is that there was never anything nefarious going on with the temperature data adjustments. Similar adjustments are performed on data in most scientific fields. They were completely legitimate and scientifically justified. There have even been additional studies in which the assumptions and reasoning behind the ways in which the data were adjusted were scrutinized and compared to data from reference networks, and the same procedures produced readings that were MORE accurate than the raw non-adjusted data: not less [5],[6],[7],[8],[9]. This is nicely explained here, but I digress; the main point is not just that the follow-up arguments tend to be similarly flawed, but that this technique could in principle be used indefinitely to move the goal posts ad infinitum.

It’s easy to see that this also forces a strategic decision on the part of the skeptic or science advocate. Do you nail them down on their use of this tactic? Do you respond to the follow-up argument they’ve presented as the “real” issue? Do you do both? If so, are there any strategic disadvantages to doing both? Would it make the response excessively long? If so, does that matter? How much can the response be compressed without sacrificing accuracy or important details? Disingenuous argumentative tactics like these put the contrarian’s opponent in a position where he or she has to make these kinds of strategic decisions rather than simply focusing on the veracity of specific claims.

As I alluded to earlier, this is not a free license to construct actual strawmen of other people’s positions and ignore their explanations when they attempt to clarify their arguments and conclusions, because people do that too, and that’s no good either. But The One True Argument™ fallacy refers specifically to cases in which a refutation of a common argument is mischaracterized as a strawman as a means of introducing a different argument, while construing it as the skeptic’s fault for addressing the argument they addressed instead of some other one. It’s dishonest, it’s based on bad reasoning, you shouldn’t use it, and you should point it out when others do.


[1] Brookes, G., & Barfoot, P. (2017). Environmental impacts of genetically modified (GM) crop use 1996–2015: impacts on pesticide use and carbon emissions. GM Crops & Food (just accepted).

[2] Klümper, W., & Qaim, M. (2014). A meta-analysis of the impacts of genetically modified crops. PLoS ONE, 9(11), e111629.

[3] National Academies of Sciences, Engineering, and Medicine. (2017). Genetically Engineered Crops: Experiences and Prospects. National Academies Press (pp. 117-119).

[4] Kniss, A. R. (2017). Long-term trends in the intensity and relative toxicity of herbicide use. Nature Communications, 8, 14865.

[5] Jones, P. D., & Moberg, A. (2003). Hemispheric and large-scale surface air temperature variations: An extensive revision and an update to 2001. Journal of Climate, 16(2), 206-223.

[6] Brohan, P., Kennedy, J. J., Harris, I., Tett, S. F., & Jones, P. D. (2006). Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. Journal of Geophysical Research: Atmospheres, 111(D12).

[7] Jones, P. D., Lister, D. H., Osborn, T. J., Harpham, C., Salmon, M., & Morice, C. P. (2012). Hemispheric and large‐scale land‐surface air temperature variations: An extensive revision and an update to 2010. Journal of Geophysical Research: Atmospheres, 117(D5).

[8] Hausfather, Z., Menne, M. J., Williams, C. N., Masters, T., Broberg, R., & Jones, D. (2013). Quantifying the effect of urbanization on US Historical Climatology Network temperature records. Journal of Geophysical Research: Atmospheres, 118(2), 481-494.

[9] Hausfather, Z., Cowtan, K., Menne, M. J., & Williams, C. N. (2016). Evaluating the impact of US Historical Climatology Network homogenization using the US Climate Reference Network. Geophysical Research Letters.

[10] Elsasser, S. W., & Dunlap, R. E. (2013). Leading voices in the denier choir: Conservative columnists’ dismissal of global warming and denigration of climate science. American Behavioral Scientist, 57(6), 754-776.


No, Solar Variations Can’t Account for the Current Global Warming Trend. Here’s Why:

In part I of this series on the sun and Earth’s climate, I covered the characteristics of the sun’s 11 and 22 year cycles, the observed laws which describe the behavior of the sunspot cycle, how proxy data is used to reconstruct a record of solar cycles of the past, Grand Solar Maxima and Minima, the relationship between Total Solar Irradiance (TSI) and the sunspot cycle, and the relevance of these factors to earth’s climate system. In part II, I went over the structure of the sun, and some of the characteristics of each layer, which laid the groundwork for part III, in which I explained the solar dynamo: the physical mechanism underlying solar cycles, which I expanded upon in part IV, in which I talked about some common approaches to solar dynamo modeling, including Mean Field Theory. This installment covers how all of that relates to climate change and the current warming trend.

Solar Cycles and Earth’s Climate

The sun is responsible for nearly all of the energy entering our climate system, so it should come as no surprise that variations in Total Solar Irradiance throughout solar cycles do indeed affect Earth’s climate (Eddy 1977, Bond 2001, Solanki 2002, de Jager 2008). Knowing this, it’s natural to wonder whether solar variations are to blame for the current global warming trend. There’s nothing irrational about wondering “hey, you know that gigantic fusion reactor fireball thing in the sky? What if that thing has something to do with global warming and climate change?” I want to emphasize that this is by no means a crazy or unreasonable question to ask. It’s just that with the current warming we’re not just looking at cyclical oscillations or subtle fluctuations; we’re looking at a clear trend (NOAA 2016, Anderson et al 2013, Hansen et al 2010). And changes in solar activity are simply not sufficient to explain that rate and magnitude of the current trend (Frohlich 1998, Meehl et al 2004, Wild 2007, Lean and Rind 2008, Duffy 2009, Gray et al 2010, Kopp 2011).

It has been estimated that climate forcings attributable to solar variability contributed no more than 30% of the global warming from 1970 to 1999 (Solanki 2003). To add insult to injury, 15 of the 16 hottest years on the instrumental record have occurred since the turn of the millennium, and more recent analyses have found that solar activity and global temperature trends have been moving in opposite directions in recent cycles (Lockwood and Frohlich 2007 and 2008, Lockwood 2009). Moreover, researchers have found that the warming trend becomes even clearer after correcting for El Niños and volcanic and solar forcings (Foster 2011).
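The "correcting for" step in analyses like Foster (2011) is, at its core, a multiple linear regression: temperature is modeled as a trend plus contributions from ENSO, solar, and volcanic indices, and the fitted natural contributions are then subtracted out. The sketch below illustrates the idea on synthetic data only; the index series, coefficients, and the 0.017 °C/yr trend are made-up round numbers for demonstration, not the paper's actual data or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 420  # 35 years of synthetic monthly data (illustration only)
t = np.arange(n) / 12.0

# Stand-ins for ENSO, solar, and volcanic indices (all synthetic)
enso = rng.normal(size=n)
solar = np.sin(2 * np.pi * t / 11.0)  # crude 11-year cycle proxy
volcanic = -np.abs(rng.normal(size=n)) * (rng.random(n) < 0.02)

trend_per_year = 0.017  # assumed underlying warming rate, deg C / yr
temp = (trend_per_year * t + 0.1 * enso + 0.05 * solar
        + 0.3 * volcanic + rng.normal(scale=0.1, size=n))

# Regress temperature on time plus the natural factors...
X = np.column_stack([np.ones(n), t, enso, solar, volcanic])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# ...then subtract the fitted natural components, exposing the trend
adjusted = temp - X[:, 2:] @ coef[2:]
raw_resid = np.std(temp - np.polyval(np.polyfit(t, temp, 1), t))
adj_resid = np.std(adjusted - np.polyval(np.polyfit(t, adjusted, 1), t))
print(f"recovered trend: {coef[1]:.3f} deg C/yr")
print(f"scatter about trend: raw={raw_resid:.3f}, adjusted={adj_resid:.3f}")
```

With the synthetic natural variability removed, the scatter around the linear trend shrinks and the underlying warming rate is recovered more cleanly, which is the qualitative point of such adjustments.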

That’s right! Solar activity has actually declined in the last decade, and the most recent cycle (solar cycle 24) has been well below average in amplitude (Jiang 2015, Pesnell 2016). If changes in solar activity were the principal determinant of recent temperature changes, we should therefore expect to be experiencing cooling, not warming. This, too, rules out solar variability as a principal cause of late 20th and early 21st century global warming and the resultant climate change. Look at how solar forcings stack up against the observed temperature curve from Meehl et al 2004.

Image 3: c/o Meehl 2004

Firstly, if greenhouse gases are primarily responsible, we should expect to see little change in the amount of solar energy entering the earth’s atmosphere, but a decrease in the amount leaving. By contrast, if the sun is primarily responsible, we should expect to see an increase both in the energy entering earth’s atmosphere and in the amount leaving. Since the mid-to-late 1970s, we’ve been able to measure this with satellites.

Lo and behold! It turns out that the rate of energy coming in from the sun has changed very little, while the rate at which energy leaves the earth has decreased significantly (Harries 2001, Griggs and Harries 2007, Philipona 2004, Leroy 2008, Worden 2008, Huang et al 2010). This is the proverbial smoking gun evidence that recent climate change is not due to changes in solar forcing, but rather to the greenhouse effect.

Secondly, if the warming effects were attributable primarily to the sun, then we should be seeing a very different distribution of temperatures than what we are actually observing. Specifically, warming due to solar forcing should be most prominent during the daytime and during the summer months, because these are the times during which the sun is most intensely bombarding the earth.

However, what we observe instead is that night time and winter temperatures are increasing faster than would be the case if the sun were chiefly responsible for the trend (Alexander et al 2006, Caesar et al 2006). This distribution cannot be explained by natural variability, but is consistent with the predictions of the greenhouse effect explanation (Brown 2008). The energy is entering the climate system during the day when the sun is shining, and is getting trapped by greenhouse gases, which slows down the rate at which that energy can escape the earth’s atmosphere. Alexander et al in particular found that over 70% of the land area sampled showed a significant increase in the occurrence of warm nights annually from 1951 – 2003, and a corresponding decrease in the occurrence of cold nights (Alexander et al 2006). So, here we have multiple lines of smoking gun evidence unanimously converging on the conclusion that current climate change cannot be blamed on changes in solar activity.

Could a Grand Minimum Mitigate 21st Century Global Warming?

Okay, so we know that variations in Total Solar Irradiance can’t account for the current warming trend, but what if we just lucked out and entered a new Grand Solar Minimum? How likely is it that it would stop or reverse the trend, and make the last few decades of climate science research and its undesirable predictions seem like much ado about nothing? This possibility has been investigated in several papers as well. Although the predictions vary slightly in the precise amounts by which TSI and temperatures would be reduced, they all arrive at reductions in TSI of no greater than a few watts per square meter and a slowing of the temperature rise by no more than 0.1 – 0.3 °C, and therefore imply that a 21st century Grand Minimum would (at most) slightly and temporarily slow global warming without actually stopping it (Wigley et al 1990, Feulner and Rahmstorf 2010, Jones et al 2012, Meehl et al 2013, Anet et al 2013, Maycock et al 2015). One might reason that any delay in the warming trend would be better than nothing, because it might buy some time for the innovation and implementation of mitigation and/or coping strategies, and I would not be compelled to argue against that, but the current weight of the evidence suggests that it would be of only marginal help at best.
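As a sanity check on the scale of those published estimates, one can convert a hypothetical TSI drop into a rough temperature response using the standard forcing geometry. The numbers below (a 2 W/m² TSI reduction and a climate sensitivity parameter of 0.5 K per W/m²) are illustrative round-number assumptions, not values taken from the papers cited above:

```python
# Back-of-envelope estimate (illustrative round numbers, not paper values)
delta_tsi = 2.0    # assumed Grand-Minimum drop in TSI, W/m^2
albedo = 0.3       # Earth's approximate planetary albedo
sensitivity = 0.5  # assumed climate sensitivity parameter, K per (W/m^2)

# A TSI change is diluted by geometry (factor 1/4: sphere vs. disc)
# and by reflection (factor 1 - albedo) before it becomes a forcing.
delta_forcing = delta_tsi * (1 - albedo) / 4.0
delta_temp = sensitivity * delta_forcing

print(f"forcing change: {delta_forcing:.2f} W/m^2")
print(f"temperature response: roughly {delta_temp:.2f} K")
```

A ~2 W/m² drop in TSI thus translates into only a few tenths of a W/m² of forcing once geometry and albedo are accounted for, which is why even a full Grand Minimum would buy only a fraction of a degree of cooling.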


In summary, solar cycles can affect earth’s weather and climate, both on decadal scales, in correspondence with the 11 year sunspot cycle, and on longer time scales, through the amplitude changes associated with grand solar maxima and minima.

The prevailing scientific theory for the mechanism underlying these cycles is the solar dynamo, which explains the associated magnetic field oscillations in terms of a branch of physics called magnetohydrodynamics. It accounts for the observed sunspot butterfly diagrams, Sporer’s Law, Joy’s Law, and Hale’s Polarity Law, and explains the 11 and 22 year cycle periods. Mean Field Theory is one of the ways in which stellar astrophysicists simplify solar dynamo model calculations, but it has its limitations.

Multiple lines of evidence suggest the current warming trend on earth is not caused by an increase in solar activity. We know from satellite data that there has been no substantial increase in the amount of solar energy (TSI) entering earth’s climate system, but less of it has been making it back out into space. Moreover, winter and night time temperatures have warmed faster than summer and daytime temperatures, which is consistent with the greenhouse effect explanation, but not with the solar forcing explanation.

Additionally, if a 21st Century Grand Solar Minimum were to occur, it would most likely have a noticeable but small slowing effect on Global Warming and the resultant Climate Change.

What we humans should do about this is not a strictly scientific question, because it depends not only on model predictions but also on normative issues, personal values, and cost-benefit analyses of different potential solution strategies (both technological and political). However, what we do know with VERY high confidence is that global warming and climate change are happening, and that the sun is not to blame for it.


Related Articles:


Alexander, L. V., Zhang, X., Peterson, T. C., Caesar, J., Gleason, B., Klein Tank, A. M. G., … & Tagipour, A. (2006). Global observed changes in daily climate extremes of temperature and precipitation. Journal of Geophysical Research: Atmospheres, 111(D5).

Anderson, D. M., Mauk, E. M., Wahl, E. R., Morrill, C., Wagner, A. J., Easterling, D., & Rutishauser, T. (2013). Global warming in an independent record of the past 130 years. Geophysical Research Letters, 40(1), 189-193.

Anet, J. G., Rozanov, E. V., Muthers, S., Peter, T., Brönnimann, S., Arfeuille, F., … & Schmutz, W. K. (2013). Impact of a potential 21st century “grand solar minimum” on surface temperatures and stratospheric ozone. Geophysical Research Letters, 40(16), 4420-4425.

Bond, G., Kromer, B., Beer, J., Muscheler, R., Evans, M. N., Showers, W., … & Bonani, G. (2001). Persistent solar influence on North Atlantic climate during the Holocene. Science, 294(5549), 2130-2136.

Brown, S. J., Caesar, J., & Ferro, C. A. (2008). Global changes in extreme daily temperature since 1950. Journal of Geophysical Research: Atmospheres, 113(D5).

Caesar, J., Alexander, L., & Vose, R. (2006). Large-scale changes in observed daily maximum and minimum temperatures: Creation and analysis of a new gridded data set. Journal of Geophysical Research: Atmospheres, 111(D5).

Cox, P. M., Betts, R. A., Jones, C. D., Spall, S. A., & Totterdell, I. J. (2000). Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model. Nature, 408(6809), 184-187.

De Jager, C. (2008). Solar activity and its influence on climate. Netherlands Journal of Geosciences (Geologie en Mijnbouw), 87, 207-213.

Duffy, P. B., Santer, B. D., & Wigley, T. M. (2009). Solar variability does not explain late-20th-century warming. Physics Today, 62(1), 48.

Eddy, J. A. (1977). Climate and the changing sun. Climatic Change, 1(2), 173-190.

Feulner, G., & Rahmstorf, S. (2010). On the effect of a new grand minimum of solar activity on the future climate on Earth. Geophysical Research Letters, 37(5).

Foster, G., & Rahmstorf, S. (2011). Global temperature evolution 1979 – 2010. Environmental Research Letters, 6(4), 044022.

Frohlich, C., & Lean, J. (1998). The Sun’s total irradiance: Cycles, trends and related climate change uncertainties since 1976. Geophysical Research Letters, 25(23), 4377-4380.

Gray, L. J., Beer, J., Geller, M., Haigh, J. D., Lockwood, M., Matthes, K., … & Luterbacher, J. (2010). Solar influences on climate. Reviews of Geophysics, 48(4).

Griggs, J. A., & Harries, J. E. (2007). Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 using IRIS, IMG, and AIRS. Journal of Climate, 20(15), 3982-4001.

Hansen, J., Ruedy, R., Sato, M., & Lo, K. (2010). Global surface temperature change. Reviews of Geophysics, 48(4).

Harries, J. E., Brindley, H. E., Sagoo, P. J., & Bantges, R. J. (2001). Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature, 410(6826), 355-357.

Huang, Y., Leroy, S., Gero, P. J., Dykema, J., & Anderson, J. (2010). Separation of longwave climate feedbacks from spectral observations. Journal of Geophysical Research: Atmospheres, 115(D7).

Jiang, J., Cameron, R. H., & Schuessler, M. (2015). The cause of the weak solar cycle 24. The Astrophysical Journal Letters, 808(1), L28.

Jones, G. S., Lockwood, M., & Stott, P. A. (2012). What influence will future solar activity changes over the 21st century have on projected global near-surface temperature changes?. Journal of Geophysical Research: Atmospheres, 117(D5).

Karl, T. R., & Trenberth, K. E. (2003). Modern global climate change. Science, 302(5651), 1719-1723.

Kopp, G., & Lean, J. L. (2011). A new, lower value of total solar irradiance: Evidence and climate significance. Geophysical Research Letters, 38(1).

Lean, J. L., & Rind, D. H. (2008). How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters, 35(18).

Leroy, S., Anderson, J., Dykema, J., & Goody, R. (2008). Testing climate models using thermal infrared spectra. Journal of Climate, 21(9), 1863-1875.

Liverman, D. (2007). From uncertain to unequivocal. Environment: Science and Policy for Sustainable Development, 49(8), 28-32.

Lockwood, M., & Fröhlich, C. (2007). Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 463(2086), 2447-2460.

Lockwood, M., & Fröhlich, C. (2008). Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 464(2094), 1367-1385.

Lockwood, M. (2009). Solar change and climate: an update in the light of the current exceptional solar minimum. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, rspa20090519.

Matthews, H. D., Graham, T. L., Keverian, S., Lamontagne, C., Seto, D., & Smith, T. J. (2014). National contributions to observed global warming. Environmental Research Letters, 9(1), 014010.

Maycock, A. C., Ineson, S., Gray, L. J., Scaife, A. A., Anstey, J. A., Lockwood, M., … & Osprey, S. M. (2015). Possible impacts of a future grand solar minimum on climate: Stratospheric and global circulation changes. Journal of Geophysical Research: Atmospheres, 120(18), 9043-9058.

Meehl, G. A., Washington, W. M., Ammann, C. M., Arblaster, J. M., Wigley, T. M. L., & Tebaldi, C. (2004). Combinations of natural and anthropogenic forcings in twentieth-century climate. Journal of Climate, 17(19), 3721-3727.

Meehl, G. A., Arblaster, J. M., & Marsh, D. R. (2013). Could a future “Grand Solar Minimum” like the Maunder Minimum stop global warming?. Geophysical Research Letters, 40(9), 1789-1793.

Min, S. K., Zhang, X., Zwiers, F. W., & Hegerl, G. C. (2011). Human contribution to more-intense precipitation extremes. Nature, 470(7334), 378-381.

NOAA National Centers for Environmental Information, State of the Climate: Global Analysis for Annual 2015, published online January 2016, retrieved on January 8, 2017 from

NOAA National Centers for Environmental information, Climate at a Glance: Global Time Series, published December 2016, retrieved on January 8, 2017 from

Pesnell, W. D. (2016). Predictions of Solar Cycle 24: How are we doing?. Space Weather, 14(1), 10-21.

Philipona, R., Dürr, B., Marty, C., Ohmura, A., & Wild, M. (2004). Radiative forcing measured at Earth’s surface corroborate the increasing greenhouse effect. Geophysical Research Letters, 31(3).

Solanki, S. K. (2002). Solar variability and climate change: is there a link?. Astronomy & Geophysics, 43(5), 5-9.

Solanki, S. K., & Krivova, N. A. (2003). Can solar variability explain global warming since 1970?. Journal of Geophysical Research: Space Physics, 108(A5).

Wigley, T. M., Kelly, P. M., Eddy, J. A., Berger, A., & Renfrew, A. C. (1990). Holocene Climatic Change, 14C Wiggles and Variations in Solar Irradiance [and Discussion]. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 330(1615), 547-560.

Wild, M., Ohmura, A., & Makowski, K. (2007). Impact of global dimming and brightening on global warming. Geophysical Research Letters, 34(4).

Worden, H. M., Bowman, K. W., Worden, J. R., Eldering, A., & Beer, R. (2008). Satellite measurements of the clear-sky greenhouse effect from tropospheric ozone. Nature Geoscience, 1(5), 305-308.

Image Credits:

Image 3:

Meehl, G. A., Washington, W. M., Ammann, C. M., Arblaster, J. M., Wigley, T. M. L., & Tebaldi, C. (2004). Combinations of natural and anthropogenic forcings in twentieth-century climate. Journal of Climate17(19), 3721-3727.

Images 1 and 2:

Myhre, G., D. Shindell, F.-M. Bréon, W. Collins, J. Fuglestvedt, J. Huang, D. Koch, J.-F. Lamarque, D. Lee, B. Mendoza, T. Nakajima, A. Robock, G. Stephens, T. Takemura and H. Zhang, 2013: Anthropogenic and Natural Radiative Forcing. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 659–740, doi:10.1017/ CBO9781107415324.018.

Image 4:

Thoughtscapism and Making Sense of Climate Science Denial

coronal mass ejection c/o NASA


Mean Field Theory and Solar Dynamo Modeling

In a recent post, I talked about the characteristics of the sun’s 11 and 22 year cycles, the observed laws which describe the behavior of the sunspot cycle, how proxy data is used to reconstruct a record of solar cycles of the past, Grand Solar Maxima and Minima, the relationship between Total Solar Irradiance (TSI) and the sunspot cycle, and the relevance of these factors to earth’s climate system. In a follow up post, I went over the structure of the sun, and some of the characteristics of each layer, which laid the groundwork for my last post, in which I explained the solar dynamo: the physical mechanism underlying solar cycles.

Before elaborating on the sun’s role in climate change in the installment following this one, I’ll be going over an approach called “Mean Field Theory” in this installment, which dynamo theorists and other scientists sometimes use to make the modelling of certain systems more manageable. As was the case with part III, this may be a bit more technical than most of my subscribers are accustomed to, but I think the small subset of readers with the tools to digest it will appreciate it. And to be perfectly blunt, writing this was not just about my subscribers. I wanted to do it. It was an excuse for me to dig more deeply into something that has been going on in modern stellar astrophysics that I thought was interesting. The fact that it happened to be tangentially related to my series on climate science was a mere convenience. Anyone wanting to avoid the math and/or to cut to the chase with respect to the effects of solar cycles on climate change might want to skip ahead to part V, or perhaps just read only the text portions of this post. However, for those who don’t mind a little bit of math, I present to you the following:

Mean Field Theory

One approach by which scientists and mathematicians can simplify models of large, complex systems with stochastic effects is called Mean Field Theory (Schrinner 2005). This involves subsuming multiple complicated interactions between different parts of a system into a single averaged effect. In this way, multi-body problems (which are notoriously difficult to solve, even with numerical approximation methods on supercomputers) can be reduced to simpler single body problems. For instance, the velocity field u and magnetic field B could each be broken up into two separate terms: a mean term (u0 and B0 respectively) and a fluctuating term (u’ and B’ respectively), whereby the mean terms are taken as averages over time and/or space, depending on what is appropriate to the system being modeled.

In other words, u = u0 + u’ and B = B0 + B’, where (by definition) the average velocity <u> = u0 and the average magnetic field <B> = B0, because u0 and B0 are the mean field terms, while the averages of the fluctuating terms, <u’> and <B’>, are both zero. The angled brackets simply denote a suitable average of the term they enclose (again taken over time and/or space, as deemed appropriate by the scientist or mathematician). The fluctuating terms average to zero because the mean field terms are defined as the average of the entire field, so the fluctuations about that average must themselves average to zero.
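As a toy numerical illustration of this decomposition (a minimal sketch with a made-up signal, not a solar model), we can split a noisy quantity into its mean and fluctuating parts and confirm that the fluctuating part averages to zero by construction:

```python
import numpy as np

# Toy illustration of the decomposition u = u0 + u': take a noisy scalar
# "velocity" signal fluctuating about 3.0, define the mean term u0 as its
# average, and check that the leftover fluctuating term u' averages to zero.
rng = np.random.default_rng(0)
u = 3.0 + rng.normal(0.0, 1.0, size=100_000)

u0 = u.mean()        # mean field term: <u> = u0
u_prime = u - u0     # fluctuating term: u' = u - u0

print(abs(u_prime.mean()) < 1e-9)   # True: <u'> = 0 (up to roundoff)
```

The same bookkeeping applies component-by-component to the vector fields u and B in the text.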

Using the vector calculus identity ∇ × (∇ × B) = ∇(∇·B) − ∇²B, and the fact that ∇·B = 0 by Gauss’s Law for magnetic fields, the induction equation ∂B/∂t = ∇ × (u × B − η∇ × B) from the previous section can also be expressed as ∂B/∂t = η∇²B + ∇ × (u × B), where ∇² is called the Laplacian operator, here applied to the magnetic field B.
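For readers who want to double-check the vector identity, it can be verified symbolically (here with SymPy, which I am assuming is available) for a concrete smooth vector field:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

# Symbolic spot-check of the identity curl(curl(B)) = grad(div(B)) - lap(B),
# evaluated component-wise for an arbitrary smooth vector field.
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
B = (x * y * z) * N.i + sp.sin(x) * sp.cos(z) * N.j + (y**2 + z * x) * N.k

lhs = curl(curl(B))

def laplacian_component(f):
    # scalar Laplacian: sum of second partial derivatives
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

rhs = gradient(divergence(B)) \
    - (laplacian_component(B.dot(N.i)) * N.i
       + laplacian_component(B.dot(N.j)) * N.j
       + laplacian_component(B.dot(N.k)) * N.k)

residual = lhs - rhs
print(all(sp.simplify(residual.dot(e)) == 0 for e in (N.i, N.j, N.k)))  # True
```

Since the identity holds for any smooth field, the residual vanishes component by component.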

Plugging our decomposed fields u and B into this form of the induction equation:

∂B0/∂t + ∂B’/∂t = η∇²B0 + η∇²B’ +

∇ × (u0 × B0) + ∇ × (u0 × B’) +

∇ × (u’ × B0) + ∇ × (u’ × B’)

Now we take the average of both sides:

∂<B0>/∂t + ∂<B’>/∂t = η∇²<B0> + η∇²<B’> +

∇ × <u0 × B0> + ∇ × <u0 × B’> +

∇ × <u’ × B0> + ∇ × <u’ × B’>

However, we already know that <B0> = B0, <u0> = u0, and <u’> = <B’> = 0. Since the mean terms can be pulled out of the averages, the cross terms ∇ × <u0 × B’> and ∇ × <u’ × B0> vanish, and the equation simplifies to:

∂B0/∂t = η∇²B0 + ∇ × (u0 × B0) + ∇ × <u’ × B’>.

The ∇ × <u’ × B’> term is then typically replaced with a term ∇ × ε, where ε = <u’ × B’> is called the mean electromotive force (Rädler 2007). This yields the Mean Field Induction Equation:

∂B0/∂t = η∇²B0 + ∇ × (u0 × B0 + ε)
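The reason ε matters is that fluctuations which individually average to zero can still have a nonzero average cross product if they are correlated with each other. A contrived numerical sketch (the component-permutation used here to correlate u’ and B’ is purely illustrative, not a physical model):

```python
import numpy as np

# Fluctuations that each average to zero can still yield a nonzero mean
# electromotive force eps = <u' x B'> when they are correlated. Here B' is
# (artificially) a component-permutation of u', which correlates the two
# fields while keeping <B'> = 0.
rng = np.random.default_rng(1)
u_prime = rng.normal(size=(200_000, 3))
B_prime = u_prime[:, [2, 0, 1]]          # B' = (u'_z, u'_x, u'_y)

print(np.abs(u_prime.mean(axis=0)).max() < 0.01)   # True: <u'> ~ 0
print(np.abs(B_prime.mean(axis=0)).max() < 0.01)   # True: <B'> ~ 0

eps = np.cross(u_prime, B_prime).mean(axis=0)      # mean EMF <u' x B'>
print(np.allclose(eps, [1.0, 1.0, 1.0], atol=0.05))  # True: decidedly nonzero
```

This is the sense in which turbulent fluctuations, invisible in the mean fields themselves, still feed back on the mean field evolution through ε.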

Although many details of the theory are still being worked out, models based on the solar dynamo mechanism are consistent with the periodicity of the solar cycle, Hale’s law (the opposing magnetic polarity of sunspots above and below the solar equator and the alternation of polarity in successive 11 year cycles), as well as both Sporer’s and Joy’s laws (the apparent migration of sunspots towards the equator as a cycle progresses, as well as their tilt), which together produce the observed sunspot butterfly diagrams I talked about here.

Some models can even simulate variations in amplitude from one cycle to the next, but the precise manner in which Grand Solar Maxima and Minima emerge is still being worked out. Consequently, our models’ ability to reliably and accurately forecast them is currently still limited. Methods have been developed for estimating the sunspot number and solar activity of a cycle’s maximum from observations of the poloidal field strength at the preceding cycle’s minimum (Schatten 1978, Svalgaard 2005). Besides only providing information on the maximum immediately following the minimum being measured, this approach is also limited by the fact that our poloidal field measurements only go back a few cycles, and that the poloidal fields during solar minima are weak and have radial as well as meridional components, which makes them difficult to measure reliably.

Other researchers have focused on kinematic flux transport solar dynamo models which, in addition to differential rotation, include the effects of meridional flow in the convective envelope, whereby the poloidal magnetic field is regenerated by the decay of the bipolar magnetically active regions subsequent to their emergence at the solar surface (Dikpati 1999, Dikpati 2006, Choudhuri 2007). Active regions are the high magnetic flux regions at which sunspots emerge.

Image  by Andres Munoz-Jaramillo (check out his fantastic presentations on

This meridional flow sets the period of the cycle, the strength of the poloidal field, and the amplitude of the solar maximum of the subsequent cycle. However, estimates of meridional flow velocities prior to 1996 are highly uncertain (Hathaway 1996). All of these models have been criticized by peers of their proponents. A concise summary of the blow by blow can be viewed here.

As for Grand Solar Maxima and Minima, no comprehensive theory has yet emerged on how they arise and decay, let alone a scientific consensus. However, certain constraints have been identified. There is evidence that the dynamo cycle does continue in some modified form during Maunder-type minima periods. The idea is that the dynamo enters Grand Maxima and Minima by way of chaotic and/or stochastic processes. In the case of Grand Maxima, the dynamo also exits that state via stochastic processes. In the case of Grand Minima, on the other hand, the dynamo then gets “trapped” in this state, but eventually gets out of it via deterministic internal processes (Usoskin 2007). It is also thought that the polarity of the sun’s toroidal magnetic field may lose its equatorial anti-symmetry during such minima, and instead become symmetric (Beer, Tobias and Weiss 1998).

Truly fantastic long term predictive power for solar cycles probably won’t be achieved until poloidal magnetic field generation is better understood, which will likely include improvements in flux transport models, and a more complete characterization of the statistical properties of bipolar magnetic regions (BMRs). For a comprehensive overview of the current state of Solar Dynamo Models and their predictive strengths and limitations, see Charbonneau 2010.

In the next installment, I’ll explain how all of this relates to climate change on earth, and address the elephant in the room: “are solar variations responsible for the current global warming trend?”

Related Articles:


Beer, J., Tobias, S., & Weiss, N. (1998). An active Sun throughout the Maunder minimum. Solar Physics, 181(1), 237-249.

Charbonneau, P. (2010). Dynamo models of the solar cycle. Living Reviews in Solar Physics, 7(1), 1-91.

Choudhuri, A. R., Chatterjee, P., & Jiang, J. (2007). Predicting solar cycle 24 with a solar dynamo model. Physical Review Letters, 98(13), 131103.

Coriolis, G. G. (1835). Théorie mathématique des effets du jeu de billard. Carilian-Goeury.

Dikpati, M., & Charbonneau, P. (1999). A Babcock-Leighton flux transport dynamo with solar-like differential rotation. The Astrophysical Journal, 518(1), 508.

Dikpati, M., De Toma, G., & Gilman, P. A. (2006). Predicting the strength of solar cycle 24 using a flux-transport dynamo-based tool. Geophysical Research Letters, 33(5).

Hathaway, D. H. (1996). Doppler measurements of the sun’s meridional flow. The Astrophysical Journal, 460, 1027.

Rädler, K. H., & Rheinhardt, M. (2007). Mean-field electrodynamics: critical analysis of various analytical approaches to the mean electromotive force. Geophysical & Astrophysical Fluid Dynamics, 101(2), 117-154.

Schatten, K. H., Scherrer, P. H., Svalgaard, L., & Wilcox, J. M. (1978). Using dynamo theory to predict the sunspot number during solar cycle 21. Geophysical Research Letters, 5(5), 411-414.

Schrinner, M., Rädler, K. H., Schmitt, D., Rheinhardt, M., & Christensen, U. (2005). Mean-field view on rotating magnetoconvection and a geodynamo model. Astronomische Nachrichten, 326(3-4), 245-249.

Svalgaard, L., Cliver, E. W., & Kamide, Y. (2005). Sunspot cycle 24: Smallest cycle in 100 years?. Geophysical Research Letters, 32, L01104.

Usoskin, I. G., Solanki, S. K., & Kovaltsov, G. A. (2007). Grand minima and maxima of solar activity: new observational constraints. Astronomy & Astrophysics, 471(1), 301-309.


The Solar Dynamo: The Physical Basis of the Solar Cycle and the Sun’s Magnetic Field

In my previous article, I laid out some basics about the sun’s structure and physical characteristics in order to set up the groundwork upon which I could then explain the physical mechanism which underlies the solar cycles I talked about in the article prior to that one. I understand that this is a bit more technical than most readers may be accustomed to, which is why I’ve included a simplified “tl;dr” version before delving deeper.

Solar Dynamo Theory

The leading scientific explanation for the mechanism by which these solar cycles emerge is the solar dynamo theory. It arises from an area of physics called magnetohydrodynamics, which studies the magnetic properties of electrically conducting fluids, and is covered in most university textbooks on plasma physics. So how does it work?

The tl;dr version is as follows: the convective zone of the sun is a plasma (ionized gas), and it moves around via turbulent convection currents. The flow of these charged particles generates electric currents. Those electric currents generate magnetic fields (via Ampere’s law). In turn, when those magnetic fields change, they induce electric currents (Faraday’s law). In this manner, the dynamo is self-reinforcing, and permits the continual generation of magnetic dipole fields over time. An analogy that helps some people is to think of the magnetic field loops as rubber bands, which the convection currents stretch and twist. Just as stretching and twisting rubber bands increases their tension, the stretching and twisting of magnetic field lines can make the field stronger at certain points and/or change the field’s direction. If this twisting and stretching is done in a particular way (i.e. in the manner which occurs in our sun), it produces a cycle of changing magnetic fields which corresponds to the 11 and 22 year solar cycles.

However, this is an extremely over-simplified version of what the theory entails. There are constraints on what sort of velocity fields will produce the observed effects. Namely, the flow must be turbulent, like a pot of boiling water (rather than like a stream or faucet). The flow must be three dimensional, meaning that it must have components in the radial direction, along the meridians (north and south), and along the latitudinal lines (also referred to as the azimuthal direction). And the flow must be roughly helical (Seehafer 1996).
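A standard textbook example of a fully three-dimensional, maximally helical flow is the ABC (Arnold-Beltrami-Childress) flow. For it, the curl of the velocity field equals the velocity field itself, so the flow is everywhere aligned with its own vorticity (kinetic helicity u·(∇×u) = |u|² > 0). This is an idealized illustration, not the sun’s actual velocity field; the check below uses SymPy, which I am assuming is available:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

# The ABC flow: a classic example of a fully three-dimensional, maximally
# helical velocity field. We verify symbolically that curl(u) = u, i.e. the
# flow is everywhere parallel to its own vorticity.
N = CoordSys3D('N')
A, B, C = sp.symbols('A B C', positive=True)
u = (A * sp.sin(N.z) + C * sp.cos(N.y)) * N.i \
  + (B * sp.sin(N.x) + A * sp.cos(N.z)) * N.j \
  + (C * sp.sin(N.y) + B * sp.cos(N.x)) * N.k

residual = curl(u) - u
print(all(sp.simplify(residual.dot(e)) == 0 for e in (N.i, N.j, N.k)))  # True
```

Flows of this Beltrami type are favorite test beds in dynamo theory precisely because they satisfy the helicity requirement in the strongest possible way.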

Another critical requirement is differential rotation. In other words, the angular velocities at which the different parts of the sun rotate vary both with radius and with latitude (Schou 1998). The rotation rate at the solar equator, for example, is faster than the rotation at the poles. This is possible for the sun because it is composed primarily of plasma rather than a solid like the Earth. In the convective zone, differential rotation is primarily a function of latitude, and varies only weakly with depth, while the tachocline exhibits a strong radial shear (Howe 2009). The reason for these requirements is that the motions of the plasma must be capable of converting a meridional (poloidal) magnetic field into an azimuthal (toroidal) magnetic field, and vice versa.

The Omega Effect

Basically, if we begin with a meridional magnetic field, the differential rotation of the sun twists and coils this field around the sun, which results in an azimuthal magnetic field. This phenomenon of converting a meridional magnetic field into an azimuthal one is called the Omega effect. Its relevance to the observed solar cycle is that the twisting of the magnetic flux strands in the azimuthal (toroidal) direction at shallow depths and low latitudes creates concentrated magnetic “ropes,” which are brought to the surface via magnetic buoyancy to produce the bipolar magnetic fields associated with sunspots and other related activity of the solar cycle (Parker 1955, Babcock 1961).

The Alpha Effect

Contrastingly, the Alpha effect converts an azimuthal (toroidal) magnetic field into a meridional (poloidal) field. The precise mechanism by which this occurs is still not fully understood as of this writing, but it has to do with the interaction between the velocity field of the plasma, the rotation of the sun, the toroidal magnetic field, and the Coriolis Effect acting on rising flux tubes.

From a qualitative standpoint, suppose we have a sphere of hot plasma rotating at an angular velocity ω. Suppose also that the fluid convects, and that certain localized pockets are hotter than the surrounding fluid, and thus move radially outward at velocity u. Additionally, suppose the presence of a toroidal magnetic field which gets partially dragged by the motion of the fluid. Since the sphere is rotating, each of those pockets of fluid is acted on by the Coriolis force ω × u, and therefore twists as it moves upwards and expands. Consequently, the magnetic field lines twist as well. Since the signs of both the Coriolis force and the toroidal magnetic field are reversed in the northern versus the southern hemisphere, this results in small scale magnetic field loops of the same polarity in both hemispheres (Coriolis 1835). The idea then is that these small scale loops of magnetic flux gradually coalesce as a result of magnetic diffusivity, which therefore generates a large scale poloidal magnetic field (Parker 1955).
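A minimal numerical sketch of that hemispheric sign flip (the geometry here is idealized): the projection of the rotation vector onto the local outward radial direction, which sets the sense in which a rising, expanding pocket is twisted, changes sign across the equator.

```python
import numpy as np

# The twisting sense of a rising, expanding pocket is governed by the
# projection of the rotation vector omega onto the local outward (radial)
# direction, and that projection flips sign across the equator.
omega = np.array([0.0, 0.0, 1.0])  # rotation axis along z (arbitrary units)

def radial_unit(lat_deg):
    """Outward radial unit vector at the given latitude (longitude 0)."""
    lat = np.radians(lat_deg)
    return np.array([np.cos(lat), 0.0, np.sin(lat)])

spin_north = omega @ radial_unit(+45.0)   # > 0: one twisting sense
spin_south = omega @ radial_unit(-45.0)   # < 0: the opposite sense
print(spin_north > 0 > spin_south)        # True
```

Combined with the reversed toroidal field, this opposite twist is what yields loops of the same poloidal polarity in both hemispheres.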

The Omega Effect and the Alpha Effect. Image by E. F. Dajka

In this manner, a poloidal magnetic field generates a toroidal magnetic field, which in turn regenerates the poloidal magnetic field, and so on and so forth. The poloidal fields predominate during solar minima, while the toroidal fields generate the sunspots and other activity associated with solar maxima. The cycle repeats with an approximately 11 year period, and the associated magnetic fields alternate polarity from one cycle to the next, thus producing the observed 22 year solar cycle. I should reiterate that there are other hypotheses than what I’ve described here, and unlike the Omega effect, which is better understood, no clear scientific consensus has yet emerged on the precise mechanism of the alpha effect. In recent years, a lot of focus has been placed on variants of what’s known as the Babcock-Leighton (BL) mechanism, which is described here.

The Fundamental Equations of Magnetohydrodynamics and the Solar Dynamo

Warning!! Vector partial differential equations ahead!

The mathematically faint of heart may want to scroll past this section!

The physics involved in the dynamo are described by the equations of magnetohydrodynamics (MHD), which derive primarily from classical electromagnetism, but also from fluid mechanics to some extent, because hot plasmas share certain dynamical behaviors with liquids. The relevant equations include the following:

E = J/σ − u × B,

where E represents the electric field, J is the electric current density (charge per unit time per unit area), u is the velocity of a fluid element of the plasma, B represents the magnetic field, and σ is the conductivity of the plasma (J can also be expressed as J = nqvd, where q = the charge of a given particle, n = the number of said particles present, and vd is the average “drift” velocity of the particles).

This actually derives from Ohm’s law. You may be more familiar with Ohm’s law in its common form V = IR, where V is voltage (or electric potential difference), I is the electric current, and R is the resistance. But this is just a veiled version of a more fundamental form of the Ohm’s law equation. The current I can also be expressed as I = J·A (the dot product of the current density with the area element), and the resistance R can be expressed as a property called the resistivity ρ of the conductor (in this case the plasma), times the length element L of a charged particle’s path, divided by the path’s cross sectional area element A (R = ρL/A).

Thus V = IR becomes V = J·A(ρL/|A|) = JρL. But the resistivity ρ of a given medium is also the reciprocal of a quantity called its conductivity (1/ρ = σ), and the dot product J·A is just the product of their magnitudes, thus giving us V = JL/σ, or alternatively, J = σV/L. But V/L is just the magnitude of the electric field across the conductor, so this can be expressed as J = σE. However, that’s in a reference frame co-moving with the fluid element. From a fixed reference frame (assuming non-relativistic velocities), and with an external magnetic field B, an additional term must be added to account for the Lorentz force on the moving charges, and the equation becomes J = σ(E + u × B), where u is the velocity of the fluid element and × denotes the vector cross product (not ordinary multiplication).

Dividing both sides by σ, and subtracting u × B from both sides, yields the aforementioned E = J/σ − u × B equation.
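A quick numerical consistency check of that rearrangement (the vectors and conductivity are arbitrary made-up values, chosen only to exercise the algebra):

```python
import numpy as np

# Check that solving the generalized Ohm's law for E and substituting back
# recovers the original current density J.
sigma = 2.0                        # plasma conductivity (arbitrary units)
u = np.array([1.0, 0.0, 0.0])      # fluid element velocity
B = np.array([0.0, 0.0, 3.0])      # magnetic field
J = np.array([0.5, -0.5, 0.0])     # current density

E = J / sigma - np.cross(u, B)     # E = J/sigma - u x B

# Substituting E back into J = sigma * (E + u x B) recovers J exactly:
print(np.allclose(sigma * (E + np.cross(u, B)), J))  # True
```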

Another important equation in the magnetohydrodynamics of the solar dynamo is the pre-Maxwellian form of Ampere’s Law:

∇ × B = μ0J,

where μ0 is the magnetic permeability constant, and the ∇ × B operator represents what’s called the curl of the magnetic field B.

Finally, there’s Faraday’s Law, one form of which is ∇ × E = −∂B/∂t, which is basically saying that the curl of the electric field is equal to the negative of the rate of change of the magnetic field with time.

But we already have another expression for E: namely, E = J/σ − u × B.

By dividing both sides of our ∇ × B = μ0J equation by μ0 to get J = (∇ × B)/μ0, and then substituting that for J into our Ohm’s law equation E = J/σ − u × B, we get E = (∇ × B)/(μ0σ) − u × B.

We can then substitute this into our Faraday’s Law equation ∇ × E = −∂B/∂t, in which case we get

∇ × [(∇ × B)/(μ0σ) − u × B] = −∂B/∂t.

Rearranging this, we get the following:

∂B/∂t = ∇ × (u × B − η∇ × B),

where η = 1/(μ0σ) is the magnetic diffusivity term.

This is the MHD induction equation. The first term on the right side represents induction via the flow of electrically charged constituents across the magnetic field, while the second term expresses Ohmic dissipation of the current systems supporting that magnetic field. The relative importance of these two terms is measured by what’s called the magnetic Reynolds number: Rm = u0L/η, where u0 and L are characteristic values for the flow velocity and length scale of the system respectively. For solar dynamo action, where L is on the order of the solar radius, Rm is invariably much greater than 1. Ergo, the Ohmic dissipation is highly inefficient on this scale, and maintaining a solar magnetic field against diffusion is no problem.
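An order-of-magnitude sketch makes the point concrete (the flow speed and diffusivity values below are assumptions chosen only to convey scale, not measured solar parameters):

```python
# Order-of-magnitude estimate of the magnetic Reynolds number Rm = u0*L/eta
# for solar dynamo action. The specific values are assumptions for scale.
u0 = 100.0       # m/s, assumed characteristic convective flow speed
L = 7.0e8        # m, length scale on the order of the solar radius
eta = 1.0        # m^2/s, assumed magnetic diffusivity (order of magnitude)

Rm = u0 * L / eta
print(f"Rm ~ {Rm:.0e}")   # Rm ~ 7e+10
```

Even allowing the inputs to vary by a few orders of magnitude in either direction, Rm remains enormously greater than 1, which is why diffusion cannot kill the large scale solar field.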

And now for something unrelated…

In the next installment, I’ll briefly go over an approach called Mean Field Theory, which astrophysicists and other scientists sometimes use to simplify their mathematical models of large complex systems.

Related Articles:


Babcock, H. W. (1961). The Topology of the Sun’s Magnetic Field and the 22-Year Cycle. The Astrophysical Journal, 133, 572.

Coriolis, G. G. (1835). Théorie mathématique des effets du jeu de billard. Carilian-Goeury.

Howe, R. (2009). Solar interior rotation and its variation. Living Reviews in Solar Physics, 6(1), 1-75.

Parker, E. N. (1955). Hydromagnetic dynamo models. The Astrophysical Journal, 122, 293.

Parker, E. N. (1955). The Formation of Sunspots from the Solar Toroidal Field. The Astrophysical Journal, 121, 491.

Schou, J., Antia, H. M., Basu, S., Bogart, R. S., Bush, R. I., Chitre, S. M., … & Gough, D. O. (1998). Helioseismic studies of differential rotation in the solar envelope by the solar oscillations investigation using the Michelson Doppler Imager. The Astrophysical Journal, 505(1), 390.

Seehafer, N. (1996). Nature of the α effect in magnetohydrodynamics. Physical Review E, 53(1), 1283.