Incommensurability, The Correspondence Principle, and the “Scientists Were Wrong Before” Gambit

Introduction

One of the intrinsic features of the scientific process is that it leads to modifications of previously accepted knowledge over time. Those modifications come in many forms. They may involve simply tacking on new discoveries to an existing body of accepted knowledge without really contradicting prevailing theoretical frameworks. They may necessitate subtle refinements or adjustments to existing theories to account for newer data. They may involve reformulating the way in which certain things are categorized within a particular field so that the groupings make more sense logically and/or are more practical to use. In rare cases, scientific theories are replaced entirely, and new data can even lead to an overhaul of the entire conceptual framework in terms of which work within a particular discipline is performed. In his famous book, The Structure of Scientific Revolutions, the physicist, historian, and philosopher of science Thomas Kuhn referred to such an event as a “paradigm shift” [1],[2]. This tendency is a result of efforts to accommodate new information and cultivate as accurate a representation of the world as possible.

The “scientists have been wrong before” argument

However, sometimes opponents of one or more areas of mainstream science attempt to recast this self-correcting characteristic of science as a weakness rather than a strength. Anti-GMO activists, anti-vaxxers, young earth creationists, climate science contrarians, AIDS deniers, and many other subscribers to unscientific viewpoints have used this as a talking point. The argument is essentially that the fact that scientists revise and sometimes even eliminate old ideas indicates that scientific knowledge is too unreliable to take seriously. They reframe the act of refinement over time as a form of waffling. On this basis, they conclude that whatever widely accepted scientific conclusions they don’t like should be rejected.

Why the “Scientists Have Been Wrong Before” Gambit Exists

The main function of the “scientists have been wrong before” gambit is to serve as a post-hoc rationalization for embracing ideas that are neither empirically supportable nor rationally defensible, and/or for rejecting ones that are. Pseudoscience proponents focus on perceived errors in science in order to downplay the successful track record of the scientific method. In doing so, they fail to account for the why and the how of scientific transitions. This is also ironic and hypocritical, because pseudoscience has no track record worth speaking of at all. Scientific theories are updated when other scientists better meet their burden of proof, and when doing so serves the goal of better understanding the universe. In contrast, the aforementioned gambit is a self-serving attempt to sidestep the contrarian’s burden of proof in order to resist change.

The argument is disingenuous for a number of reasons, not least of which is that it ignores the ways in which scientific knowledge typically changes over time. Previous observations place constraints on the specific ways in which scientific explanations can change in response to newer evidence. Old facts don’t just magically go away. In order to serve their purpose, reformulations of scientific theories have to account for both the old facts and the new. Otherwise, the change would not be an actual improvement on the older explanation, which presumably accounted for at least the older data, even if not the newer.

Facts, Laws, and Theories

Before further unpacking this point, I should clarify my use of terminology: in this context, I’m essentially using the term fact to denote repeatedly observed data points. These are independent of the explanations proposed for their existence. Alternatively, one might say that facts simply report what is observed. Scientific laws are essentially persistent data trends which specify a mathematically predictable relationship between two or more quantities. Scientific theories, on the other hand, are well-supported explanations for why some aspect of the natural world is the way it is and/or how exactly it works. They are consistent with the currently available evidence and make testable predictions that are corroborated by a substantial body of repeatable evidence. In short, facts and laws describe; theories explain.

For example, evolution is both a fact and a scientific theory. This is because the fact that populations evolve and the modern scientific theory of evolution (which describes how it occurs) are separate but related concepts. Evolution is formally defined as a statistically significant change in allele frequency in a population over time (an allele is just genetics jargon for a variant of a particular gene). That is descent with modification. It happens all the time. We witness it constantly. It’s not hypothetical. It’s not speculation. It’s an empirical fact.
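
To make that definition concrete, here is a minimal sketch (my own illustration, not from any cited source) of allele frequency change under pure genetic drift, using a simple Wright-Fisher model; the population size, starting frequency, and generation count are arbitrary choices for illustration:

```python
# Minimal Wright-Fisher sketch: one allele's frequency changing over
# generations purely by random sampling (genetic drift).
# All parameters are illustrative, not empirical.
import random

def wright_fisher(pop_size=500, freq=0.5, generations=100):
    """Return the frequency trajectory of one allele over time."""
    history = [freq]
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is drawn at
        # random from the current allele pool (binomial sampling).
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        history.append(freq)
    return history

trajectory = wright_fisher()
print(f"starting frequency: {trajectory[0]:.3f}, "
      f"frequency after 100 generations: {trajectory[-1]:.3f}")
```

Even with no selection at all, the frequency wanders from generation to generation; add selection, mutation, and gene flow and you have the mechanisms the theory of evolution describes.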

The theory of evolution, on the other hand, is an elaborate explanatory framework which outlines how evolution occurs. This includes the mechanisms of natural selection, genetic drift, gene flow, mutation (and much more), and it makes many testable predictions about a wide range of biological phenomena. In science, a theory provides more information than facts or laws, because it connects them in ways that permit the generation of new knowledge. I’ll say it again: facts and laws describe; theories explain.

The Correspondence Principle

It’s true that scientific ideas can be wrong or incomplete and that scientific theories can change with new evidence. However, the argument that this justifies rejecting well-supported scientific theories just because one doesn’t like their conclusions ignores the constraints that prior experimental results place on the ways in which scientific knowledge can realistically change in the future. People advancing the “scientists have been wrong before” gambit are typically vague and imprecise in their usage of the term “wrong.” It is often implied that wrong is being used in the sense of “totally factually wrong,” rather than merely incomplete, which is inconsistent both with scientific epistemology and with the history of science. It’s at odds with scientific epistemology because knowledge in science is generally conceived of in a fallibilistic and/or probabilistic manner rather than in a binary one [12]. It’s at odds with the history of science because it is not generally the case that the data used to support a theoretical claim turns out to be 180 degrees mistaken; rather, the theory is replaced by a more complete one which, in many cases, simply looks different. Sure, theories can be expanded, and the meaning and implications of experimental data can be conceptually reframed, but new theories can’t directly contradict the aspects of the old one whose predictions corresponded with experimental data. Unless it can be shown that all prior data consistent with the predictions of the older theory was either fraudulent or due to systematically faulty measurements, this is simply not a viable option.
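
One standard way to formalize that fallibilistic, probabilistic picture of knowledge (offered here purely as an illustration; the point doesn’t depend on any one formalism) is Bayesian updating:

```latex
% Confidence in a hypothesis H is updated by evidence E via Bayes' theorem:
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
% Confidence rises or falls by degrees as evidence accumulates;
% hypotheses are graded on a continuum rather than marked simply
% "right" or "wrong."
```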

Another way to put it is that old facts don’t go away so much as their explanations can change in light of newly discovered ones.

This is reflected in what is called the correspondence principle [8].

A Paraphrasing of Bohr’s conception of the Correspondence Principle

Although originally associated with Niels Bohr and the reconciliation of quantum theory with classical mechanics, the correspondence principle illustrates a concept which applies in all areas of science. Essentially, it says that any modifications made to classical mechanics in order to account for the behavior of matter in the microscopic and submicroscopic realms must agree with the repeatedly verified calculations of classical physics when extended to macroscopic scales [9]. The overarching concept of older (yet well-supported) scientific theories becoming limiting cases of newer and broader ones is inextricable from the advancement of scientific knowledge more generally.
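
The textbook illustration of Bohr’s version (a standard result from the Bohr model of hydrogen, sketched here for concreteness) is that quantum predictions merge into classical ones at large quantum numbers:

```latex
% Frequency of the photon emitted in a transition from level n+1 to n:
\nu_{n+1 \to n} = R c \left( \frac{1}{n^2} - \frac{1}{(n+1)^2} \right)
                \approx \frac{2 R c}{n^3} \quad \text{for large } n,
% which approaches the classical orbital frequency of the electron in
% the n-th Bohr orbit, \nu_{\mathrm{classical}} = 2 R c / n^3.
% (R is the Rydberg constant and c is the speed of light.)
```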

This is why there exist certain facts that will probably never be totally refuted, even if the theories which explain and account for them are subsequently refined and/or placed within the broader context of newer and more comprehensive explanatory frameworks. This is necessarily the case because any candidate for a new scientific theory which proves inferior to the old framework at accounting for the empirical data would be a step backward (not forward) in terms of the degree to which our leading scientific theories map onto the real world phenomena they purport to represent.

As Isaac Asimov put it:

“John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together” [16].

The Story of Gravity

Another one of my favorite examples of this is gravity. Our understanding of gravity has undergone multiple changes over the centuries, but none of those updates ever overturned the empirical observation that massive bodies reliably undergo an apparent acceleration towards other massive bodies in a mathematically predictable relationship. Aristotle was wrong about the mass of an object determining the rate at which it fell, and he explained falling in teleological terms, whereby certain objects were thought to have more “earth-like” properties, such that it was in their nature to belong on the ground [10]. But he didn’t dispute the basic observation that objects fall. Isaac Newton, who developed the inverse square law relationship for gravity, did not develop a theory for why matter behaved this way. He merely described it [11]. Rather than being satisfied with spooky action at a distance, the prolific French physicist, astronomer, and mathematician Pierre-Simon, Marquis de Laplace conceptualized gravity in terms of classical field theory, whereby each point in space corresponded to a different value of a gravitational field, such that the field itself was thought of as the thing acting locally on a massive object [5].
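
In modern notation (a sketch for clarity, not the historical notation either man used), the two formulations look like this, and they are mathematically equivalent:

```latex
% Newton: action at a distance between masses M and m separated by r
F = \frac{G M m}{r^2}
% Field picture: mass density \rho sources a potential \Phi, and the
% resulting local field \vec{g} is what acts on the body of mass m:
\nabla^2 \Phi = 4 \pi G \rho, \qquad
\vec{g} = -\nabla \Phi, \qquad
\vec{F} = m \vec{g}
% For a point mass M, \Phi = -GM/r, which reproduces Newton's law exactly.
% (Laplace worked with \nabla^2 \Phi = 0 outside matter; Poisson later
% added the source term.)
```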

The modern theory of gravity (Einstein’s General Relativity) explains it by positing a four-dimensional space-time manifold capable of degrees of curvature in the presence of massive bodies. In this theory, space-time tells matter how to move, and matter tells space-time how to curve [6]. Like the theory of evolution, general relativity has made many testable and falsifiable predictions that have been borne out. Moreover, we know that GR cannot be the end of the story either, because the rest of the fundamental forces of physics are better described by quantum field theory (QFT), a formulation to which certain features of GR have notoriously not been amenable [7].

However, not one of these refinements contradicted the basic observations of massive bodies undergoing apparent accelerations in the presence of other massive bodies. Mathematically, it can be shown that Laplace’s formulation was consistent with Newton’s; the difference was in how it was conceptualized. Similarly, in situations involving relatively small masses and velocities, solving the Einstein Field Equations yields predictions that agree with Newton’s and Laplace’s out to several decimal places of precision. And although we don’t yet know for sure what form a successful reconciliation of GR and QFT will ultimately take, we know that it can’t directly contradict the successful predictions that GR and QFT have already made. This exemplifies the point that there exist constraints on the particular ways in which scientific theories can change.
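
To make that correspondence explicit, here is the standard weak-field, slow-motion limit of General Relativity (a textbook sketch, not a derivation original to this piece):

```latex
% For weak fields (|\Phi| << c^2) and slow motion (v << c), the metric
% component g_{00} \approx -(1 + 2\Phi/c^2), and the geodesic equation
% reduces to Newton's equation of motion:
\frac{d^2 x^i}{dt^2} \approx -\frac{\partial \Phi}{\partial x^i}
% while the Einstein field equations reduce to Poisson's equation:
\nabla^2 \Phi = 4 \pi G \rho
% i.e., Newtonian (and Laplacian) gravity re-emerges as a limiting case.
```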

Parsimony and Planetary Motion

I should note that, concurrent with the progression of our scientific knowledge of gravity, there were changes in our understanding of planetary motion, and these are worth recounting because they demonstrate that the expansion of predictive power is not the only criterion governing theoretical transitions in science. More specifically, the Copernican model of the solar system didn’t actually produce calculations of superior predictive accuracy to the best geocentric models of its time. Tycho Brahe’s formulation of Ptolemaic astronomy was more accurate. Although Brahe ultimately rejected heliocentrism, Copernicus’s arguments intrigued him because the model seemed less mathematically superfluous than the system of epicycles required to make geocentrism work, yet it yielded results that were more or less in the same ballpark [13]. In other words, what stood out about Copernicus’s model was that, even though it wasn’t quite accurate, it accounted for a lot with a little. It was more parsimonious.

Many of the arguments against the Copernican model had more to do with Aristotelian physics than with the discrepancies in the resulting calculations, some of which were themselves a consequence of Copernicus’s assumption that orbits had to be circular, which was due in part to the philosophical notion that circles were the perfect shape. These problems were of course later resolved by the work of Johannes Kepler and Galileo Galilei; the former used Brahe’s own data to deduce that planets moved in elliptical orbits and swept out equal areas in equal times, whereas the latter formulated the law of inertia and overturned much of the Aristotelian physics upon which many arguments against the Copernican view were based [14]. In combination, Kepler and Galileo laid down much of the groundwork from which Isaac Newton would revolutionize science just a generation later.
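
For reference, here are Kepler’s three laws in their modern form (standard statements added for clarity; only the first two are mentioned above):

```latex
% 1. Each planet moves on an ellipse with the Sun at one focus.
% 2. The Sun-planet line sweeps out equal areas in equal times:
\frac{dA}{dt} = \text{constant}
% 3. The square of the orbital period scales as the cube of the
%    semi-major axis:
T^2 \propto a^3
% Newton later showed all three follow from an inverse square force law.
```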

The moral of the story, however, is that there are times when parsimony directs the trajectory of further scientific inquiry. It’s not always directed by expanding predictive power. A certain amount of theorizing in science involves what can essentially be understood as a form of data compression. Ultimately, the consistency of theory with empirical reality is the end game, but if a concept can explain more facts more simply and/or with fewer assumptions, then it may be preferred over its leading competitor. It’s certainly preferable to lists of disparate facts lacking any common underlying principles, because science isn’t just about describing empirical phenomena, but about discovering and understanding the rules by which they arise.
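
Kepler’s third law is a nice concrete case of this “data compression” idea. The short sketch below (my own illustration; the rounded orbital figures are standard values) shows a single relation, T² = a³ in units of AU and years, reproducing an entire table of planetary periods:

```python
# "Theory as data compression": one relation, T^2 = a^3 (in AU and years),
# summarizes a whole table of orbital observations. Values are rounded.
planets = {
    "Mercury": (0.387, 0.241),   # (semi-major axis in AU, period in years)
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
    "Saturn":  (9.537, 29.45),
}

for name, (a, t_observed) in planets.items():
    t_predicted = a ** 1.5  # Kepler's third law: T = a^(3/2)
    print(f"{name:8s} observed {t_observed:7.3f} yr, "
          f"predicted {t_predicted:7.3f} yr")
```

One constant (here equal to 1 in these units) stands in for an open-ended list of separate measurements; that is what a good theoretical principle buys you.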

This touches on the principle of Occam’s Razor which, insofar as it applies to science, can be roughly paraphrased as the idea that one ought not to multiply theoretical entities beyond that which is needed in order to explain the data [15]. Putting it another way, the more ad hoc assumptions one’s hypothesis requires in order to work, the more likely it is that at least one of them is mistaken.
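
That last point can be given a rough quantitative form (a toy calculation; the 90% figure is chosen purely for illustration). If a hypothesis rests on n independent ad hoc assumptions, each true with probability p, then:

```latex
P(\text{all assumptions hold}) = p^{\,n}, \qquad
P(\text{at least one fails}) = 1 - p^{\,n}
% Example: with p = 0.9, a single assumption fails about 10% of the time,
% but with n = 5 such assumptions, 1 - 0.9^5 \approx 0.41, i.e. roughly
% a 41% chance that at least one of them is mistaken.
```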

Or as Newton put it,

“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes” [11].


Occam’s Razor is not a rule in science so much as it is a heuristic that sometimes proves useful. Ultimately, our ideas must agree with nature’s results first and foremost. Deference to the empirical world is always paramount, and the universe is under no obligation to meet our arbitrary standards of simplicity or aesthetic preferences, but some prospective theories are better than others at compressing our understanding into more cogent sets of concepts.

Incommensurability

In addition to introducing the idea of paradigm shifts in scientific advancement, Kuhn’s The Structure of Scientific Revolutions (TSoSR) also introduced the concept of incommensurability to describe the relationship between newer and older scientific paradigms. Initially, he introduced this as an umbrella term for any and all conceptual, observational, and/or methodological discrepancies between paradigms, as well as semantic differences in the use of specialized terminology. Kuhn’s own conception of incommensurability evolved considerably in the years following the publication of TSoSR, eventually restricting its applicability to problems with the translation of certain terminology common to both paradigms due to semantic differences arising from the transition to a new conceptual framework [3].

However, the basic idea was essentially that the methods, concepts, and modes of communication involved in disparate scientific paradigms are different enough that anyone from one paradigm attempting to communicate with someone from another would necessarily be speaking at cross-purposes, because they lack a common measure. Even the observations themselves are thought to be too theory-laden for concepts and problems to be adequately translated across the theoretical boundaries of the pre and post phases of a scientific revolution. Kuhn himself even used an analogy from Gestalt psychology known as a Gestalt shift [4]. Here’s an example:

[Image] Hill, W. E. “My Wife and My Mother-in-Law.” Puck 16, 11, Nov. 1915

Do you see a young woman looking away, or an old woman looking down and to your left? Can you switch back and forth between the two perspectives? The meaning of any reference to the “nose” of the figure depends on whether one is speaking within the young-woman or old-woman paradigm. The placement and thickness of the lines do not change during gestalt shifts. What changes is the way in which their meaning is understood.

Analogously, the precise meaning of scientific statements depends on the theoretical framework in terms of which they are being made. The empirical facts that the theories seek to explain have not gone away (though newly obtained data may very well be forcing the change). What changes significantly is the way in which the meaning of the data is conceptualized, and the way in which new questions are framed.

Incommensurability as an attack on the scientific method

Some opportunists might seek to co-opt this notion of incommensurability to attack the epistemological integrity of the scientific process itself by exaggerating the degree to which new paradigms invalidate previous scientific knowledge and downplaying their regions of predictive overlap. However, such attacks are necessarily weakened by having to account for the constraints the correspondence principle places on which aspects of a scientific theory can change and/or be invalidated by a paradigm shift. To conflate a conceptual change in science with the invalidation of all facets of an older theory is to implicitly presuppose an anti-realist relationship between theory and the empirical phenomena to which it refers.

This is circular reasoning.

The unstated assumption is that no meaningful correspondence relationship exists between scientific concepts and the aspects of the empirical world they purport to represent, and that changes in how terms are used and how problems are conceptualized therefore preclude the preservation of the facts and predictions an earlier model got right. As we saw in the earlier examples of the correspondence principle in action, this is demonstrably false. Many facts and predictions of older theories and paradigms are necessarily carried over to and/or modified to be incorporated into newer ones.

Concluding Summary

Scientific knowledge changes over time, but it does so in the net direction of increasing accuracy. This is one of the strengths of the scientific method: not one of its weaknesses. Most attempts to reframe this as a weakness (invariably via the use of specious mental acrobatics) ignore the constraints necessarily placed on the ways in which scientific theories can change or be wrong.

Many important revolutions in science involve conceptual changes which do not contradict all of the facts and predictions of the older theory, but rather reframe them, restrict them to limiting cases, or expand them to more general ones.

The preservation of certain facts and predictions which are carried over from older theories to newer ones (because the older ones also got them right) can be understood in terms of the correspondence principle.

The validity of the concept of incommensurability between temporally adjacent scientific paradigms is restricted to terminological, conceptual, and sometimes methodological differences between the pre- and post-revolution phases, but it does not in any way contradict the correspondence principle.

The fact that scientific ideas can be wrong in principle does not mean that the particular ones the contrarian using this gambit dislikes will be among the discarded, nor that the ways in which they could conceivably be wrong would vindicate the contrarian’s desired conclusion.

Consequently, citing the observation that “scientists have been wrong before” is never a rationally defensible basis for rejecting scientific ideas which are currently well-supported by the weight of the evidence; only bringing new evidence of comparable quality can do that. If the contrarian is not currently in the process of gathering and publishing the evidence that would supposedly revolutionize some area of science, then they are placing their bet on an underdog based on faith in a future outcome over which they have no influence, and which they have no rational basis for expecting. This is no more reasonable than believing one is going to win the lottery based on the observation that other people have won the lottery before, and then not even bothering to buy a ticket.

You don’t know what aspects of our current knowledge will turn out to be incorrect, nor which will be preserved. That’s why the maximally rational position is always to calibrate one’s position to the weight of currently available scientific evidence, and then simply leave room for change in the event that newer evidence arises which justifies doing so.

References

[1] Kuhn, T. S., & Hawkins, D. (1963). The structure of scientific revolutions. American Journal of Physics, 31(7), 554-555.

[2] Bird, A. (2004). Thomas Kuhn. Stanford Encyclopedia of Philosophy. Retrieved 4 January 2018, from https://plato.stanford.edu/entries/thomas-kuhn/

[3] Sankey, H. (1993). Kuhn’s changing concept of incommensurability. The British Journal for the Philosophy of Science, 44(4), 759-774.

[4] What Impact Did Gestalt Psychology Have? (2018). Verywell. Retrieved 4 January 2018, from https://www.verywell.com/what-is-gestalt-psychology-2795808

[5] Laplace, P. S. A Treatise in Celestial Mechanics, Vol. IV, Book X, Chapter VII (1805), translated by N. Bowditch (Chelsea, New York, 1966).

[6] Einstein’s Theory of General Relativity. (2017). Space.com. Retrieved 4 January 2018, from https://www.space.com/17661-theory-general-relativity.html

[7] A list of inconveniences between quantum mechanics and (general) relativity? (2018). Physics Stack Exchange. Retrieved 4 January 2018, from https://physics.stackexchange.com/questions/387/a-list-of-inconveniences-between-quantum-mechanics-and-general-relativity

[8] Bokulich, A. (2010). Bohr’s Correspondence Principle. Stanford Encyclopedia of Philosophy. Retrieved 4 January 2018, from https://stanford.library.sydney.edu.au/archives/spr2013/entries/bohr-correspondence/

[9] Bokulich, P., & Bokulich, A. (2005). Niels Bohr’s generalization of classical mechanics. Foundations of Physics, 35(3), 347-371.

[10] Pedersen, O. (1993). Early physics and astronomy: A historical introduction. CUP Archive.

[11] Newton, I. (1999). The Principia: Mathematical Principles of Natural Philosophy. University of California Press.

[12] Fallibilism. (2018). The Basics of Philosophy. Philosophybasics.com. Retrieved 5 January 2018, from http://www.philosophybasics.com/branch_fallibilism.html

[13] Blair, A. (1990). Tycho Brahe’s critique of Copernicus and the Copernican system. Journal of the History of Ideas, 51(3), 355-377.

[14] Copernicus, Brahe & Kepler. (2018). Faculty.history.wisc.edu. Retrieved 5 January 2018, from https://faculty.history.wisc.edu/sommerville/351/351-182.htm

[15] What is Occam’s Razor? (2018). Math.ucr.edu. Retrieved 5 January 2018, from http://math.ucr.edu/home/baez/physics/General/occam.html

[16] Asimov, I. (1989). The relativity of wrong. The Skeptical Inquirer, 14(1), 35-44.


Science has been wrong before, therefore I can make up whatever bullshit I want.

Some of the historical instances which typically get characterized as “science having been wrong” can be better understood as incomplete theories/models being conceptually re-framed in order to account for both the facts explained by the prior theory/model and whatever more recently acquired facts made the modification necessary. In such cases, the “meaning” of the known facts gets viewed through a new and better lens. When it’s done successfully, the new conceptual framework also makes additional predictions that, if accurate, can expand human knowledge beyond merely summarizing the phenomena already known at the time of its inception.

One of my favorite historical examples of this is the transition from Newton’s Universal Gravitation to Einstein’s General Theory of Relativity. The fact that apples can be readily observed moving from tree branches to the ground wasn’t overturned by General Relativity, because Newton wasn’t wrong about that. It was a fact long before Newton, and it remains a fact today. What changed was the way in which such occurrences were conceptualized and explained.

Newton didn’t have a comprehensive theory, but he had an equation, m·d²r/dt² = GMm/r², which seemed to imply non-local action at a distance. General Relativity, on the other hand, which involved considerably busier mathematics (tensors and differential geometry), conceived of the falling apple as moving naturally along a geodesic path on a four-dimensional spacetime manifold which had been curved by the presence of a large mass (Earth, in this case). It also made other predictions that people may never have thought of otherwise and which expanded our knowledge (gravitational lensing, for example). The point being that it didn’t so much refute the basic facts as explain them in a new way, while also explaining other facts that the Newtonian model didn’t account for.
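
For concreteness, here are the two descriptions side by side (standard textbook forms, sketched rather than quoted from the post):

```latex
% Newtonian picture: a force acting at a distance on the apple of mass m
m \frac{d^2 \vec{r}}{dt^2} = -\frac{G M m}{r^2}\, \hat{r}
% General-relativistic picture: force-free motion along a geodesic of
% curved spacetime; the Christoffel symbols \Gamma^{\mu}_{\alpha\beta}
% encode the curvature sourced by the Earth's mass:
\frac{d^2 x^{\mu}}{d\tau^2}
  + \Gamma^{\mu}_{\alpha\beta} \frac{dx^{\alpha}}{d\tau}
    \frac{dx^{\beta}}{d\tau} = 0
```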

Similarly, we know that General Relativity cannot be the end of the story, because it produces absurd results at the outer limits of its predictive power (such as the prediction of actual singularities), and it doesn’t play well with another extremely well-tested theory, quantum mechanics (and quantum field theory). We do know that any candidate for replacing it will have to (at minimum) account for the facts that the current theories predict and account for, or else it would be a downgrade rather than an improvement.

This type of iterative, evolutionary process represents the scientific process at its best, and recognizing that can go a long way in distinguishing between varying degrees of relative truth (as described in Asimov’s famous “The Relativity of Wrong” essay), and in assigning fair and reasonable degrees of confidence to various aspects of our current scientific understanding of a given subject.

Asimov famously summarized it thusly:

“John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

Although improving public awareness of this may not be sufficient in and of itself to ameliorate the problem of public science denial, since such attitudes are often fueled by ulterior motivations rather than by lack of knowledge alone, it is nevertheless important for understanding the way in which cumulative knowledge progresses in science, because understanding that makes it clear why nihilistic dismissal of science on the grounds that it can be wrong is an untenable and irrational position.

Yet we see those sorts of rationalizations constantly from people who reject evolution, climate science, vaccine science, science-based medicine, and the science of genetically engineered foods.

So to summarize, the takeaway message here is twofold:

1) The phenomenon of science correcting mistakes is one of its strengths: not one of its weaknesses.

and

2) Just because newer theories or models incorporate and explain additional facts and information doesn’t mean that the facts explained by the old theories and models aren’t true, or that the new ones don’t have to account for them. Oftentimes the improved theory or model is inclusive of the old one, but merely explains the facts in a different way and/or accounts for additional information that the old one didn’t cover. – Credible Hulk


Blowing smoke: Annihilating fallacious comparisons of biotech scientists to tobacco company lobbyists.

Bringing up pictures of doctors smoking cigarettes is a common tactic used by anti-GMO activists and other critics of “mainstream” science to cast doubt and mistrust on matters of scientific consensus by implying that a worldwide scientific consensus can realistically be bought off by corporations, and by insinuating that that is, in fact, what is actually happening.

The claim comes up frequently enough that I think it deserves to be directly addressed, so let’s put this tired canard to bed for all time.

One of the fatal problems with this argument (of which there are many) is that the scientific consensus never was in favor of cigarette safety to begin with. In fact, epidemiological data suggesting a connection between smoking and lung cancer were known as far back as the 1930s, and other detrimental health effects were documented as far back as 250 years ago.

[Image: tobacco versus GMOs]

People using this comparison between tobacco companies and the consensus of legitimate science are attempting to equate the content of paid advertising campaigns with the views of the broader scientific community, which is naive at best, and dishonest at worst.

Looking back, the thinly veiled obfuscation attempts by the tobacco companies seem conspicuous now because the dangers of smoking had already been established (and were continuing to be further established) by independent research, while the tobacco-sponsored researchers were simultaneously attempting to formulate alternative explanations for the increased cancer risk and other health problems correlated with smoking. The tobacco companies never controlled the science of the matter and never really had scientific consensus on their side. What they had was a good PR department and enough cash to bankroll a few scientists and doctors into speaking positively about smoking tobacco despite the weight of the evidence against them. These phenomena are elucidated here, here, and here. The same companies later exhibited similar patterns of behavior for many years with respect to the controversy surrounding the health effects of second-hand smoke.

On the other hand, the safety of genetically engineered foods does have a strong scientific consensus behind it, and there aren’t really any credible studies from any source showing any damage to animals or people attributable to any of the currently used transgenic crops. There is simply no evidence of an attempted cover-up, and there are no papers postulating alternative explanations for negative results, simply because no such results have been found that would require explanation. Moreover, there is no systematic contradiction between independent studies versus industry-funded ones either, which we should expect to see if “Big Biotech” was really manipulating the data and buying off all of the world’s biotech scientists.

Instead, the only evidence we find of deliberate spin and misleading data is in papers by Seralini, Seneff, and Carman, which have been found to be lacking in solid, credible scientific rigor, and in frequently repeated claims by anti-GMO front organizations and their supporters. Although most of the media attention tends to focus on “Big Biotech,” organic food is a $63 billion industry, much larger than any of the big biotech companies, and many of its players have been engaged in vehement, ongoing smear campaigns opposing transgenic crops. Their allies consistently and publicly criticize biotech companies for fighting unjustified labeling mandates and accuse them of buying off scientists and scientific journals (despite having precisely zero evidence to corroborate their accusations).

Ironically, Just-Label-It and Mamavation, two of the most well-known anti-GMO groups, have been openly advertising to pay bloggers to argue for their cause, thus demonstrating that their hypocritical use of the shill gambit against pro-science people has been nothing more than a case of psychological projection (as Kavin Senapathy discusses here). The infamous Mike Adams, founder of the anti-science website Natural News, even went so far as to assemble a “kill list” of known scientists and science advocates whom he characterized as “Monsanto Collaborators.” To the best of my knowledge, not even the tobacco companies have stooped so low as to call for the execution of people known for debunking their claims.

So, the reality is that there is no analogy between GMOs and tobacco.

There is evidence of detrimental health effects from smoking being known to science 200 years before the tobacco company fiasco that tragically duped so many laypeople.

The safety of cigarettes was never the prevailing scientific consensus, and people looking for justifications to ignore or deny science now will just have to come up with a better excuse than this.

QED

BOOM!

Credible Hulk

UPDATE: 

Shortly after the original publication of this piece, Marc Brazeau, founder of Food and Farm Discussion Lab, wrote a response. In it, Marc takes issue with a line towards the end of this piece in which I said:

“So, the reality is that there is no analogy between GMOs and tobacco.”

He agreed with everything else in the original version of this piece except for that concluding line, and I think it’s worth looking at his argument, because upon examining it, I think that he’s right. Marc argues as follows:

“I would actually argue that there is a strong analogy between GMOs and Tobacco Science. What obscures the analogy is that the roles are reversed in ways that obscure the parallels. With “tobacco science” we had big business twisting, cherry-picking and manipulating the science in an attempt to confuse the public and provide cover for policy makers they have in their pocket. What we see today with GMOs is similar twisting, cherry-picking and manipulating of the science relating to biotech crops in order to confuse the public and drive policy-making.

Except that instead of Big Ag, the source of misinformation and misconceptions is environmental and public interest watchdog groups. For those of us used to turning to these groups to make sense of scientific research and let us know the policy implications, this can be disorienting to say the least. If you are an environmentalist or any sort of liberal/lefty it can really throw you off your bearings to realize that they guys you thought were wearing the white hats are the ones blowing smoke, muddying the waters, and sowing confusion.”

For examples, see here, here, here, and here.

Marc makes valid points, and I concede his (much stronger) conclusion: Anti-GMO groups are the ones guilty of utilizing the tactics of tobacco companies.
