Genetically Engineering Foods Involves Greater Precision and Lower Risk of Unintentional Changes Than Traditional Breeding Methods


There exists an international scientific consensus that existing genetically engineered foods are at least as safe as their closest corresponding non-GMO counterparts [1]. This consensus is drawn from decades of research and thousands of studies. Despite this, a broad gap remains between the science and public perception of the topic. According to Pew Research reports, this gap is wider than on any other topic for which a strong scientific consensus exists [2].

This is significant because several other scientific topics have remained highly controversial in the court of public opinion for decades after achieving mainstream acceptance among experts. Young Earth Creationists have been trying out various strategies for roughly a century to undermine the teaching of evolution in US public schools [3]. Rejection of anthropogenic climate change is widespread in the US, extending even to the POTUS himself, despite the weight of the evidence and the resulting scientific consensus to the contrary [4],[5],[6],[7]. Although its prevalence has waxed and waned over time, vaccine opposition has been a near-constant presence ever since the discovery of the cowpox vaccine [8],[9]. So for there to exist an even bigger gap between science and public perception on GE foods than on any of these other topics is no trivial matter.

One of the most common reasons given for trepidation about GE foods is the idea that it’s a more radical way of altering our foods than traditional methods of artificial selection, and that we therefore can’t know whether GE foods are harmful until they’ve been around for several more human generations. This view relies on an argument from ignorance, and presents a double standard with no biologically plausible justification. One fact that proponents of this view rarely acknowledge is that the claim also generates testable hypotheses, and the results of those tests actually suggest the exact opposite. Allow me to explain.

The Scientific Consensus

In previous posts, I’ve pointed out that what we mean when we speak of the international scientific consensus on GE food safety consists of two major claims:

  • All currently approved GE crops have been tested on a case-by-case basis and the weight of the evidence suggests they are at least as safe as their closest non-GE counterparts.
  • Nothing about the process makes unpredicted dangers any more intrinsically likely with modern molecular GE techniques than with other methods of altering an organism’s genome.

I’ve already outlined the evidence for the first point in previous posts [1]. However, the second point directly bears upon the aforementioned double standards and arguments from ignorance comprising many anti-GMO fears and talking points. The purpose of this article is to explain how we know this to be the case.

The Genetic Code is a Universal Language

Before delving into what I believe to be the most intelligently formulated versions of this set of concerns over GMO food crops, I want to clear up some even more pervasive perceptions that are based almost exclusively on malformed understandings of basic genetics and of the evolutionary relationships between species. Most applications of transgenesis involve the insertion of no more than one to three new genes. And although it is generally well accepted that humans have been altering our food for thousands of years, those who are distrustful of biotechnology often argue that that doesn’t count, on the grounds that cross-breeding doesn’t entail the insertion of a gene from a totally different species, phylum, or even kingdom of life. The argument usually takes the form of “selective cross-breeding is not remotely the same as putting a fish gene into a tomato!”


As of this writing, there are no GM plants on the market whose transgenes were derived from animals, but that is tangential to the even bigger flaw in this line of thinking. The argument assumes that the source of a single gene matters more than the number and/or identity of the genes altered by a process.

However, this is an incorrect way of looking at it because, at least for life on Earth, all species are related by common descent, the genetic code is a nearly universal language, and most genes are not species-proprietary [10],[11],[12],[13],[14],[15],[16]. There are only a few minor exceptions to the genetic code throughout Earth’s biosphere [17]. I can cover DNA composition and the genetic code in more detail in another post if there’s sufficient demand. But for now, suffice it to say that there is a well-known and well-understood relationship between certain combinations of nucleotide bases in protein-coding genes and the amino acids to which they correspond when their mRNA transcripts are translated into proteins [15]. Moreover, all species share certain genes with many other species, and although there can be differences in the ways a gene is expressed in two different species (or even in two different cell types within the same organism), those differences occur in ways that can be measured and understood, and they don’t involve outright violations of the genetic code itself [18]. That’s just not how it works.
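Since the codon-to-amino-acid mapping is central to the point above, here is a minimal sketch of that translation step. The mini codon table below is a small, hypothetical subset of the standard 64-entry table (the codon assignments shown are the standard ones), included only to translate one toy sequence; it is not tied to any particular gene discussed in this post.

```python
# A small subset of the (nearly) universal codon table. The full
# standard table has 64 entries; these few suffice for the toy
# sequence below.
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Read an mRNA in non-overlapping triplets until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCAAAUGA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

Because nearly every organism reads the same table, a protein-coding gene moved from one species to another still specifies the same amino acid sequence.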

Often the most noticeable differences between two species have more to do with differential expression and regulation of genes they share in common than with dramatic differences in which genes they possess [18]. And although the passing of genes across species boundaries (horizontal gene transfer) is primarily associated with prokaryotes (bacteria and archaea) in nature, it is not entirely unheard of in eukaryotes [19],[20].

As an interesting point of fact, the genome of cultivated sweet potatoes has been shown to contain Agrobacterium T-DNAs, demonstrating them to be a naturally transgenic food crop [21]. This is the very same mechanism used in the deliberate engineering of several transgenic crops today: the Agrobacterium method [22]. So even the claim that transgenesis never occurs in nature is false. Whether or not it occurs naturally obviously has no bearing on its safety, but the point is that many of these anti-GMO arguments are flawed on multiple levels. The term “fractal wrongness” comes to mind.

Unintended Changes are not Impossible

It is important to point out that unintended off-target changes to the genome can and sometimes do occur with all known genetic modification techniques, including both transgenesis as well as more traditional methods not considered “GMO” by usual colloquial standards. This is neither unexpected nor necessarily problematic, but it is also true that a small portion of these off-target changes may turn out to confer undesirable traits in the resulting organism [23]. However, not only are such instances less likely to make it into the food supply with GE than with non-GE for regulatory reasons, but strong evidence suggests that they also occur less frequently with transgenesis than with most techniques considered non-GE.

How do we know this?

This can be tested in a number of different ways. These methods include compositional, genomic, transcriptomic, proteomic, and metabolomic comparisons between a GE line and a closely-related non-GE counterpart. I will explain each of these in turn, but understand that they are not mutually exclusive.

Compositional Equivalence

One of the most common ways to test this is to perform compositional comparisons between a GE crop and a nearly isogenic non-GE counterpart line. In many instances, one or more additional non-GE commercial reference lines will be included. This usually involves replicated field trials placed throughout a growing region from which tissue samples are taken, which are then analyzed for an array of several dozen compounds.

Subsequently, statistical methods are employed to determine whether any significant compositional differences exist between the GE and non-GE lines. If there are, the analytes in question are then compared to the range of levels considered normal for the particular crop. If levels fall outside the normal range of natural variation, scientists then have to determine the biological relevance of those differences within the context of how the crops are typically produced and consumed. Guidelines for these types of compositional field trials have been prescribed by the EFSA [24]. Notice that this does not require the transgenic lines to be precisely identical to their non-GE counterparts. Some variation is expected. The issue is whether or not the differences actually matter [58].
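That decision flow can be sketched as a toy calculation. Every number here is invented for illustration (the analyte values, the natural range, and the 5% significance cutoff with its critical t value for 8 degrees of freedom); real assessments involve dozens of analytes, replicated field sites, and the formal statistical models in the EFSA guidance.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample Student's t statistic, assuming equal variances."""
    df = len(a) + len(b) - 2
    sp2 = ((len(a) - 1) * variance(a) + (len(b) - 1) * variance(b)) / df
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / len(a) + 1 / len(b)))

ge_line  = [4.1, 3.9, 4.3, 4.0, 4.2]  # hypothetical analyte levels, GE line
isogenic = [3.8, 4.0, 3.7, 3.9, 4.1]  # near-isogenic non-GE counterpart
natural_range = (3.0, 5.5)            # hypothetical range across reference lines

T_CRIT = 2.306  # two-tailed critical value for alpha = 0.05, df = 8
t = pooled_t(ge_line, isogenic)
if abs(t) < T_CRIT:
    verdict = "no significant difference"
elif natural_range[0] <= mean(ge_line) <= natural_range[1]:
    verdict = "significant, but within natural variation"
else:
    verdict = "outside natural range: assess biological relevance"
print(verdict)  # no significant difference
```

The point of the second branch is the one made above: a statistically significant difference is not automatically meaningful if it sits comfortably inside the crop’s normal range of variation.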


Genomic Comparisons

Another way of assessing the prevalence of off-target changes in transgenesis is to compare various genetic markers between related transgenic and non-transgenic lines. Generally speaking, whole genome sequencing is rarely used for this purpose. Part of the reason is that, until recently, whole genome sequencing was cost prohibitive. Another reason, however, is that genome sequencing alone doesn’t tell us much about how the rates of expression of different genes might be affected by the modification process. Of the thousands (or even tens of thousands) of genes possessed by a given multicellular organism, only a small fraction are expressed in any particular cell at any given time [18].

In fact, the phenomenon of differential gene expression between different cell and tissue types is central to the entire existence and development of complex multi-cellular life. Different cells in different tissues can have the exact same nuclear DNA, yet be vastly different in their appearance and function, simply because they express different subsets of those genes in different cells and/or express them at different rates [18].

This is why genome sequencing alone doesn’t tell us the whole story with respect to examining the changes made (whether intended or not) to a particular cultivar, regardless of the breeding method used to accomplish it. For that, it is often more informative to analyze and compare the total set of RNA molecules transcribed in a given set of cells (the transcriptome), and/or the proteins into which some of those RNA molecules are translated (the proteome). More on that in a moment.


Transcriptomic Comparisons

Recall that when DNA in eukaryotic cells is transcribed, the initial product is an RNA strand called a primary transcript. In the case of protein-coding genes, those primary transcripts are called pre-mRNAs (pre-messenger RNAs), and they typically undergo processing to produce mature mRNA transcripts, which are later translated into proteins that perform particular functions within the cell [25]. Not all DNA is transcribed to RNA, not all RNA is translated into protein, and not all protein-coding genes are expressed in all cell types [18].

As if that weren’t already enough reason to conclude that the genome itself doesn’t convey the entire expressive picture, some RNA must undergo splicing and tagging in the cell before becoming a mature mRNA transcript ready to be translated into a protein. Some transcripts even have alternative splicing patterns. These splice jobs are performed by protein-RNA macromolecular complexes known as small nuclear ribonucleoproteins, or simply snRNPs (pronounced “snurps”). Their purpose is to splice out sections of the primary transcript called introns, which are not expressed in the final product, thus leaving only regions called exons, which are expressed [25].
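As a toy model of that splicing step (the sequence and exon coordinates are invented for illustration; in the cell this job is done by the spliceosome’s snRNPs, not by string slicing):

```python
# Toy splicing: build the mature mRNA by keeping only the exons.
# Lowercase marks the intron; coordinates are half-open (start:end).
def splice(pre_mrna, exons):
    """Concatenate the exon regions of a primary transcript, in order."""
    return "".join(pre_mrna[start:end] for start, end in exons)

pre_mrna = "AUGGC" + "guaagcaaauuuuucag" + "CUUAA"  # intron in lowercase
exons = [(0, 5), (22, 27)]

print(splice(pre_mrna, exons))  # AUGGCCUUAA

# Alternative splicing: a different exon choice yields a different mRNA.
print(splice(pre_mrna, [(0, 5)]))  # AUGGC
```

The second call illustrates why one gene can yield more than one mature transcript: the exon list, not just the DNA sequence, determines the product.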

Much as the aggregate of an organism’s genes is called its genome, the set of all RNA transcripts within a given cell or tissue of a given organism at a given time is called its transcriptome. Analyzing the transcriptomes of GE organisms and comparing them to those of closely-related organisms altered via other methods gives us an idea of how those modifications are really affecting gene expression.

Other “Omics” Comparisons

If genomics involves analyzing all the genes in an organism’s genome, and transcriptomics involves analyzing all the RNA transcripts of a particular cell at a particular time, then you can probably infer that proteomics refers to the analysis of all the proteins expressed in a particular cell at a particular time [26]. Rather than studying proteins one at a time, proteomics surveys all of the proteins present at once. Once the individual proteins have been identified, researchers can study how the protein complement changes over time, how the proteins interact, and/or how they vary between different cells. Because proteins facilitate so many cellular processes, the proteome serves as a functional representation of the genome.

Similarly, metabolomics is the study of all the metabolites present in a cell, tissue, or organism [27]. These metabolites include the small-molecule end products and intermediates of various cellular processes, as well as hormones and some signaling molecules. In human metabolomics, molecules may be sub-classified as endogenous or exogenous depending on whether or not they’re produced by the organism, or as xenometabolites in the case of foreign substances such as drugs [28],[29]. For plants, the terms primary and secondary metabolites are often used [30]. The former are metabolites involved in normal growth, development, and/or reproduction; the latter are not, but in many cases serve an ecological function of interest.

Like the transcriptome and proteome, the metabolome is dynamic; it changes over time. Consequently, transcriptomics, proteomics, and metabolomics are all important tools comprising the field known as functional genomics. A genome sequence is ostensibly just a parts list, whereas these tools permit insight into what the genome is actually doing. Insofar as these techniques are used to compare the rates of off-target changes among different methods of modification, one of the conclusions of Metzdorff et al. 2006 was that correct interpretation of the ‘omics’ data requires information about the range of natural variability of the crop plants under examination [31].

GE and Unintended Off-Target Changes: What Does the Evidence Actually Suggest?

Transcriptomic, proteomic, and metabolomic analyses, along with compositional and genomic comparisons, permit us to compare the rates of unintended off-target changes in GE organisms vs those arrived at via other breeding methods. Indeed, such measures have been used to test exactly that question. The evidence suggests that GE involves LESS change. That doesn’t mean a harmful off-target change is literally physically impossible, but it does make one less likely than with non-GMO methods. Here are some examples:

This study analyzed the transcriptome of drought-tolerant transgenic Arabidopsis thaliana plants via microarrays. With the exception of the transgene itself (ABF3), which codes for a regulatory transcription factor involved in triggering the expression of other genes associated with drought response, the transcriptome revealed no significant differences in gene expression between the transgenic lines and the controls [32].

Here’s one which found that transgenesis resulted in fewer off-target changes to the transcriptome of wheat grain than did conventional cross-breeding [33].

The following meta-analysis incorporated studies with multiple-”omics” comparisons. Ricroch et al. found that genetic engineering had less of an impact on plant gene expression and composition than conventional plant breeding. They also found that environmental factors such as field location, sampling time, and agricultural practices had more impact on outcomes than transgenesis did [34].

This metabolomic analysis of transgenic wheat did reveal some subtle differences in amino acid profile as well as maltose and sucrose concentrations relative to conventional parental controls. However, these differences were well within the expected range of natural variation and were dwarfed by differences attributable to location, soil, weather etc [35].

The following excerpt is from a paper looking back on 20 years and over 80 studies of compositional comparison research [35]:

“It is concluded that suspect unintended compositional effects that could be caused by genetic modification have not materialized on the basis of this substantial literature.”

The authors also concluded the following:

“Our assessment is that there appears to be overwhelming evidence that transgenesis is less disruptive of crop composition compared with traditional breeding, which itself has a tremendous history of safety.”

The following proteomic study compared transgenic insect-resistant Bt cotton with conventional cotton. The authors found some subtle differences in the expression and/or interaction of 35 proteins, most of which were involved in photosynthetic pathways and/or carbohydrate transport and metabolism, as well as some chaperone proteins involved in post-translational modification. However, there was no increase in toxic or allergenic proteins, and the authors concluded that the differences were minor enough not to constitute a sharp change to the proteome in transgenic cotton leaves [37].

This one found that gene expression differences between insect resistant maize and conventional were best explained by natural variation and environmental considerations [38].

Transgenesis has also been compared with marker-assisted backcrossing (MAB), a form of selective backcrossing in which certain genetic markers are used to ascertain whether a plant possesses the desired trait, so that it is not necessary to wait until the plant is fully developed [52]. This speeds up the process. Using transcriptome comparisons between transgenic and MAB rice cultivars, the authors found that the transgenic rice had a whopping 40% fewer changes in gene expression relative to the controls than did the rice arrived at via marker-assisted backcrossing [53].
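A back-of-the-envelope sketch of the kind of tally such transcriptome comparisons produce. The fold-change values and the 2-fold cutoff below are invented purely for illustration; they are not the data from the rice study.

```python
# Count genes whose expression differs from the control beyond a
# fold-change cutoff, for a transgenic line and a marker-assisted
# backcross (MAB) line. All values are invented for illustration.
from math import log2

def n_changed(fold_changes, log2_cutoff=1.0):
    """Genes with |log2 fold-change| above the cutoff (here, > 2-fold)."""
    return sum(1 for fc in fold_changes if abs(log2(fc)) > log2_cutoff)

transgenic_vs_control = [1.0, 0.4, 2.3, 1.1, 0.9, 1.0]
mab_vs_control        = [3.1, 0.4, 2.3, 0.2, 1.0, 4.5]

n_tg, n_mab = n_changed(transgenic_vs_control), n_changed(mab_vs_control)
print(n_tg, n_mab)  # 2 5
```

Reporting “X% fewer changes” is then just a comparison of these two counts across the full gene set.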

Here is one in which comprehensive metabolomic comparisons were performed on transgenic potatoes relative to closely related cultivars. Other than the already-anticipated metabolites, the authors’ mass spectrometric and chromatographic analyses found no significant metabolomic differences between the transgenic lines and the conventional ones [39]. These are but a handful of the studies of these types that are out there. Here are some more [54],[55],[56],[57].


Comparisons with Mutagenesis

Mutagenesis is a common method of modification in which ionizing radiation or mutagenic chemicals are used to speed up the mutation rate of seeds so that desirable traits which randomly emerge can be artificially selected for in subsequent generations [40]. By US standards, plants developed in this way aren’t legally considered “GMO.” Unlike genetically engineered seeds, they can be used in organic farming.

Given that the procedure’s entire purpose is to speed up mutations, it’s reasonable to wonder how their rates of off-target changes compare to those attributable to transgenic techniques.

The following study found that transgenesis resulted in an entire order of magnitude fewer off-target structural variations to the genome than mutagenesis did. Using what’s called a tiling microarray, the authors also found that, although rare, structural variants in transgenic varieties typically occurred directly adjacent to the point of transgene insertion, or occasionally at unlinked loci on different chromosomes [41]. (Note: I had initially included a section in this post describing the premise of DNA microarrays and RNA-seq, but omitted it because it would have made the article way too long.)

“On average, the number of genes affected by structural variations in transgenic plants was one order of magnitude less than that of fast neutron mutants and two orders of magnitude less than the rates observed between cultivars.”

Another study found that transcriptome alteration was greater in mutagenic breeding than with transgenesis [51].

Despite this, seeds arrived at via mutagenesis undergo no safety evaluation or substantial equivalence testing whatsoever prior to commercialization. A coherent justification for this regulatory double standard has not been forthcoming.

Harmful and/or Undesired Results From Conventional Breeding

Regardless of whether they were genetically engineered, all foods contain substances that could be potentially hazardous in sufficiently high amounts. Unlike GE, however, non-GE methods of modification actually have resulted in foods with harmful unintended changes making it into the food supply.

For example, the kiwi fruit, which was brought about by conventional breeding, has been implicated in severe anaphylaxis in predisposed individuals [42].

Similarly, the conventionally bred Lenape potato was found to contain dangerously elevated solanine levels in the late 1960s and was subsequently removed from the market [43],[44]. Ironically, there have been experiments in which a transgenic version of the Lenape potato was shown to express substantially lower solanine levels [45].

More recently, a similar case arose with the heritage Magnum Bonum potato variety in Sweden, which was likewise removed from the market [46].

Elevated psoralen levels in celery plants are another example in which high levels of a toxic compound resulted from traditional breeding [47]. These chemicals confer natural resistance to insect predation and make the plant more aesthetically appealing to consumers, but they are also irritants which can become toxic in higher amounts [48].

Cucurbitacins are a class of biochemical compounds produced by some plants as a defense against herbivores [49]. There have been cases in which conventionally bred zucchini squash varieties have produced such compounds in potentially dangerous quantities [50].

To be clear, there do also exist cases in which GE resulted in unintended off-target changes. However, as the comprehensive 2016 literature review by the National Academy of Sciences puts it:

“Because GE crops are regulated to a greater degree than are conventionally bred, non-GE crops, it is more likely that traits with potentially hazardous characteristics will not pass early developmental phases. For the same reason, it is also more likely that unintentional, potentially hazardous changes will be noticed before commercialization either by the breeding institution or by governmental regulatory agencies.”



The bottom line is that we humans have been modifying our food by one method or another for thousands of years. I am well aware that contrarians to the mainstream scientific position like to over-emphasize that GE is different, and strictly speaking, it is trivially true that GE is different, since every method of genetic alteration is by definition different from every other method: wide-cross hybridization is different from transgenesis, which is different from protoplast fusion, which is different from miRNA gene silencing, which is different from mutagenesis, which is different from polyploidy, and so on.

However, that argument is ultimately a red herring, because it ignores that the ways in which GE is different are its strengths, not its weaknesses. Overwhelming scientific evidence indicates that commercially available GE foods are at least as safe as their conventional counterparts, and that the process itself does not increase the likelihood of harmful unintended off-target changes relative to traditional methods of modification. The latter conclusion is supported by multiple converging lines of genomic, transcriptomic, proteomic, and metabolomic evidence, as well as by compositional equivalence studies. Moreover, because they are far more stringently regulated, any potentially hazardous unintended changes are far more likely to be caught prior to commercialization with GE foods than with non-GE foods.

Furthermore, vague pie-in-the-sky anti-GE arguments invoking potentially harmful future unknowns could be applied to virtually any technology, young or old. Such arguments fail to consider the potential consequences of rejecting the technology, and they tend to be incapable of generating testable hypotheses, which renders them unfalsifiable and therefore unscientific. In the instances where such arguments have been formulated well enough to generate testable hypotheses (such as the prediction that GE should result in greater changes to composition and gene expression), those predictions have been demonstrated to be false.


– Credible Hulk


[1] Hulk, C. (2015). The International Scientific Consensus On Genetically Engineered Food Safety. The Credible Hulk. Retrieved 23 October 2017, from

[2] Funk, C., & Rainie, L. (2015). Public and Scientists’ Views on Science and Society. Pew Research Center: Internet, Science & Tech. Retrieved 23 October 2017, from

[3] Caudill, E. (2013). Intelligently designed : How creationists built the campaign against evolution. Urbana, Chicago, and Springfield: University of Illinois Press.

[4] Collomb, J. (2014). The Ideology of Climate Change Denial in the United States. European Journal Of American Studies, 9(1).

[5] Rubin, J. (2017). Opinion | Trump’s climate-change denial rattles U.S. businesses. Washington Post. Retrieved 23 October 2017, from

[6] Cook, J., et al. (2016). Consensus on consensus: A synthesis of consensus estimates on human-caused global warming. Environmental Research Letters, 11(4). doi:10.1088/1748-9326/11/4/048002

[7] Ruishalme, I. (2013). Is There a Consensus about Climate Change?. Thoughtscapism. Retrieved 23 October 2017, from

[8] Tafuri, Silvio & Martinelli, D & Prato, R & Germinario, C. (2011). [From the struggle for freedom to the denial of evidence: history of the anti-vaccination movements in Europe]. Annali di igiene : medicina preventiva e di comunità. 23. 93-9.

[9] History of Anti-vaccination Movements | History of Vaccines. (2017). Retrieved 24 October 2017, from

[10] Theobald, D. (2010). A formal test of the theory of universal common ancestry. Nature, 465(7295), 219-222.

[11] Steel, M., & Penny, D. (2010). Origins of life: Common ancestry put to the test. Nature, 465(7295), 168-169.

[12] Weiss, M., Sousa, F., Mrnjavac, N., Neukirchen, S., Roettger, M., Nelson-Sathi, S., & Martin, W. (2016). The physiology and habitat of the last universal common ancestor. Nature Microbiology, 1(9), 16116.

[13] Watanabe, K., & Suzuki, T. (2008). Universal Genetic Code and its Natural Variations. Encyclopedia Of Life Sciences.

[14] Koonin, E., & Novozhilov, A. (2009). Origin and evolution of the genetic code: The universal enigma. IUBMB Life, 61(2), 99-111.

[15] Isenbarger, T., Carr, C., Johnson, S., Finney, M., Church, G., & Gilbert, W. et al. (2008). The Most Conserved Genome Segments for Life Detection on Earth and Other Planets. Origins Of Life And Evolution Of Biospheres, 38(6), 517-533.

[16] Bejerano, G. (2004). Ultraconserved Elements in the Human Genome. Science, 304(5675), 1321-1325.

[17] Turanov, A., Lobanov, A., Fomenko, D., Morrison, H., Sogin, M., & Klobutcher, L. et al. (2009). Genetic Code Supports Targeted Insertion of Two Amino Acids by One Codon. Science, 323(5911), 259-261.

[18] Gilbert, S., & Barresi, M. (2016). Developmental biology (6th ed.). Sunderland (Mass.): Sinauer.

[19] Fuentes, I., Stegemann, S., Golczyk, H., Karcher, D., & Bock, R. (2014). Horizontal genome transfer as an asexual path to the formation of new species. Nature, 511(7508), 232-235.

[20] Marine Biological Laboratory. (2015, February 3). Sea slug has taken genes from algae it eats, allowing it to photosynthesize like a plant. ScienceDaily. Retrieved October 23, 2017 from

[21] Kyndt, T., Quispe, D., Zhai, H., Jarret, R., Ghislain, M., & Liu, Q. et al. (2015). The genome of cultivated sweet potato contains AgrobacteriumT-DNAs with expressed genes: An example of a naturally transgenic food crop. Proceedings Of The National Academy Of Sciences, 112(18), 5844-5849.

[22] Gelvin, S. (2003). Agrobacterium-Mediated Plant Transformation: the Biology behind the “Gene-Jockeying” Tool. Microbiology And Molecular Biology Reviews, 67(1), 16-37.

[23] Committee on Identifying and Assessing Unintended Effects of Genetically Engineered Foods on Human Health. (2004). Safety of Genetically Engineered Foods (p. 49). Washington: National Academies Press.

[24] EFSA Panel on Genetically Modified Organisms (GMO); Scientific Opinion on Guidance for risk assessment of food and feed from genetically modified plants. EFSA Journal 2011;9(5): 2150. [37 pp.] doi:10.2903/j.efsa.2011.2150.

[25] Mithieux, S., & Weiss, A. (2005). Elastin. Fibrous Proteins: Coiled-Coils, Collagen And Elastomers, 437-461.

[26] NatureEducation. (2017). Scitable. Retrieved 24 October 2017, from

[27] Griffin, J., & Vidal-Puig, A. (2008). Current challenges in metabolomics for diabetes research: a vital functional genomic tool or just a ploy for gaining funding?. Physiological Genomics, 34(1), 1-5.

[28] Nordström, A., O’Maille, G., Qin, C., & Siuzdak, G. (2006). Nonlinear Data Alignment for UPLC−MS and HPLC−MS Based Metabolomics:  Quantitative Analysis of Endogenous and Exogenous Metabolites in Human Serum. Analytical Chemistry, 78(10), 3289-3295.

[29] Crockford, D., Maher, A., Ahmadi, K., Barrett, A., Plumb, R., Wilson, I., & Nicholson, J. (2008). 1H NMR and UPLC-MSE Statistical Heterospectroscopy: Characterization of Drug Metabolites (Xenometabolome) in Epidemiological Studies. Analytical Chemistry, 80(18), 6835-6844.

[30] Bentley, R. (1999). Secondary Metabolite Biosynthesis: The First Century. Critical Reviews In Biotechnology, 19(1), 1-40.

[31] Metzdorff, S., Kok, E., Knuthsen, P., & Pedersen, J. (2006). Evaluation of a non-Targeted “Omic” approach in the safety assessment of genetically modified plants. Plant Biol (stuttg), 8(5), 662-672. doi:10.1055/s-2006-924151

[32] Abdeen, A., Schnell, J., & Miki, B. (2010). Transcriptome analysis reveals absence of unintended effects in drought-tolerant transgenic plants overexpressing the transcription factor ABF3. BMC Genomics, 11(1), 69.

[33] Baudo, M., Lyons, R., Powers, S., Pastori, G., Edwards, K., Holdsworth, M., & Shewry, P. (2006). Transgenesis has less impact on the transcriptome of wheat grain than conventional breeding. Plant Biotechnology Journal, 4(4), 369-380.

[34] Ricroch, A. (2013). Assessment of GE food safety using ‘-omics’ techniques and long-term animal feeding studies. New Biotechnology, 30(4), 349-354.

[35] Herman, R. A., & Price, W. D. (2013). Unintended compositional changes in genetically modified (GM) crops: 20 years of research. Journal of agricultural and food chemistry, 61(48), 11695-11701.

[36] Anderson, J., Michno, J., Kono, T., Stec, A., Campbell, B., Curtin, S., & Stupar, R. (2016). Genomic variation and DNA repair associated with soybean transgenesis: a comparison to cultivars and mutagenized plants. BMC Biotechnology, 16(1).

[37] Wang, L., Wang, X., Jin, X., Jia, R., Huang, Q., Tan, Y., & Guo, A. (2015). Comparative proteomics of Bt-transgenic and non-transgenic cotton leaves. Proteome Science, 13(1).

[38] Coll, A., Nadal, A., Collado, R., Capellades, G., Kubista, M., Messeguer, J., & Pla, M. (2010). Natural variation explains most transcriptomic changes among maize plants of MON810 and comparable non-GM varieties subjected to two N-fertilization farming practices. Plant Molecular Biology, 73(3), 349-362.

[39] Catchpole, G., Beckmann, M., Enot, D., Mondhe, M., Zywicki, B., & Taylor, J. et al. (2005). Hierarchical metabolomics demonstrates substantial compositional similarity between genetically modified and conventional potato crops. Proceedings Of The National Academy Of Sciences, 102(40), 14458-14462.

[40] Brown, N. (2011). 20. Mutagenesis – PlantBreeding. Retrieved 24 October 2017, from

[41] Anderson, J., Michno, J., Kono, T., Stec, A., Campbell, B., Curtin, S., & Stupar, R. (2016). Genomic variation and DNA repair associated with soybean transgenesis: a comparison to cultivars and mutagenized plants. BMC Biotechnology, 16(1).

[42] Kerzl, R., Simonowa, A., Ring, J., Ollert, M., & Mempel, M. (2007). Life-threatening anaphylaxis to kiwi fruit: Protective sublingual allergen immunotherapy effect persists even after discontinuation. The Journal of Allergy and Clinical Immunology, 119(2), 507-508.

[43] Akeley, R., Mills, W., Cunningham, C., & Watts, J. (1968). Lenape: A new potato variety high in solids and chipping quality. American Potato Journal, 45(4), 142-145.

[44] Zitnak, A., & Johnston, G. (1970). Glycoalkaloid content of B5141-6 potatoes. American Potato Journal, 47(7), 256-260.

[45] McCue, K., Allen, P., Rockhold, D., Maccree, M., Belknap, W., & Shephard, L. et al. (2003). Reduction of Total Steroidal Glycoalkaloids in Potato Tubers Using Antisense Constructs of a Gene Encoding A Solanidine Glucosyl Transferase. Acta Horticulturae, (619), 77-86.

[46] Hellenäs, K., Branzell, C., Johnsson, H., & Slanina, P. (1995). High levels of glycoalkaloids in the established swedish potato variety magnum bonum. Journal Of The Science Of Food And Agriculture, 68(2), 249-255.

[47] Beier, R. (1990). Natural Pesticides and Bioactive Components in Foods. Reviews Of Environmental Contamination And Toxicology, 47-137.

[48] Beier, R., & Oertli, E. (1983). Psoralen and other linear furocoumarins as phytoalexins in celery. Phytochemistry, 22(11), 2595-2597.

[49] Ferguson, J., Fischer, D., & Metcalf, R. (1983). A report of cucurbitacin poisonings in humans. Cucurbit Genetics Cooperative Report, 6, 73-74.

[50] Rymal, K., Chambliss, O., Bond, M., & Smith, D. (1984). Squash containing toxic cucurbitacin compounds occurring in california and alabama. Journal of Food Protection, 47(4), 270-271. doi:10.4315/0362-028X-47.4.270

[51] Batista, R., Saibo, N., Lourenço, T., & Oliveira, M. M. (2008). Microarray analyses reveal that plant mutagenesis may induce more transcriptomic changes than transgene insertion. Proceedings of the National Academy of Sciences, 105(9), 3640-3645.

[52] Collard, B. C. Y., Jahufer, M. Z. Z., Brouwer, J. B., & Pang, E. C. K. (2005). An introduction to markers, quantitative trait loci (QTL) mapping and marker-assisted selection for crop improvement: the basic concepts. Euphytica, 142(1-2), 169-196.

[53] Gao, L., Cao, Y., Xia, Z., Jiang, G., Liu, G., Zhang, W., & Zhai, W. (2013). Do transgenesis and marker-assisted backcross breeding produce substantially equivalent plants? A comparative study of transgenic and backcross rice carrying bacterial blight resistant gene Xa21. BMC Genomics, 14(1), 738.

[54] Baudo, M. M., Lyons, R., Powers, S., Pastori, G. M., Edwards, K. J., Holdsworth, M. J., & Shewry, P. R. (2006). Transgenesis has less impact on the transcriptome of wheat grain than conventional breeding. Plant Biotechnology Journal, 4(4), 369-380.

[55] Gregersen, P. L., Brinch-Pedersen, H., & Holm, P. B. (2005). A microarray-based comparative analysis of gene expression profiles during grain development in transgenic and wild type wheat. Transgenic Research, 14(6), 887-905.

[56] Coll, A., Nadal, A., Palaudelmas, M., Messeguer, J., Melé, E., Puigdomenech, P., & Pla, M. (2008). Lack of repeatable differential expression patterns between MON810 and comparable commercial varieties of maize. Plant Molecular Biology, 68(1-2), 105-117.

[57] Harrigan, G. G., Lundry, D., Drury, S., Berman, K., Riordan, S. G., Nemeth, M. A., … & Glenn, K. C. (2010). Natural variation in crop composition and the impact of transgenesis. Nature Biotechnology, 28(5), 402-404.


Why The Asbestos Gambit Fails

People who oppose one or more areas of mainstream science have developed a wide variety of creative ways of rationalizing their rejection of scientific evidence and scientific consensus. Realizing that they cannot rebut a particular scientific idea on the basis of the evidence, some of them will instead resort to attacking the reliability of scientific knowledge more generally. A popular method of doing so is the Asbestos Gambit. The argument is that the story of asbestos implies that areas of strong scientific consensus can’t be trusted. The purpose of this article is to examine the history of asbestos use and the evolution of our knowledge of the health dangers it presents, and to explain why the Asbestos Gambit is a terrible argument on multiple levels.

Asbestos and its Hazards

Asbestos is the generic name for a set of 6 silicate mineral types which have been utilized by human cultures as far back as 5,000 years ago [1]. These 6 types include 5 minerals in the amphibole category: actinolite, amosite (aka brown asbestos), anthophyllite, crocidolite (aka blue asbestos), and tremolite, as well as chrysotile (aka white asbestos), which falls under the serpentine category, and is the form most commonly used in walls, ceilings and floors of homes and businesses in the US [1].

Although some of these types are likely more hazardous than others, all 6 types are currently classified as human carcinogens by the EPA, the U.S. Department of Health and Human Services, and the International Agency for Research on Cancer [2],[3],[4],[5]. Mesothelioma in particular is a relatively rare malignant cancer of the pleura that is associated almost exclusively with asbestos exposure, and asbestos has also been implicated in an increased risk of asbestosis, a chronic inflammatory lung disease [6].

Consequently, domestic usage of asbestos has decreased considerably in most developed countries since the early 1970s [7]. Its use has been banned in countries such as Australia and the UK, where the asbestos-related death tolls were particularly high [30],[31]. The US has no official asbestos ban, but we do have the Clean Air Act of 1970, the Toxic Substances Control Act (TSCA) of 1976, and the Asbestos Hazard Emergency Response Act (AHERA) of 1986, which, in aggregate, give the EPA the authority to regulate asbestos use, place restrictions on its use and disposal, and establish inspection and removal standards for asbestos in schools [26],[27],[28]. Here is a list of countries with full asbestos bans in place [29].

What is the Asbestos Gambit?

Unfortunately, the story of asbestos has occasionally been co-opted by ideologues and reframed as an argument against the reliability of scientific knowledge, providing an excuse to reject scientific consensus whenever its implications conflict with their personal agenda. The argument is essentially that the story of asbestos implies that science is unreliable and cannot be trusted, on the grounds that scientists said asbestos was safe when it actually wasn’t. This is curious, because asbestos is a set of naturally occurring substances whose adverse health effects were unknown prior to the modern scientific enterprise, and were only discovered via the scientific method itself. The idea is a variant of the old “science has been wrong before, therefore we should ignore its conclusions even now” argument: a common trope utilized by promoters of pseudoscience and critics of mainstream science in general.

There are several reasons why the reasoning underlying the Asbestos Gambit is unsound. Even if it were the case that scientists got it wrong with asbestos, the self-correcting nature of science is among its strengths: not its weaknesses. And when scientific knowledge is wrong, the only reason we ever find out is thanks to newer science. That means that the claimant’s major premise, that “science was wrong,” takes for granted something we only know thanks to science, which, according to the claimant’s own conclusion, cannot be relied upon. The argument is practically self-refuting. In this case, however, the turn of events itself is being misrepresented. The argument implies that there was once a strong scientific consensus that asbestos was perfectly safe, and that only subsequently did people realize the science had been wrong. The claimant then argues that this constitutes a good reason to reject well-supported scientific theories. Like many other examples commonly used to advance this argument, it relies on a historically revisionist narrative. The actual history of how scientific knowledge evolved with respect to mesothelioma (and the health risks of asbestos exposure more generally) is long and complicated.

How the Dangers of Asbestos were Discovered

Contrary to popular belief, there is no unambiguous support in the primary source material for the idea that people in the ancient world knew how hazardous asbestos was. It’s possible that this notion arose in hindsight after people began to realize its health effects in the 20th century, but the evidence commonly cited for it is weak and vague at best.

For instance, it is frequently claimed that the Roman naturalist Pliny the Elder reported adverse health effects among slaves who wove asbestos into fabrics, but the evidence for this is extremely weak and has been contested by some scholars on the grounds that the primary sources provide no support for it [15],[35]. Pliny mentions asbestos three times in his 37-volume Natural History, but none of those passages mention adverse health effects from it [16],[17],[18]. The line “disease of slaves” quoted on many asbestos-related websites actually comes from a passage that never even mentions asbestos [19].

Another often-quoted passage references workers using “masks of loose bladder-skin, in order to avoid inhaling the dust, which is highly pernicious” [22],[23]. However, this section (which Pliny drew from the works of Dioscorides) was about workers in the manufacture of minium (lead(II,IV) oxide, aka red lead) products, and makes no mention of asbestos [22],[24]. In fact, if anything, Pliny’s account in Book 36, chapter 31 suggests that he believed asbestos to have healing properties, even going as far as to say that it “effectually counteracts all noxious spells, those wrought by magicians in particular” [17],[20].

Similarly, many of those same internet sources repeat the idea that Strabo, the Greek geographer, reported frequent sickness among slaves working in asbestos mines. However, it is believed that the often referenced passage in Geography in which Strabo says “air in the mines is both deadly and hard to endure on account of the grievous odor of the ore, so that the workmen are doomed to a quick death” is actually in reference to arsenic mines: not asbestos [21]. This appears to be one of those misconceptions that got repeated so many times that it became part of the folklore.

The earliest case likely to have been mesothelioma was documented back in 1767, but no strong association with asbestos was established until nearly two centuries later [10],[11]. As for asbestosis and other lung complications: although some evidence connecting asbestos to pulmonary fibrosis had begun to emerge in asbestos mine workers as early as the turn of the 20th century, with epidemiological data correlating “dusty trades” with early mortality by 1918, many clinical diagnoses in the early 20th century were confounded by the simultaneous presence of tuberculosis. It wasn’t until 1928 that the first non-tuberculosis case of asbestosis was unambiguously diagnosed, named, and documented [1],[8],[9],[13].

Compelling preliminary evidence of an association between asbestos exposure and malignant mesothelioma didn’t emerge until the late 1940s to early 1950s, and it wasn’t until the 1960s that a strong scientific consensus started to take shape [10],[12]. A connection to lung cancer was also documented in 1955 [14].

Why the Asbestos Gambit Fails

The contrarians who use the story of asbestos to discredit science they don’t like would have us believe that scientists researched carelessly, hastily and arrogantly proclaimed a scientific consensus that asbestos was harmless, and were later shown to be wrong after considerable human cost had already accumulated. As you can see, that is not what happened. In the past, methods or substances whose common usage predated the scientific era were often grandfathered in, so to speak. They were presumed acceptable unless proven otherwise, particularly in the case of naturally occurring substances which had been utilized for millennia. So, the usage of asbestos was never the result of a robust scientific consensus based on the convergence of multiple lines of scientific evidence on its safety. Rather, it was the scientific enterprise itself that was responsible for people learning that it was unsafe.

This is exemplary of how opponents of various areas of science distort facts to cast doubt on the veracity of science they don’t like. They do this to introduce sophisticated doubt in the public sphere about the reliability of the scientific consensus on topics such as evolution, the safety of genetically engineered food crops, the age of the earth, the efficacy of vaccines, and the reality of anthropogenic global warming. “After all,” they argue, “look how long it took scientists to figure out the hazards of asbestos. How can we trust scientists now?” Yet, there never was anything about asbestos safety resembling the formidable body of scientific evidence on which the scientific consensus on each of those topics was built, so the Asbestos Gambit is a complete non sequitur.

Corporate Malfeasance

Another contention closely related to this is the idea that asbestos companies knowingly kept quiet about the dangers of asbestos, or even actively worked to sow the seeds of doubt in order to delay action and distort public perception of the strength of the science. Strong cases have been made that some asbestos companies dragged their feet while knowing more than they let on, and the argument that they actively tried to downplay the severity of the problem has been the subject of many courtroom battles. In 1989, the EPA issued its final Asbestos Ban and Phase-Out Rule to prohibit the manufacture and importation of asbestos in the US, which was promptly overturned, thanks in no small part to a lawsuit by an asbestos industry organization: Corrosion Proof Fittings v. EPA, 947 F.2d 1201 (5th Cir. 1991) [32],[33],[34].

We’ve seen this sort of behavior by companies before, such as in the case of tobacco companies delaying public acceptance of an emerging scientific consensus on the dangers of smoking, and it is certainly problematic [25]. Any time special interest groups of any kind delay or obfuscate public understanding of scientific issues, it removes people’s ability to make sound decisions by impeding the flow of accurate information.

However, that has little to nothing to do with the state of the science itself. Ironically, this sort of behavior is precisely what the people rejecting the scientific consensus on topics like GMO food safety, vaccine efficacy, and anthropogenic global warming are doing now. Rather than going through the proper channels by publishing newer and better science to challenge the current weight of the evidence, they instead resort to political rhetoric, bad logic, bad science, and sowing public doubt on the state and/or reliability of scientific knowledge. Yet, these are likely to be the same people who will use the Asbestos Gambit and similar arguments to build a manufactroversy in order to persuade people to ignore scientific consensus.

For example, the debunked Oregon Petition Project was an attempt to obscure the weight of the scientific consensus on human-caused climate change. A document assembled by the Discovery Institute which boasted of 100+ scientists who reject the theory of evolution was humorously rebutted by the National Center for Science Education with Project Steve: a list composed exclusively of scientists named Steve who accept evolution, which nevertheless dwarfed the Discovery Institute’s list. Similarly, anti-GMO campaigners have written things such as the I-SIS letter in an attempt to sow uncertainty and doubt about the mainstream scientific consensus position on the safety of genetically engineered food crops.

If anything, all of this highlights the importance of learning to tell the difference between science and the subterfuge of ideologues and special interest groups. The asbestos industry never controlled the science and was certainly never able to buy off an international scientific consensus. At worst, some of them may have succeeded in delaying policy action and public acceptance of what the scientific evidence was showing. That (again) goes to show how important it is to look at the science itself.


Far from being a justification for rejecting or ignoring well-supported scientific conclusions, the real lessons from the story of asbestos are that just because something is naturally occurring or has been used since the pre-scientific era does not preclude it from being unsafe, and above all, that it’s critical to examine the weight of scientific evidence while learning to filter out the noise of spin doctors and ideologues.

People may differ in their personal value-hierarchies, and adjudication on matters of political legislation and public policy always involves normative elements, but they nevertheless can and should at least be informed by scientific knowledge. Matters of brute fact should always be the one consistent region of common ground between groups with competing values and priorities. And when it comes to generating reliable knowledge of the physical world, no system ever devised by humanity can rival the power of the scientific method.

Credible Hulk


[1] Ross, M., & Nolan, R. P. (2003). History of asbestos discovery and use and asbestos-related disease in context with the occurrence of asbestos within ophiolite complexes. SPECIAL PAPERS-GEOLOGICAL SOCIETY OF AMERICA, 447-470.

[2] Silverstein, M. A., Welch, L. S., & Lemen, R. (2009). Developments in asbestos cancer risk assessment. American Journal of Industrial Medicine, 52(11), 850-858.

[3] LaDou, J., Castleman, B., Frank, A., Gochfeld, M., Greenberg, M., Huff, J., … & Soffritti, M. (2010). The case for a global ban on asbestos. Environmental Health Perspectives, 897-901.

[4] US Public Health Service, & US Department of Health and Human Services. (2001). Toxicological profile for asbestos. Atlanta, GA: Agency for Toxic Substances and Disease Registry.

[5] International Agency for Research on Cancer. (1972). IARC Monographs on the Evaluation of Carcinogenic Risk of Chemicals to Man, 1.

[6] Norbet, C., Joseph, A., Rossi, S. S., Bhalla, S., & Gutierrez, F. R. (2015). Asbestos-related lung disease: a pictorial review. Current Problems in Diagnostic Radiology, 44(4), 371-382.

[7] U.S. Geological Survey. Mineral Commodity Summaries 2006: Asbestos

[8] Luus, K. (2007). Asbestos: mining exposure, health effects and policy implications. McGill Journal of Medicine, 10(2), 121.

[9] Seiler, H. E. (1928). A case of pneumoconiosis: result of the inhalation of asbestos dust. British Medical Journal, 2(3543), 982.

[10] Smith, D. D. (2005). The history of mesothelioma. In Malignant Mesothelioma (pp. 3-20). Springer New York.

[11] Lieutaud, J. (1767). Historia anatomico-medica, etc. Paris, 2, 86.

[12] Wagner, J. C., Sleggs, C. A., & Marchand, P. (1960). Diffuse pleural mesothelioma and asbestos exposure in the North Western Cape Province. British Journal of Industrial Medicine, 17(4), 260-271.

[13] Hoffman, F. L. (1918). Mortality from Respiratory Diseases in Dusty Trades (inorganic Dusts) (No. 231). US Government Printing Office.

[14] Doll, R. (1955). Carcinoma of the lung in asbestos-silicosis. British Journal of Industrial Medicine, 12, 81-86.

[15] Maines, R. (2005). Asbestos and Fire: Technological Tradeoffs and the Body at Risk. Rutgers University Press. Retrieved 3 May 2017 from JSTOR.

[16] Bostock, J., & Riley, H. T. (1855). Pliny the Elder. The Natural History, 2, Book 19, The Nature and Cultivation of Flax, and an Account of Various Garden Plants, Chapter 4, “Linen Made of Asbestos.”

[17] Bostock, J., & Riley, H. T. (1855). Pliny the Elder. The Natural History, 2, Book 36, The Natural History of Stones, Chapter 31, Ostracites: Four Remedies. Amianthus; Two Remedies.

[18] Bostock, J., & Riley, H. T. (1855). Pliny the Elder. The Natural History, 2, Book 37, The Natural History of Precious Stones, Chapter 54, Achates; the several varieties of it. Acopos; the remedies derived from it. Alabastritis; the remedies derived from it. Alectoria. Androdamas. Argyrodamas. Antipathes. Arabica. Aromatitis. Asbestos. Aspisatis. Atizoe. Augetis. Amphidanes or Chrysocolla. Aphrodisiaca. Apsyctos. Aegyptilla.

[19] Bostock, J., & Riley, H. T. (1855). Pliny the Elder. The Natural History, 2, Book 7, Man, His Birth, His Organization, and the Invention of the Arts, Chapter 51, Various Instances of Diseases.

[20] Bianchi, C., & Bianchi, T. (2014). Asbestos between science and myth. A 6,000-year story. La Medicina del Lavoro, 106(2), 83-90.

[21] Strabo. ed. H. L. Jones, The Geography of Strabo, Book 12, Chapter 3, Section 40. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924.

[22] Bostock, J., & Riley, H. T. (1855). Pliny the Elder. The Natural History, 2, Book 33, The Natural History of Metals, Chapter 40, “The Various Kinds of Minium.”

[23] Hunter, D. (1969). The Diseases of Occupations (5th ed.).

[24] Dioscorides. (1968). The Greek Herbal of Dioscorides: Illustrated by a Byzantine, AD 512. R. W. T. Gunther (Ed.). Hafner Publishing Company.

[25] Brownell, K. D., & Warner, K. E. (2009). The perils of ignoring history: Big Tobacco played dirty and millions died. How similar is Big Food?. Milbank Quarterly, 87(1), 259-294.

[26] Evolution of the Clean Air Act | Overview of the Clean Air Act and Air Pollution | US EPA. (2017). Retrieved 5 May 2017, from

[27] Summary of the Toxic Substances Control Act | Laws & Regulations | US EPA. (2017). Retrieved 5 May 2017, from

[28] Asbestos Laws and Regulations | Asbestos | US EPA. (2017). Retrieved 5 May 2017, from

[29] Current Asbestos Bans and Restrictions. (2017). Retrieved 5 May 2017, from

[30] Australia – Asbestos Use, Mesothelioma & Asbestos Laws. (2017). Mesothelioma Center – Vital Services for Cancer Patients & Families. Retrieved 5 May 2017, from

[31] United Kingdom – Asbestos, Mesothelioma, Laws & Regulations. (2017). Mesothelioma Center – Vital Services for Cancer Patients & Families. Retrieved 5 May 2017, from

[32] Asbestos Ban and Phase-Out Federal Register Notices | Asbestos | US EPA. (2017). Retrieved 5 May 2017, from

[33] (2017). Retrieved 5 May 2017, from

[34] Kazan, S. (2014). The U.S. Asbestos Ban That Wasn’t | California Mesothelioma Asbestos Lawyers Kazan Law. California Mesothelioma Asbestos Lawyers Kazan Law. Retrieved 5 May 2017, from

[35] (2017). Retrieved 5 May 2017, from



The One True Argument™

Anyone who has spent much time addressing a lot of myths, misconceptions, and anti-science arguments has probably had the experience of some contrarian taking issue with his or her rebuttal to some common talking point on the grounds that it’s not the “real” issue people have with the topic at hand. It does occasionally happen that some skeptic spends an inordinate amount of time refuting an argument that literally nobody has put forward for a position, but I’m specifically referring to situations in which the rebuttal addresses claims or arguments that some people have actually made, but that the contrarian is implying either haven’t been made or shouldn’t be addressed, because they claim that it’s not the “real” argument. This is a form of the No True Scotsman logical fallacy, and is a common tactic of people who reject well-supported scientific ideas for one reason or another. In some cases this may be due to the individual’s lack of exposure to the argument being addressed rather than an act of subterfuge, but it is problematic regardless of whether or not the interlocutor is sincere.

The dilemma is that there are usually many arguments for (and variations of) a particular position, so it’s not usually possible for someone to respond to every possible permutation of every argument that has ever been made against a particular idea (scientific or otherwise). The aforementioned tactic takes advantage of this by implying that the skeptic is attacking a strawman on the grounds that what they refuted was not the “real” main argument for their position. In comment sections on my page, I’ve referred to this as The One True Argument™ fallacy. It’s a deceptive way for the contrarian to move the goalpost while deflecting blame back onto the other person by accusing them of misrepresentation. The argument being addressed has been successfully refuted, but instead of acknowledging that, the interlocutor introduces a brand new argument (often just as flawed as the one that was just deconstructed), and accuses the person debunking it of either not understanding or not addressing The One True Argument™.

Some brands of science denial have brought this to the level of an integrative art form. If argument set A is refuted, they will cite argument set B as The One True Argument™, but if argument set B is refuted, they will either cite argument set A or argument set C as The One True Argument™. If argument sets A, B, and C are all refuted in a row, they’ll either bring out argument set D, or they will accuse the skeptic of relying on verbosity, and will attempt to characterize detailed rebuttals as some sort of vice or symptom of a weak argument (even though the skeptic is merely responding to the claimant’s arguments). I really wish I were making this up, but these are all techniques I’ve seen science deniers use in debates on social media or on their own blogs. Of course, the volume of the rebuttal cannot be helped due to what has come to be known as Brandolini’s Law, AKA Brandolini’s Bullshit Asymmetry Principle (coined by Alberto Brandolini), which states that the amount of energy necessary to refute bullshit is an order of magnitude bigger than that needed to produce it.

The argumentation tactics of sophisticated science deniers and other pseudoscience proponents (or even the less sophisticated ones) could probably fill an entire book, but this is one that I haven’t seen many people address, and it comes up fairly often.

For example, many opponents of genetically engineered food crops claim that they are unsafe to eat, and that they are not tested. Often when someone takes the time to show that they are actually some of the most tested foods in the entire food supply, and that the weight of evidence from decades of research from scientists all across the world has converged on an international scientific consensus that the commercially available GE crops are at least as safe and nutritious as their closest conventional counterparts, the opponents will downplay it as not being the “real” issue. In some cases they will appeal to conspiracy theories or poorly done outlier studies that have been rejected by the scientific community, but in other instances they will invoke The One True Argument™ fallacy. They will claim that nobody is saying that GMOs are unsafe to eat, and that the problem is the overuse of pesticides that GMOs encourage, or that the patents and terminator seeds allegedly permit corporations to sue farmers for accidental cross-contamination and monopolize the food supply by prohibiting seed saving.

Of course, these arguments are similarly flawed. GMOs have actually helped reduce pesticide use (particularly insecticides): not increase it [1],[2],[3], and have coincided with a trend toward using much less toxic and environmentally persistent herbicides [4]. Plant patents have been common in non-GMO seeds since the Plant Patent Act of 1930, terminator seeds were never brought to market, the popularity of seed saving had already greatly diminished several decades before the first GE crops, and there are still no documented cases of non-GMO farmers being sued by GMO seed companies for accidental cross-contamination.

However, although the follow-up arguments are similarly flawed, the fact is that many organizations absolutely are claiming that genetically engineered food crops are unsafe. I’m not going to give free traffic to promoters of pseudoscience if I can help it, but one need only plug in the search terms “gmo + poison” or “gmo + unsafe” to see a plethora of less-than-reputable websites claiming precisely that. The point is that it’s dishonest to pretend that the person rebutting such claims isn’t addressing the “real” contention, because there is no one single contention, and the notion that the foods are unsafe is a very common one.

Another example occurred just the other day on my page. I posted a graphic depicting some data showing how effective vaccines have been at mitigating certain infectious diseases. A commentator responded as shown here:

I responded thusly:

Putting aside the fact that information on vaccine ingredients is easy to obtain (they are laid out in vaccine package inserts), and the fact that increasing life expectancy and population numbers suggest that, if there is any nefarious plot to depopulate the planet, the perpetrators have been spectacularly unsuccessful so far, the point is that this exemplifies The One True Argument™ tactic.

Another common example is when scientists meticulously lay out the arguments and evidence for how we know that global warming and/or climate change are occurring. There are many common contrarian responses to this, some of which employ the One True Argument™ fallacy, such as when the contrarian claims that nobody actually rejects the claim that the change is occurring, but rather they doubt that human actions have played any significant role in it.

Of course, the follow up claim is similarly flawed, since we know that climate changes not by magic but rather when acted upon by physical causes (called forcings), none of which are capable of accounting for the current trend without the inclusion of anthropogenically caused increases in atmospheric concentrations of greenhouse gases such as CO2. This is because most of the other important forcings have either not changed much in the last few decades, or have been moving in the opposite direction of the trend (cooling rather than warming). I’ve explained how solar cycles, continental arrangement, albedo, Milankovitch cycles, volcanism, and meteorite impacts can affect the climate with hundreds of citations from credible scientific journals here, here, here, here, here, here, here, here, here, here, here, and here.

In this instance, although it has become more common than in the past for climate science contrarians to accept the conclusion that climate has been changing but reject human causation, there are still plenty who argue that the warming trend itself is a grand hoax, and that NASA, NOAA, and virtually every other scientific organization on the planet have deliberately manipulated the data to make money. If you doubt this, all you need to do is enter “global warming + hoax + fudged data” into your favorite search engine to see an endless list of webmasters making this claim. In fact, in one study, the position that “it’s not happening” at all was the single most common one expressed in op-ed pieces by climate science contrarians between 2007 and 2010 [10]. Their abundance even increased towards the end of that time period, so it’s flat out untrue that the push-back against the science has centered only on human causation and/or the eventual severity of the problem.

The truth is that there was never anything nefarious going on with the temperature data adjustments. Similar adjustments are performed on data in most scientific fields. They were completely legitimate and scientifically justified. There have even been additional studies in which the assumptions and reasoning behind the ways in which the data was adjusted have been scrutinized and compared to data from reference networks, and the same procedures produced readings that were MORE accurate than the raw non-adjusted data: not less [5],[6],[7],[8],[9]. This is nicely explained here, but I digress; the main point here is not just that the follow-up arguments tend to be similarly flawed, but rather that this technique could in principle be used indefinitely to move the goal posts ad infinitum.

It’s easy to see that this also forces a strategic decision on the part of the skeptic or science advocate. Do you nail them down on their use of this tactic? Do you respond to the follow-up argument they’ve presented as the “real” issue? Do you do both? If so, are there any strategic disadvantages to doing both? Would it make the response excessively long? If so, does that matter? How much can the response be compressed through improved concision without sacrificing accuracy and/or important details? Disingenuous argumentative tactics like these put the contrarian’s opponents in a position where they have to make these kinds of strategic decisions rather than simply focusing on the veracity of specific claims.

As I alluded to earlier, this is not a free license to construct actual strawmen of other people’s positions and ignore their explanations when they attempt to clarify their arguments and conclusions, because people do that too, and that’s no good either. But the One True Argument™ fallacy refers specifically to cases in which a refutation of a common argument is mischaracterized as a strawman in order to introduce a different argument, while construing it as the skeptic’s fault for addressing the argument that was actually made instead of some other one. It’s dishonest, it’s based on bad reasoning, you shouldn’t use it, and you should point it out when others do.


[1] Brookes, G., & Barfoot, P. (2017). Environmental impacts of genetically modified (GM) crop use 1996–2015: impacts on pesticide use and carbon emissions. GM Crops & Food, (just-accepted), 00-00.

[2] Klümper, W., & Qaim, M. (2014). A meta-analysis of the impacts of genetically modified crops. PLoS ONE, 9(11), e111629.

[3] National Academies of Sciences, Engineering, and Medicine. (2017). Genetically Engineered Crops: Experiences and Prospects. National Academies Press (pp. 117-119).

[4] Kniss, A. R. (2017). Long-term trends in the intensity and relative toxicity of herbicide use. Nature Communications, 8, 14865.

[5] Jones, P. D., & Moberg, A. (2003). Hemispheric and large-scale surface air temperature variations: An extensive revision and an update to 2001. Journal of Climate, 16(2), 206-223.

[6] Brohan, P., Kennedy, J. J., Harris, I., Tett, S. F., & Jones, P. D. (2006). Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. Journal of Geophysical Research: Atmospheres, 111(D12).

[7] Jones, P. D., Lister, D. H., Osborn, T. J., Harpham, C., Salmon, M., & Morice, C. P. (2012). Hemispheric and large-scale land-surface air temperature variations: An extensive revision and an update to 2010. Journal of Geophysical Research: Atmospheres, 117(D5).

[8] Hausfather, Z., Menne, M. J., Williams, C. N., Masters, T., Broberg, R., & Jones, D. (2013). Quantifying the effect of urbanization on US Historical Climatology Network temperature records. Journal of Geophysical Research: Atmospheres, 118(2), 481-494.

[9] Hausfather, Z., Cowtan, K., Menne, M. J., & Williams, C. N. (2016). Evaluating the impact of US Historical Climatology Network homogenization using the US Climate Reference Network. Geophysical Research Letters.

[10] Elsasser, S. W., & Dunlap, R. E. (2013). Leading voices in the denier choir: Conservative columnists’ dismissal of global warming and denigration of climate science. American Behavioral Scientist, 57(6), 754-776.


Mean Field Theory and Solar Dynamo Modeling

In a recent post, I talked about the characteristics of the sun’s 11 and 22 year cycles, the observed laws which describe the behavior of the sunspot cycle, how proxy data is used to reconstruct a record of solar cycles of the past, Grand Solar Maxima and Minima, the relationship between Total Solar Irradiance (TSI) and the sunspot cycle, and the relevance of these factors to earth’s climate system. In a follow up post, I went over the structure of the sun, and some of the characteristics of each layer, which laid the groundwork for my last post, in which I explained the solar dynamo: the physical mechanism underlying solar cycles.

Before elaborating on the sun’s role in climate change in the next installment, I’ll be going over an approach called “Mean Field Theory,” which dynamo theorists and other scientists sometimes use to make the modeling of certain systems more manageable. As was the case with part III, this may be a bit more technical than most of my subscribers are accustomed to, but I think the small subset of readers with the tools to digest it will appreciate it. And to be perfectly blunt, writing this was not just for my subscribers. I wanted to do it. It was an excuse for me to dig more deeply into something going on in modern stellar astrophysics that I thought was interesting. The fact that it happened to be tangentially related to my series on climate science was a mere convenience. Anyone wanting to avoid the math and/or to cut to the chase with respect to the effects of solar cycles on climate change might want to skip ahead to part V, or perhaps read only the text portions of this post. However, for those who don’t mind a little bit of math, I present to you the following:

Mean Field Theory

One approach by which scientists and mathematicians can simplify models describing the stochastic effects in large complex systems is called Mean Field Theory (Schrinner 2005). This involves subsuming multiple complicated interactions between different parts of a system into a single averaged effect. In this way, multi-body problems (which are notoriously difficult to solve even with numerical approximation methods on supercomputers) can be reduced to simpler single body problems. For instance, the velocity field u and magnetic field B can each be broken up into two separate terms: a mean term (u₀ and B₀ respectively) and a fluctuating term (u′ and B′ respectively), where the mean terms are taken as averages over time and/or space, depending on what is appropriate to the system being modeled.

In other words, u = u₀ + u′ and B = B₀ + B′, where (by definition) the average velocity <u> = u₀ and the average magnetic field <B> = B₀, because u₀ and B₀ are the mean field terms, while the averages of the fluctuating terms, <u′> and <B′>, are both equal to zero. The angled brackets simply denote a suitable average of the term they enclose (again, taken over time and/or space as deemed appropriate by the scientist or mathematician). The fluctuating terms must average out to zero because the mean field terms are defined as the average of the entire field, and by definition that can only be true if the fluctuations average away.
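These averaging rules are easy to verify numerically. The following Python sketch (my own illustration, not taken from any actual dynamo code) builds hypothetical one-dimensional stand-ins for u and B as a constant mean plus correlated random fluctuations, and checks that each fluctuation averages to roughly zero while the mean of their product does not — which is exactly why a <u′ × B′> term survives the averaging later in the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical 1-D stand-ins for the fields (illustration only):
# a constant mean plus zero-mean random fluctuations, with the B'
# fluctuations partially correlated with the u' fluctuations.
u0, B0 = 2.0, 5.0
u_p = rng.normal(0.0, 1.0, n)              # u' fluctuations
B_p = 0.5 * u_p + rng.normal(0.0, 1.0, n)  # B' fluctuations, correlated with u'
u = u0 + u_p
B = B0 + B_p

print(u.mean())            # ≈ u0 = 2.0 (the mean field survives averaging)
print(u_p.mean())          # ≈ 0       (each fluctuation averages away)
print((u_p * B_p).mean())  # ≈ 0.5, NOT 0 (correlated fluctuations leave a residue)
```

The last line is the whole point: <u′> and <B′> vanish individually, but the average of their product does not, so the cross term cannot simply be dropped.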

Using the vector calculus identity ∇ × (∇ × B) = ∇(∇·B) − ∇²B, and the fact that ∇·B = 0 by Gauss’s law for magnetism, the induction equation ∂B/∂t = ∇ × (u × B − η∇ × B) from the previous section can also be expressed as ∂B/∂t = η∇²B + ∇ × (u × B), where ∇² is the Laplacian operator, applied here to the magnetic field B.

Plugging our mean field expressions for u and B into this form of the induction equation:

∂B₀/∂t + ∂B′/∂t = η∇²B₀ + η∇²B′ +

∇ × (u₀ × B₀) + ∇ × (u₀ × B′) +

∇ × (u′ × B₀) + ∇ × (u′ × B′)

Now we take the average of both sides:

∂<B₀>/∂t + ∂<B′>/∂t = η∇²<B₀> + η∇²<B′> +

∇ × <u₀ × B₀> + ∇ × <u₀ × B′> +

∇ × <u′ × B₀> + ∇ × <u′ × B′>

However, we already know that <B₀> = B₀, <u₀> = u₀, and <u′> = <B′> = 0, so this can be simplified:

∂B₀/∂t = η∇²B₀ + ∇ × (u₀ × B₀) + ∇ × <u′ × B′>.

Note that the <u′ × B′> term does not vanish in general: each fluctuation averages to zero on its own, but the average of their product need not. The ∇ × <u′ × B′> term is typically then replaced with a term ∇ × ε, where ε = <u′ × B′> is called the mean electromotive force (Rädler 2007). This yields the Mean Field Induction Equation:

∂B₀/∂t = η∇²B₀ + ∇ × (u₀ × B₀ + ε)

Although many details of the theory are still being worked out, models based on the solar dynamo mechanism are consistent with the periodicity of the solar cycle, Hale’s law (the opposing magnetic polarity of sunspots above and below the solar equator and the alternation of polarity in successive 11 year cycles), as well as both Sporer’s and Joy’s laws (the apparent migration of sunspots towards the equator as a cycle progresses, as well as their tilt), which together produce the observed sunspot butterfly diagrams I talked about here.

Some models can even simulate variations in amplitude from one cycle to the next, but the precise manner in which Grand Solar Maxima and Minima emerge is still being worked out. Consequently, our models’ ability to reliably and accurately forecast them is currently still limited. Methods have been developed for estimating the sunspot number and solar activity of a cycle’s solar maximum from observations of the poloidal field strength at the preceding cycle’s solar minimum (Schatten 1978, Svalgaard 2005). In addition to only providing information on the solar maximum immediately following the minimum being measured, this approach is limited by the fact that our poloidal field measurements only go back a few cycles, and by the fact that the poloidal magnetic fields during solar minima are weak and have radial as well as meridional components, which makes them difficult to measure reliably.

Other researchers have focused on kinematic flux transport solar dynamo models which, in addition to differential rotation, include the effects of meridional flow in the convective envelope, whereby the poloidal magnetic field is regenerated by the decay of the bipolar magnetically active regions subsequent to their emergence at the solar surface (Dikpati 1999, Dikpati 2006, Choudhuri 2007). Active regions are the high magnetic flux regions at which sunspots emerge.

Image by Andres Munoz-Jaramillo.

This meridional flow sets the period of the cycle, the strength of the poloidal field, and the amplitude of the solar maximum of the subsequent cycle. However, estimates of meridional flow velocities prior to 1996 are highly uncertain (Hathaway 1996). All of these models have been criticized by peers of their proponents. A concise summary of the blow by blow can be viewed here.

As for Grand Solar Maxima and Minima, no comprehensive theory has yet emerged on how they arise and decay, let alone a scientific consensus. However, certain constraints have been identified. There is evidence that the dynamo cycle does continue in some modified form during Maunder-type minima periods. The idea is that the dynamo enters Grand Maxima and Minima by way of chaotic and/or stochastic processes. In the case of Grand Maxima, the dynamo also exits that state via stochastic processes. In the case of Grand Minima, on the other hand, the dynamo then gets “trapped” in this state, but eventually gets out of it via deterministic internal processes (Usoskin 2007). It is also thought that the polarity of the sun’s toroidal magnetic field may lose its equatorial anti-symmetry during such minima, and instead become symmetric (Beer, Tobias and Weiss 1998).

Truly fantastic long term predictive power for solar cycles probably won’t be achieved until poloidal magnetic field generation is better understood, which will likely include improvements in flux transport models, and a more complete characterization of the statistical properties of bipolar magnetic regions (BMRs). For a comprehensive overview of the current state of Solar Dynamo Models and their predictive strengths and limitations, see Charbonneau 2010.

In the next installment, I’ll explain how all of this relates to climate change on earth, and address the elephant in the room: “are solar variations responsible for the current global warming trend?”


Beer, J., Tobias, S., & Weiss, N. (1998). An active Sun throughout the Maunder minimum. Solar Physics, 181(1), 237-249.

Charbonneau, P. (2010). Dynamo models of the solar cycle. Living Reviews in Solar Physics, 7(1), 1-91.

Choudhuri, A. R., Chatterjee, P., & Jiang, J. (2007). Predicting solar cycle 24 with a solar dynamo model. Physical Review Letters, 98(13), 131103.

Coriolis, G. G. (1835). Théorie mathématique des effets du jeu de billard. Carilian-Goeury.

Dikpati, M., & Charbonneau, P. (1999). A Babcock-Leighton flux transport dynamo with solar-like differential rotation. The Astrophysical Journal, 518(1), 508.

Dikpati, M., De Toma, G., & Gilman, P. A. (2006). Predicting the strength of solar cycle 24 using a flux-transport dynamo-based tool. Geophysical Research Letters, 33(5).

Hathaway, D. H. (1996). Doppler measurements of the sun’s meridional flow. The Astrophysical Journal, 460, 1027.

Rädler, K. H., & Rheinhardt, M. (2007). Mean-field electrodynamics: critical analysis of various analytical approaches to the mean electromotive force. Geophysical & Astrophysical Fluid Dynamics, 101(2), 117-154.

Schatten, K. H., Scherrer, P. H., Svalgaard, L., & Wilcox, J. M. (1978). Using dynamo theory to predict the sunspot number during solar cycle 21. Geophysical Research Letters, 5(5), 411-414.

Schrinner, M., Rädler, K. H., Schmitt, D., Rheinhardt, M., & Christensen, U. (2005). Mean-field view on rotating magnetoconvection and a geodynamo model. Astronomische Nachrichten, 326(3-4), 245-249.

Svalgaard, L., Cliver, E. W., & Kamide, Y. (2005). Sunspot cycle 24: Smallest cycle in 100 years? Geophysical Research Letters, 32, L01104.

Usoskin, I. G., Solanki, S. K., & Kovaltsov, G. A. (2007). Grand minima and maxima of solar activity: new observational constraints. Astronomy & Astrophysics, 471(1), 301-309.


A Compilation of Studies and Articles on GE Food Safety and the Scientific Consensus

The following is a list of studies and articles on GE food safety and the scientific consensus, which I’ve compiled for convenient access. It will be updated periodically.



A practical Introduction to Vectors

The mathematical concept of a “vector” is ubiquitous in the realms of physics, engineering and applied mathematics. Typically, the concept is first introduced (usually in a first semester physics course or perhaps a trigonometry or calculus course) as a quantity with both a magnitude and a direction, and is usually represented as a line segment with an arrow at the end of it (like a ray).

Often, these introductory treatments will distinguish what a vector is by contrasting it to what a scalar is. A scalar is just a quantity (albeit possibly with some units of measurement), such as length, weight, mass, speed, an amount of money, or a frequency. On the other hand, examples of vectors would be things like velocity, acceleration, force, momentum, and position (relative to some set of coordinates). In other words, they have a direction as well as a magnitude.


Although vectors can in principle be adapted to any coordinate system, for the sake of simplicity, we will presuppose a Cartesian coordinate system in either 1, 2, or 3 dimensions. In this representation, we’ll have a horizontal x axis and a vertical y axis for 2D, and for 3D we’ll have the y axis horizontal, the z axis vertical, and the x axis coming out of the page at the reader. This is just a matter of historical convention. Occasionally one might see the z axis being used as the one coming out of the page, but most of my pictorial examples involve the former and not the latter. This is a type of coordinate system with what’s called an “orthonormal” basis. I’ll cover the concept of a basis in a later post, but in practical terms, orthonormal just means that the coordinate axes are mutually perpendicular (“ortho”) and that the basis vectors along them have unit length (“normal”). Any points or vectors in the space are constructed by adding linear combinations of unit vectors (magnitude equal to 1) in the x, y and z directions.
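As a quick sanity check of what “orthonormal” means in practice, the short Python snippet below (my own illustration) verifies that the standard Cartesian basis vectors have unit length and are mutually perpendicular:

```python
import numpy as np

# The standard Cartesian basis: the rows of the 3x3 identity matrix
# are the unit vectors along the x, y, and z axes.
i_hat, j_hat, k_hat = np.eye(3)

print(np.linalg.norm(i_hat))  # 1.0 -> unit length ("normal")
print(np.dot(i_hat, j_hat))   # 0.0 -> perpendicular ("ortho")
print(np.dot(j_hat, k_hat))   # 0.0
```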

A vector can be represented by a letter with a little arrow on top of it, but since it’s a PITA to write them that way in this format, I’ll just use capital letters to name vectors for now. Suppose we have a vector, A. There are a number of common notational conventions for representing it.

A = [a1, a2, a3],

or equivalently, in unit vector notation:

A = a1·î + a2·ĵ + a3·k̂ = A_x·î + A_y·ĵ + A_z·k̂,

where the î, ĵ and k̂ with the little hats each represent a unit vector (a vector of unit length) in one of the component directions (x, y, and z respectively in this case), and the a1, a2 and a3 (or A_x, A_y and A_z) symbols represent the “components” of the vector in the x, y and z directions respectively. I will use the A = [a1, a2, a3] format for convenience.

So, for example, supposing you were traveling at 10 m/s in the x-direction. Then your velocity vector v = [10, 0, 0] m/s, since you’re not moving in the y or z directions at all.

Vectors can be added and subtracted much like scalar quantities, but the operation is performed componentwise: you add or subtract each component of one vector with the corresponding component of the other.

For example, if you had a vector, A = [a1, a2, a3], and another vector, B = [b1, b2, b3].

Then A + B = [a1 + b1, a2 + b2, a3 + b3].

Similarly, A – B = [a1 – b1, a2 – b2, a3 – b3].

Let’s try a more concrete example. Suppose your positive x component corresponds to East, your positive y component corresponds to North, and your positive z component corresponds to up in the air, and suppose you have two cars driving in an open field. Car A has a velocity v_a = [40, 30, 0] km/hr (that’s diagonal motion of 40 km/hr east and 30 km/hr north) relative to an observer on the ground, and car B is moving at a velocity v_b = [0, -40, 0] km/hr.

If you wanted to know A’s velocity from the perspective of B, then you’d subtract the velocity of B from the velocity of A.

v_a – v_b = [40 – 0, 30 – (-40), 0 – 0] km/hr = [40, 70, 0] km/hr.
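The arithmetic in the car example can be reproduced in a few lines of Python (a sketch of my own, using plain lists to mirror the [a1, a2, a3] notation above):

```python
def vec_sub(A, B):
    """Componentwise vector subtraction: A - B."""
    return [a - b for a, b in zip(A, B)]

v_a = [40, 30, 0]   # car A's velocity relative to the ground (km/hr)
v_b = [0, -40, 0]   # car B's velocity relative to the ground (km/hr)

# Car A's velocity as seen from car B:
print(vec_sub(v_a, v_b))  # [40, 70, 0]
```

Swapping the arguments gives car B’s velocity as seen from car A, which is just the same vector with every component negated.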

The following Khan Academy video explains some more examples of this:

Something that is typically not mentioned in such introductions (mostly for the sake of expediency) is the fact that this definition of a vector is really only a special case of a much broader-reaching concept rife with many other special cases, each comprising their own unique attributes and applications. Later on, it is customary for the student to learn a more generalized concept of vector “spaces,” whereby a vector space is defined by a list of rules by which the elements of a space must abide in order to qualify as a vector space. This is traditionally taught in a first course on linear algebra, along with concepts such as “rank,” “dimension,” “linear independence,” and a “basis,” and opens the door to a lot of vector spaces which wouldn’t necessarily fit the description of a vector that students are typically taught in introductory physics. These include things like vector “fields” and topological vector spaces, of which metric spaces are a subset, of which normed vector spaces are a subset, of which inner product spaces and Banach spaces are subsets, of which Hilbert spaces are a subset (etc). The latter (Hilbert spaces) are ubiquitous in quantum mechanics and quantum field theory, for example. There may also come a point at which a student learns that some vectors are also a subset of a class of mathematical objects known as “tensors.” Not all vectors are tensors (but all first order tensors are vectors), and not all scalars are tensors (but all zeroth order tensors are scalars).



In actuality, the hierarchy is a bit more involved than depicted here, and understanding it would require covering some concepts such as “completeness” from a branch of math known as functional analysis, which I will not attempt to cover here.




Don’t worry about not knowing what distinguishes these types of spaces from one another right now. The point is to state up front that there are multiple levels to this concept of vectors, instead of simply neglecting to mention that until it can’t be put off any longer, as is often done in earlier college courses. So rather than getting lost down that rabbit hole, just know there are a few basic operations and notational conventions to understand for vectors in the sense of a “quantity possessing a magnitude and a direction,” but that those are but a special case of a broader and very useful mathematical concept that has applications in various scientific sub-disciplines.

In a subsequent lesson, I can go over vector multiplication (i.e. dot products/inner products and cross products), scalar projections, vector projections, vectors as functions of independent variables, vector calculus, the concept of a vector space, and various useful operations involving vectors and vector functions.