Sci Eng Ethics, DOI 10.1007/s11948-015-9657-x

ORIGINAL PAPER

Facing up to Complexity: Implications for Our Social Experiments

Ronnie Hawkins

Received: 6 October 2014 / Accepted: 21 May 2015
© Springer Science+Business Media Dordrecht 2015

Abstract Biological systems are highly complex, and for this reason there is a considerable degree of uncertainty as to the consequences of making significant interventions into their workings. Since a number of new technologies are already impinging on living systems, including our bodies, many of us have become participants in large-scale "social experiments". I will discuss biological complexity and its relevance to the technologies that brought us BSE/vCJD and the controversy over GM foods. Then I will consider some of the complexities of our social dynamics, and argue for making a shift from using the precautionary principle to employing the approach of evaluating the introduction of new technologies by conceiving of them as social experiments.

Keywords Complexity theory · Reductionism · Prion disease (BSE, vCJD) · Genetically modified organisms (GMOs) · Substantially equivalent · Insertional mutagenesis · Whistleblowing · Social cage · Social construction · Ontological subjectivity · John Searle · Precautionary principle · Social experiment · Ibo van de Poel

Department of Philosophy, University of Central Florida, Orlando, FL 32816-1352, USA ([email protected])



For all too long, it seems, a reductivist paradigm has prevailed within western culture, fostering a largely implicit but widespread way of understanding reality as modeled on the lifeless "clockwork universe" of Newtonian mechanics, a worldview metaphysically haunted by Laplace's demon, capable of predicting the position and velocity of every speck of matter for all time and thereby eliminating the possibility of there being "free will" and hence the existence of true human moral agency. This starkly mechanistic metaphysics was grossly inadequate for conceptualizing the world of living beings, including ourselves, but physics remained the model of science embraced by philosophy of science and much of philosophy at large for several centuries, and the successes of reductionistic biochemical and genetic explanations of biological phenomena in the early part of the twentieth century reinforced this view. As increasingly sophisticated technologies for the investigation of living processes were developed later in that century and on into the twenty-first, however, scientists in many fields have begun to speak of "complexity" and of the "emergence" of new properties at higher levels of organization in natural systems, while philosophy has been slow to readjust its metaphysics accordingly (see Mitchell 2009b, 2, 25–26).[1] Ironically, however, some of those same technologies that are disclosing a much more nuanced view of what it means to be alive, one that promises to be far more accommodating of our own experiences of living and of interacting with other living wholes, are at the same time making it possible for us to intervene more and more drastically into the workings of biological systems, often without guidance from the updated metaphysics/ontology that an integration of this newfound knowledge could bring.

I will give a brief overview of what is being learned about biological complexity and then examine some recently adopted technologies that have caused, or might cause, harm to our own living systems—our bodies—with an eye to the uncertainties that necessarily accompany our making even seemingly subtle changes that affect them, and to the steps that have been taken, or not, to regulate them. I will also examine the social context of the science surrounding what is currently the most controversial of these technologies, the genetic modification of food products, and then consider the ontological distinction between the belief systems which structure what John Searle terms our "social reality" and the biophysical systems that complexity science is illuminating. Finally, I will suggest that, since current formulations of the precautionary principle typically do not mark this important distinction, the "social experiment" framework advocated by Ibo van de Poel may better serve to "put in the picture" both what the precautionary principle was designed to protect and the reality of our human agency, insofar as these are both crucial to the evaluation of emerging new technologies.

Part 1: Complex Biological Systems and Examples of Two Technological Interventions into Them

Biological Complexity: The Patterns of Life

There are many kinds of complexity, in the living and the nonliving world. The term "complexity" is itself hard to define, and notoriously difficult to measure, though many theorists agree that whatever it is—and it seems to take in many different kinds of things—it has increased over evolutionary time, on the Earth and perhaps in the universe (see Lineweaver et al. 2013).

[1] Citing a search on "emergence," "properties," and "science" in Google Scholar that yielded more than 500,000 hits, Sandra Mitchell asks, "If the philosophical analyses that dismiss the reality of emergent properties are correct, then why have descriptions of emergent properties in science become so widespread?" (2009b, 26)



In Complexity: A Guided Tour (2009), Melanie Mitchell provides examples of both living and nonliving complexities, from the Amazon rainforest and insect colonies to the brain and the immune system to economies and the World Wide Web. She lists three common characteristics of complex systems: the organization of a large number of components following simple rules into a network without centralized control; both internal and external signaling and information processing; and the ability to adapt their behavior to maintain or extend themselves (Mitchell 2009a, 12–13).

Nonlinearity in the relationships between variables is characteristic of the "chaotic" behavior of nonliving systems like changing weather patterns and the turbulence of water in a stream, as well as that of living organisms and assemblages of organisms. This nonlinearity gives rise to the "butterfly effect" type of complexity, wherein a very slight difference in the initial conditions of a system, like the fluttering of a butterfly's wings, can lead to dramatically different outcomes over time, such as the occurrence, or not, of a tornado somewhere else in the world; such outcomes are deterministic, but unpredictable, since initial conditions are impossible to measure with perfect precision. Small changes in the parameters of a nonlinear system can lead to bifurcation, a sudden change of state affecting the whole system.

Another aspect of many types of complexity is the existence of both negative and positive feedback loops; in living systems, they are continually operative at biochemical, cellular, and organismal levels to maintain bodily homeostasis. Individual organisms can also interact to form what E. O. Wilson has termed "superorganisms," organizational patterns with complex sorts of feedback producing system-level properties, such as the colonies of ants and bees; the striking aerial displays by flocks of starlings, which can resemble the movements of a single entity, have also been explained in terms of "the nonaggregative interaction that derives from the local rules of motion plus feedback among the individuals in group flight" (Mitchell 2009b, 35), each individual responding nonlinearly to what it perceives of the flight patterns of its nearest neighbors. Similar superorganismal patterns, moreover, have been detected to arise when humans unconsciously follow a few simple rules in response to the movements of their nearest neighbors in a crowd (see Fisher 2009, 49–65).

The above-mentioned phenomena fit into the first two categories of biological complexity recognized by Sandra Mitchell (2003, 4–7): constitutive complexity, complexity of structure, and dynamic complexity, complexity of temporospatial processes. A third category is evolved diversity, diversity of form and function resulting from natural selection operating at multiple organizational levels upon variations in ways of staying alive and reproducing over evolutionary time, and thus marked by irreversible historical contingency. All three of these categories of biological complexity can be seen to be at play in the genetic and biochemical structure and function within living cells, the levels of organization we will be most concerned with here.
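The nonlinear behaviors described above, sensitive dependence on initial conditions and bifurcation, can be made concrete in a few lines of code. The following sketch is my own illustration using the logistic map, a standard textbook model, not a model drawn from the sources cited:

```python
# Minimal sketch (my illustration, not a model from the sources cited)
# of the two nonlinear behaviors described above, using the standard
# logistic map x_{n+1} = r * x_n * (1 - x_n).

def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# "Butterfly effect": initial conditions differing by one part in a
# billion diverge to entirely different states within ~60 iterations.
a = logistic_trajectory(4.0, 0.200000000, 60)
b = logistic_trajectory(4.0, 0.200000001, 60)
print(abs(a[-1] - b[-1]))  # typically on the order of the state space itself

# Bifurcation: a small change in the parameter r flips the system from
# settling at one steady value to oscillating between two values.
print(logistic_trajectory(2.9, 0.2, 500)[-4:])  # converges near 0.655
print(logistic_trajectory(3.2, 0.2, 500)[-4:])  # alternates ~0.513 / ~0.799
```

Measured initial conditions that agree to nine decimal places, in other words, still fail to pin down the long-run behavior, which is the sense in which such systems are deterministic yet unpredictable.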
Just as the "clockwork universe" must be given up, the early view of genes as discrete, fixed "beads on a string," with "one gene" coding "one enzyme" in a simple, one-way information flow from DNA to RNA to protein, must also be relinquished, although its value in founding the discipline must be acknowledged. Genes can change position on chromosomes through a process known as transposition, a single gene can code for more than one protein through alternative splicing, its transcription may begin and end at different sites, many stretches of DNA are transcribed into noncoding RNA that plays a regulatory role, and epigenetic changes, such as DNA methylation and histone alterations, can produce heritable changes without altering the DNA sequence (Mitchell 2009a, 274–276).

Moreover, large-scale investigations of gene mutations have revealed that pleiotropy, whereby a mutation in one gene produces effects in multiple phenotypes, and epistasis, the dependence of a gene's contribution to a phenotype on the action of genes at multiple other loci, are the rule rather than the exception. This revises our picture further: phenotypes typically result from the interactions of hundreds or thousands of genes organized into scale-free networks (networks wherein only a few nodes serve as "hubs" connecting with many other nodes); because of the scale-free organization, most genes can be deleted without serious effects, although a hub deletion can have effects that propagate throughout the system, affecting hundreds of other genes (Tyler et al. 2009, 224). Degeneracy, "the ability of elements that are structurally different to perform the same function or yield the same output" (Edelman and Gally 2001, 13763), has been found to be widespread within such networks, such that loss of a specific gene or protein function can typically be compensated for by adjustment of interactions elsewhere, producing the "robustness" exhibited by living systems and contributing to their complexity; it has been detected at 22 different levels of organization, from genetic regulatory sequences to protein folding to developmental pathways, immune responses and neural connectivity, and is thought to provide the variation needed for natural selection to operate as well as to be the inevitable result of the selection process. "[L]ife is largely polygenic, pleiotropic, and syndromic," observe Weiss and Buchanan (2011, 763), insofar as "traits and organisms are made by overlapping, partly sequestered, and partly interacting processes"; "[c]ooperation—that is, co-operation—is a fundamental principle of life at all levels, including within cells, between cells, between tissues and organs, and between organisms as well as species sharing ecosystems."

Given the great complexity of all the factors at play in biological systems, the consequences of making major interventions into their workings are highly unpredictable, and the effects of even small changes are fraught with uncertainty.
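The asymmetry Tyler et al. describe, tolerance of random deletion but vulnerability at the hubs, can itself be illustrated computationally. The sketch below is my own construction, using the networkx library and a Barabási-Albert random graph as a stand-in for a real gene network:

```python
# Illustrative sketch (my construction, not from Tyler et al.): in a
# scale-free network, random node loss is well tolerated, while loss
# of the few highly connected hubs fragments the system. Uses the
# networkx library; the graph is a stand-in for a gene network.
import random
import networkx as nx

def largest_component_after_removal(G, nodes_to_remove):
    """Size of the largest connected component after deleting nodes."""
    H = G.copy()
    H.remove_nodes_from(nodes_to_remove)
    return len(max(nx.connected_components(H), key=len))

random.seed(1)
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)  # scale-free topology

# Delete 50 nodes at random ("most genes can be deleted without serious effects")
random_case = largest_component_after_removal(G, random.sample(list(G.nodes), 50))

# Delete the 50 highest-degree nodes (the "hubs")
hubs = [node for node, deg in sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:50]]
hub_case = largest_component_after_removal(G, hubs)

print(random_case)  # typically ~940+ of the 950 remaining nodes stay connected
print(hub_case)     # typically far smaller: hub loss propagates through the system
```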



An Example of Intervention into a Complex Biological System with Unpredictable Consequences: BSE and vCJD

The pathogenesis of bovine spongiform encephalopathy (BSE) and variant Creutzfeldt-Jakob disease (vCJD) is instructive in this regard, as well as in its implications for how we should think about policy in the face of the large degree of uncertainty that follows manipulatory intervention into biological complexity. Both diseases are classified as TSEs, "transmissible spongiform encephalopathies," so named because they can be transmitted from one animal to another, they cause neuron death and dropout in the brain that produces a "spongy" appearance on histological examination, and they produce progressive neurological deterioration, usually taking years to develop, and ultimately death in the organisms they afflict.



The first TSE to be studied was kuru, discovered in New Guinea in members of an indigenous group that traditionally consumed the body parts of enemies killed in battle, including the brain; the disease could take up to 40 years to manifest, though once it did it was invariably fatal. Scientists initially thought they were dealing with a "slow virus," but after much effort, no virus was ever isolated; eventually the preponderance of scientific opinion converged on an abnormally shaped protein macromolecule, small enough to pass through filters that would capture a virus and far more resistant to degradation, as the "cause" of kuru and the other TSEs. This was quite a shocking discovery for medical science, since all the previously known categories of infectious-disease-producing agents—viruses, bacteria, protozoans, and multicellular parasites—are forms organizationally based on DNA (or RNA) sequences, not protein alone. The abnormal protein, dubbed a "prion" by Stanley Prusiner, who won the 1997 Nobel Prize in Medicine for his work on it, is, in the primary linear sequence of its amino acid chain, a normally occurring membrane protein found in many cell types, including neurons, but one which has somehow become misfolded at the three-dimensional level, making it resist breakdown by the normal cellular "clean-up" processes and instead form gooey clumps of insoluble material that build up in and eventually kill nerve cells. As pure protein, it is also resistant to the normal sterilization methods capable of destroying DNA- and RNA-based disease agents.

Medical science was aware of certain prion diseases long before their causative agent was identified; Creutzfeldt-Jakob disease, the standard and not the "variant" form, was documented as a rare, fatal neurodegenerative disease of humans that popped up sporadically, and scrapie, a neurological disease of sheep that made them rub against things and "scrape" off their wool, was known to have been present in the United Kingdom for several centuries. The outbreak of BSE that occurred in the UK in the mid-to-late 1980s, however, was something new. Large numbers of cattle, primarily older dairy cows, began losing coordination, sometimes thrashing around as if "mad," and on pathological examination their brains were found to be full of microscopic holes, like a sponge. It took some time to track down the origin of the new disease after it was officially recognized in 1986, and meanwhile, within a few years, thousands of cows were showing the clinical signs, with spread into Europe and beyond. More than 185,000 infected cattle were eventually identified in the UK, as well as a lesser number in some European countries, with between one and three million "likely to have been infected with the BSE agent, most of which were slaughtered for human consumption before developing signs of the disease" (Smith and Bradley 2003).

Its origin was eventually traced to modern, more intensive cattle-rearing technologies, developed to increase the production of meat and milk. The diets of these normally herbivorous animals began to be supplemented with meat and bone meal (MBM) obtained from previously slaughtered cattle, sheep, and other animals, products of the "rendering" of carcasses that included brain and spinal cord tissue, finely ground and treated in ways sufficient to destroy known pathogens but ineffective against the insidious, pathogenic prion protein.
Research eventually showed that oral ingestion of a very small amount of material containing the infectious agent—around one milligram (Bradley et al. 2006)—can transform normal, endogenously produced prion proteins into the



pathogenic form by inducing them to change their three-dimensional structure. Our first example of complexity, this transformation has been visualized as a kind of feed-forward process: a normally shaped molecule of the protein attaches to an as-yet-uncharacterized catalytic substrate, which enables it to transform when the complex encounters a molecule of the abnormal protein; the transformed molecule is then released, another normal protein attaches to the substrate, and the process repeats, rapidly amplifying the amount of pathogenic prions in the cell and eventually throughout the organism's nervous system.

Once the evidence converged on MBM as containing the infectious agent, the UK instituted a ban on feeding the rendered products of ruminants (cattle, sheep, goats) back to ruminants in 1988, followed by a ban on adding "Specified Bovine Offal"—brain, spinal cord and other tissue expected to be highly infective—to pig and poultry feed in 1989. The bans were not well enforced, however, and by 2006, more than 44,000 cows born after 1988 had been found with BSE, believed to be a result of continuing exposure to cattle feed in a "massive leakage" that was "attributed to cross-contamination of ruminant rations, mostly with those intended or prepared for pigs and poultry that were legally permitted to contain MBM" (Bradley et al. 2006). It was only in March of 1996 that the "radical step of banning the incorporation of all animal protein in animal feed" was taken in the UK, following the announcement of the discovery of the first ten cases of vCJD, a pathologically distinct form of CJD believed to be associated with consuming infected beef, marked by neurological changes and dementia, preferentially affecting young people, and always fatal (BSE Inquiry 2000).

The practice of refeeding proteinaceous material back to the same animal species in a never-ending cycle constitutes another kind of feedback loop with amplification potential, another nonlinear process, one that resulted in the kind of "J-curve" dynamics exhibited by British cattle when the disease "erupted" in the 1980s. Complexity being what it is, however, a "two-species" loop is also potentially possible, whereby prions may be "amplified in one species and then fed to a secondary species in which they may or may not be decreased before being fed back to the first species," such as a cow-pig-cow alternation (Barnes and Lehman 2013, 86). The results of modeling show that "[t]he ecological disease dynamics can be intrinsically oscillatory, having outbreaks as well as refractory periods which can make it appear that the disease is under control while it is still increasing," and they have led to the conclusion that "non-susceptible species that have been intentionally inserted into a feedback loop to stop the spread of disease do not, strictly by themselves, guarantee its control, though they may give that appearance by increasing the refractory period of an epidemic's oscillations" (ibid, 85).

While such a two-species loop is presently only theoretical and has not as yet been observed, it cannot be ruled out, and the authors caution that "the maximal solution [to prevent disease transmission] is to eliminate all use of animal byproducts in livestock feed" (ibid, 90), which was finally done in the UK and other European countries, but has not been done in the United States.
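The amplification logic of such feedback loops can be caricatured in a few lines of code. The sketch below is my own toy difference-equation model with hypothetical parameters, not the Barnes and Lehman model; it shows only why a recycling loop whose per-cycle gain exceeds one produces J-curve growth, and why pushing the gain below one makes the epidemic decay:

```python
# Toy model (my illustration with hypothetical parameters, not the
# Barnes and Lehman 2013 model) of the rendering-and-refeeding loop:
# infectious material is amplified within cattle by the feed-forward
# conversion process, and a fraction of it is rendered back into feed.

def herd_cycle(initial_dose, growth_per_cycle, recycle_fraction, cycles):
    """Track infectious material in the feed supply across rendering cycles."""
    feed = [initial_dose]
    for _ in range(cycles):
        amplified = feed[-1] * growth_per_cycle    # feed-forward amplification in hosts
        feed.append(amplified * recycle_fraction)  # fraction rendered back into feed
    return feed

# Effective per-cycle gain = growth_per_cycle * recycle_fraction.
# Gain > 1: the J-curve "eruption" seen in the UK herd.
print(herd_cycle(1.0, growth_per_cycle=20, recycle_fraction=0.10, cycles=8))
# Gain < 1 (say, after a partial feed ban): geometric decay, but the
# agent keeps circulating for many cycles before fading.
print(herd_cycle(1.0, growth_per_cycle=20, recycle_fraction=0.02, cycles=8))
```

Even the decaying case keeps nonzero infectivity in circulation for many cycles, which is one way of seeing Barnes and Lehman's warning that an apparent lull need not mean the disease is under control.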
The diets of intensively-raised North American cattle, unlike those in the UK and Europe, are said to be more often supplemented with soybean meal or cottonseed meal than with animal byproducts, but, given the large number of animals



slaughtered in the US, "a prodigious quantity of material—up to 24 million tons or more per year"—is available for rendering into MBM in the US and is legally open to incorporation into the diets of pigs and chickens as well as domestic pets (ibid).

As of 2012, 19 cases of BSE had been identified in Canada and 4 in the US. Of the latter, in addition to the first discovery, in 2003, of the disease in a cow said to have been imported from Canada, BSE was subsequently found in a cow that was born and raised in Texas in June 2005, in a downer cow in Alabama in March 2006, and in a dairy cow sent to a rendering facility in California in April 2012; all three of the US-born cows were 10 years or more of age and were said to have "atypical" strains of BSE which may have arisen spontaneously, although feed transmission could not be ruled out (CDC: BSE, 2013). The USDA refused to identify the pet food plant in Texas at which the first domestic case was detected, but it was noted that it specialized in "turning diseased, dying, and dead animals into pet food or into dried meal for poultry and pigs, as well as into tallow, gelatin and other products" (McNeil 2005).

The effectiveness of the US ban on feeding ruminant material back to ruminants came into question in a report by the GAO in 2000, which found, in assessing compliance with the regulation, that nearly 1700 rendering plants and processing firms out of over 9100 firms visited "were not aware of the regulation" and hence were not following the procedures for separating the two streams of feed supplement (Dyckman 2000). In 2007 Canada introduced an enhanced feed ban prohibiting potentially infectious "specified risk materials" (SRM) from all animal feeds, pet foods and fertilizers as well as cattle feed, and the US instituted an "enhanced BSE-related feed ban" in 2009 to "further harmonize" with Canadian measures, although the details of this enhanced ban and the methods and effectiveness of enforcement measures are not spelled out on the CDC website (CDC: BSE 2013). The risk for BSE in the United States is said to be "very low" by World Organization for Animal Health (OIE) standards, with a "best estimate of 0.167 cases per million" (CDC: BSE), so it seems a decision has been made to relax certain safety measures that were taken during the height of awareness about these new prion diseases. In light of the US's ruminant-to-ruminant feed ban, the BSE surveillance program, and the ban on specified risk materials from food, the OIE upgraded the US risk classification to "negligible" in May 2013, thus "allowing certain commodities that were formerly restricted but pose negligible risk to be imported" into other countries, with the anticipation "that these changes could convince other countries to remove any remaining restrictions on U.S. cattle and cattle products" (APHIS Factsheet 2013).

Interestingly enough, several months after the diseased cow was found in Alabama in 2006, a Kansas beef producer, Creekstone Farms, wanted to test every cow it processed for BSE, but it was blocked from doing so by the USDA, which restricted the selling of test kits under the auspices of an obscure law from 1913.
Around the same time the agency also announced a 90 % cutback of its own BSE testing program, which at that time was testing only about 1 % of the 100,000 cattle slaughtered daily (more than 36 million a year); the cutback reduced testing to only 110 cows a day, or around 40,000 animals per year, a change said to be saving the USDA about $35 million annually (Editorial, USA Today, 2006).
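The cited figures are internally consistent, as a quick back-of-envelope check confirms (a verification of the arithmetic above, nothing more):

```python
# Back-of-envelope check of the surveillance figures cited above.
daily_slaughter = 100_000
print(daily_slaughter * 365)               # 36,500,000 -> "more than 36 million a year"

tested_before = 0.01 * daily_slaughter     # "about 1 %" -> ~1,000 head per day
tested_after = tested_before * (1 - 0.90)  # a 90 % cutback -> ~100/day, near the 110 reported
print(tested_before, tested_after)
print(110 * 365)                           # 40,150 -> "around 40,000 animals per year"
```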



Stanley Prusiner, the scientist who first described prion disease and who himself developed an inexpensive, fast, and accurate test for BSE, maintains that the US should eventually test every cow slaughtered, as Japan is doing; after trying unsuccessfully to convince the Secretary of Agriculture to institute such testing after the first positive cow was found, he observed, "USDA scientists and veterinarians, who grew up learning about viruses, have difficulty comprehending the novel concepts of prion biology…. It's as though my work of the last 20 years did not exist" (Blakeslee 2003).

As of June 2014, a total of 229 patients with variant CJD, the vast majority now deceased, had been identified from 12 different countries, including 177 from the United Kingdom and 4 from the United States (CDC Fact Sheet 2014). Yet another kind of complexity related to this disease is displayed at the genetic level: essentially all vCJD patients so far tested have shown a particular genetic profile, being homozygous for the amino acid methionine at codon 129 of the human prion protein gene. This genetic profile may be associated with a shorter incubation time for the onset of the disease, leaving a concern that the disease could show up later in heterozygotes or in persons homozygous for valine at this locus, these two groups together making up a larger percentage of the exposed population than the methionine homozygotes (Belay and Schonberger 2005, 203–204; Collee et al. 2006, 106). This pattern can only be a part of the picture, of course, since the vast number of people homozygous for methionine who were likely exposed have not yet come down with the disease; their resistance to it may be in part a result of the "robustness" of their genomes, along with equally robust networks at higher levels of bodily organization.
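To see why the heterozygotes and the valine homozygotes together outnumber the methionine homozygotes, simple Hardy-Weinberg arithmetic suffices; the allele frequency used below is an assumed, illustrative figure, not one taken from the sources cited:

```python
# Hardy-Weinberg arithmetic for codon 129 of the prion protein gene.
# The methionine (M) allele frequency is an assumed, illustrative
# value, not a figure from the sources cited.
p_met = 0.6             # assumed M allele frequency
q_val = 1.0 - p_met     # valine (V) allele frequency

mm = p_met ** 2         # M/M homozygotes -- the genotype seen in vCJD cases so far
mv = 2 * p_met * q_val  # M/V heterozygotes
vv = q_val ** 2         # V/V homozygotes

print(f"M/M: {mm:.2f}  M/V: {mv:.2f}  V/V: {vv:.2f}")
# M/M: 0.36  M/V: 0.48  V/V: 0.16 -> the two so-far-unaffected groups
# together make up 0.64 of the population, more than the M/M group.
```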
An even greater appreciation of the complexity at the root of many human disease processes, moreover, has been opened up by the ongoing, detailed study of prion diseases. Stanley Prusiner suggested in 1984 that Alzheimer's disease might also be caused by a prion, an idea that was widely dismissed at the time. It has since been found, however, that the amyloid-β proteins, which aggregate within and kill nerve cells in Alzheimer's, are similarly capable of self-replication in a nonlinear, feed-forward manner (Schnabel 2011), and there is a possibility, though not as yet any evidence, that Alzheimer's and some similar neurodegenerative diseases might be infectious (Aguzzi 2014). As reported in the Schnabel article, it has been noted, in reference to such diseases, that "[i]n principle… there could be protein structures in our food and water that get into the brain [creating] disease-causing spirals of protein aggregation 'like the little bit of dust that seeds ice crystals in the windows'" (2011, S14). Documentary evidence supporting a parallel increase in the prevalence of Alzheimer's disease and the growth of beef production and consumption has been gathered by Waldman and Lamb (2004).

Whatever the eventual consequences of the BSE-vCJD outbreak may be, and whatever we eventually learn about other diseases that may prove to have a similar etiology, the long lag time between infection with the abnormal protein and the onset of clinical symptoms in vCJD—between 5 and 10 years for the first reported cases (Will et al. 1996) and possibly much longer in others—is a stark reminder of the difference between responses that develop slowly over time through complex processes and those that develop "acutely"—within minutes, hours or days—after exposure to a disease-causing agent. Unfortunately, the "acute" response still seems to set the paradigm for recognizing harm in the minds of many, leaving more slowly developing pathologies largely untested for and so unrecognized.



What the eventual outcome of this ongoing experiment will be for consumers in the US, and in the countries into which it will be exporting beef, remains to be seen. The lingering uncertainty about the future course of this human disease points to a need for continued monitoring, not only of the human populations that were significantly exposed to the pathogenic material but also of the technology itself, in this case that of intensive livestock rearing, on multiple levels.

The BSE/vCJD example, moreover, is richly illustrative of the reality we must face as we gain a better appreciation of complexity, and in particular the complexity of lifeforms and the larger systems in which they are embedded. No one could have predicted that hundreds of people would contract a fatal disease as a result of the changes that were instituted by the livestock industry over recent decades, and indeed many may still fail to see these changes as the conscious introduction of a new technology, preferring to see them, perhaps, as the inevitable result of the workings of "the marketplace," granting to the economic system we currently have in place a determinism that contemporary science is demonstrating is not applicable to the natural world. Such issues will be discussed briefly in the conclusion of this paper.

Intervention into Food Crop Genomes: The GMO Controversy in Light of Biological Complexity

The emergence of intensive livestock rearing practices is an evolutionary milestone in our human social organization that to date has captured relatively little attention beyond groups concerned about animal welfare; keeping large mammals nearly immobilized for much of their lifetimes and feeding them a diet in some ways dramatically different from the one they evolved to consume seems to have been perceived less as an "intervention" into a previously existing biological system than as the implementation of a new technology entirely within our human purview to develop, based on a simple model with presumably predictable results. Directly intervening in the genomes of plants (and animals) destined for human consumption constitutes a shift to a different level of manipulation, however, one that has awakened the concern of many in the public, and so it would seem warranted to examine some of the details of the technology involved, some of the regimes currently in place to regulate it, and what sorts of evidence have been gathered to date about its effects on biological systems.

With respect to the methodology of production, many proponents of genetic engineering like to claim that there is no fundamental difference between their methods and the intentional selective breeding of plants and animals for desired traits that has gone on in human societies for millennia. Harvard biologist Stephen Palumbi disagrees with this assertion, explaining several ways in which this new technology "differs enormously" from selective breeding, or "artificial selection" as it was called by Charles Darwin (Palumbi 2001). The yak, he points out—a good example of the "evolved complexity" of biological systems that Sandra Mitchell takes note of—is an animal magnificently suited to thrive in the high Himalayas, with "immense lungs—three times larger than those of similarly sized cows—to pull oxygen from the miserly air," as well as differences in red blood cell hemoglobin and in the structure of the fine blood vessels of the lungs that make it well adapted to the altitude.



Tibetans have altered them in many ways by selective breeding over the centuries, but no single insertion of a gene or a discrete suite of genes could have produced such an animal, says Palumbi; the multitude of adaptive traits that make the yak exquisitely suited to its environment are the result of selection acting on "whole organisms," animals in which many different structural and regulatory genes work in concert. Palumbi charges that inserting one or a few genes into an organism by "brute force"—forcefully introducing foreign DNA into cells using viruses, plasmids, and even "gene guns"—is like trying to create a Girl Scout rifle team simply by "dropping the guns on Main Street and hoping that everything sorts itself out." The next step in creating a genetically modified product, moreover, is also markedly different from the way selection proceeds. As Palumbi explains, with genetic engineering the very few viable forms (out of thousands that are not viable) resulting from the procedures are carefully cultured and then cloned to make many more copies of these most expensively produced products—in sharp contrast to the extensive "weeding out" that goes on with both natural and artificial selection, which acts on the abundant genetic diversity of each new generation of whole organisms, removing those with less functional phenotypes. "Biological engineers must admit that their profession differs fundamentally from civil engineering," Palumbi concludes, "because bridges don't evolve but plants do—often in unexpected ways" (ibid, B9).

The prevailing opinion among those responsible for producing genetically engineered food crops has been not only that their production process does not differ significantly from the processes of natural and cultural evolution that created previous generations of such organisms, but also that there can be no intrinsic differences between these two sorts of "products" themselves, and the regulatory regime established in the United States has essentially been based on this assumption. At least through the 1990s, it seems that the standard being used for studies on the safety of genetically modified (GM) foods—if studies were performed at all—was that of examination at the level of a reductive "compositional analysis"—quantification of amino acids, sugars, fatty acids, and certain other specific compounds present following separation procedures—if not at the even coarser grain size of "proximate analysis," determination of composition only in terms of the gross amounts of fat, protein, carbohydrate, fiber, ash and water. This rather loose standard for testing the products of genetic engineering was reportedly based on the notion of "substantial equivalence," as codified by the FDA and detailed in the Federal Register under regulation 57: 22984–23005, "Foods Derived from New Plant Varieties" (1992).[2]

[2] I say "reportedly," because electronic access to the Federal Register only goes back to 1994 (59 FR). After several hours of poring over materials accessed through the Electronic Code of Federal Regulations, http://www.ecfr.gov/cgi-bin/ECFR?SID=f0b278762ebba10bc6cfa008b5f05fe5&page=simple, I was able to bring up nothing under "FR 57 22984" or variations thereof; "substantial equivalence" brought up 876 hits, including Title 21, Food and Drugs, Part 312, "Investigational New Drug Application," and Part 1107, regarding tobacco products, but nothing on a near screen related to food or genetic modification. Getting more specific with "genetic modification," I found listings under Title 7, Agriculture, to include a section aimed at preventing the creation of new "plant pests" through genetic engineering (Part 340), and in Part 357, "Control of Illegally Taken Plants," a paragraph defining "artificial selection" as "The process of selecting plants for particular traits, through such means as breeding, cloning, or genetic modification" [emphasis added]. A search under "GRAS" brought up, in Title 21: Food and Drugs, §184.1, "Substances added directly to human food affirmed as generally recognized as safe (GRAS)": "(d) The food ingredients listed as GRAS in part 182 of this chapter or affirmed as GRAS in part 184 or §186.1 of this chapter do not include all substances that are generally recognized as safe for their intended use in food. Because of the large number of substances the intended use of which results or may reasonably be expected to result, directly or indirectly, in their becoming a component or otherwise affecting the characteristics of food, it is impracticable to list all such substances that are GRAS. A food ingredient of natural biological origin that has been widely consumed for its nutrient properties in the United States prior to January 1, 1958, without known detrimental effects, which is subject only to conventional processing as practiced prior to January 1, 1958, and for which no known safety hazard exists, will ordinarily be regarded as GRAS without specific inclusion in part 182, part 184 or §186.1 of this chapter." It goes on to say: "(f) The status of the following food ingredients will be reviewed and affirmed as GRAS or determined to be a food additive or subject to a prior sanction pursuant to §170.35, §170.38, or §180.1 of this chapter: (1) Any substance of natural biological origin that has been widely consumed for its nutrient properties in the United States prior to January 1, 1958, without known detrimental effect, for which no health hazard is known, and which has been modified by processes first introduced into commercial use after January 1, 1958, which may reasonably be expected significantly to alter the composition of the substance. (2) Any substance of natural biological origin that has been widely consumed for its nutrient properties in the United States prior to January 1, 1958, without known detrimental effect, for which no health hazard is known, that has had significant alteration of composition by breeding or selection after January 1, 1958, where the change may be reasonably expected to alter the nutritive value or the concentration of toxic constituents… [emphases added]." In other words, it seems to say that only if such compositional alterations are known or expected (which will not be the case, of course, if they are deemed beforehand "substantially equivalent" or sufficiently similar to what is already GRAS) will the substance be reviewed, and then it may either be affirmed as GRAS or subjected to further regulation.



It was said to be spelled out specifically in FR 57 22985: "In most cases the substances expected to become components of food as a result of genetic modification will be the same as or substantially similar to [emphasis added] substances commonly found in food such as proteins, fats and oils, and carbohydrates"—the regulation that "lies at the heart of the dispute around GMOs," according to Marie-Monique Robin (2010, 146). In other words, since the GM ingredients in our food, when examined at this very large grain size, seem to be "similar to" ingredients from the non-GM plants and animals we've been eating for centuries, these products have been pronounced, a priori, to fall into a category similar to that of food additives "generally recognized as safe" (GRAS), and therefore to require no further study before release into our diets.

The notion of "substantial equivalence" was apparently backed up—after the fact of its regulatory adoption—by three reports published in the Journal of Nutrition in 1996 as part of an assessment of the safety of glyphosate-tolerant soybeans (GTS—commercially known as "Roundup Ready" soy, able to withstand treatment with the broad-spectrum herbicide Roundup, both produced by Monsanto) in food and feed (discussed in Robin 2010, 171–177). I believe these are worth examining in some detail, given the concerns about uncertain consequences revealed by our growing knowledge of complexity.

The first, "The Composition of Glyphosate-Tolerant Soybean Seeds Is Equivalent to That of Conventional Soybeans," by Stephen Padgette et al. (1996), was a compositional analysis comparing two varieties of genetically engineered glyphosate-tolerant (GT) soybeans with a non-GT control, the specific nonengineered line from which the modified varieties were created.



Comparisons were made of total protein, fat, carbohydrates, fiber and ash, along with levels of specific amino acids, fatty acids, sugars, and certain other compounds, with "no significant differences" found between varieties. The report concludes that "except for the tolerance to glyphosate, the GTS lines are substantially equivalent to the parental cultivar and other soybean cultivars now being marketed" (Padgette et al. 1996, 714, emphasis added). As noted by Marie-Monique Robin (2010), toxicologist Marc Lappe obtained some data "omitted" from what was reported in the composition study by Padgette, data which did show some "significant" changes in the levels of protein and the amino acid phenylalanine between Roundup Ready soybeans and the controls. He and a colleague repeated the study, growing the GM and non-GM plants under similar conditions but spraying them with Roundup before harvesting according to Monsanto's instructions, only to discover later that the Roundup Ready plants in Padgette's study had apparently never been sprayed with Roundup, which he declared "completely invalidates the study" (ibid, 173). Padgette and colleagues apparently repeated the study, using plants that had been treated with the herbicide, and reported similar results in 1999.

In the second study reported in the Journal of Nutrition, "The Feeding Value of Soybeans Fed to Rats, Chickens, Catfish and Dairy Cattle Is Not Altered by Genetic Incorporation of Glyphosate Tolerance," by Bruce Hammond et al. (1996), feed was prepared from the same three lines and fed to groups of the different animals listed in its title; weight gain, feed consumption, milk production, and some other parameters were compared, all of which "were comparable" among the three varieties of soy. In the 4-week feeding study performed on rats, it was noted that the livers of "several" animals from both GTS and parental-line ground-soybean feeding groups appeared unusually dark brown on gross necropsy, but no microscopic examination was performed on the tissue to determine the reason for the strange coloration; pancreatic tissue was examined microscopically, showing some mild inflammation and acinar cell apoptosis, but the findings were said to be comparable in all groups. The final paragraph of this study notes that "[n]ormally, animal feeding studies are not used to evaluate the quality of new varieties of soybeans that are generated via conventional plant breeding…. Although the animal feeding studies provide some reassurance that no major changes occurred in the genetically modified soybeans, in reality, this was already established by the compositional analyses. It has been recommended that primary reliance be given to thorough compositional analyses to detect potential material differences for new genetically modified lines that may be developed (International Food Biotechnology Council 1990, U.S. Food and Drug Administration 1992 [a reference to Federal Register 57: 22984–23005])" (Hammond et al. 1996, 728, emphasis added).
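To make plain what this style of equivalence testing amounts to, here is a sketch of a per-analyte comparison; every name and number in it is a hypothetical stand-in rather than data from the studies under discussion:

```python
# Sketch of a per-analyte "compositional analysis" comparison (all
# analyte names and values are hypothetical stand-ins, not data from
# the studies discussed). Requires scipy.
from scipy.stats import ttest_ind

# Hypothetical replicate measurements (g per 100 g dry weight)
conventional = {"protein": [41.2, 40.8, 41.5, 40.9], "fat": [20.1, 19.8, 20.3, 20.0]}
gm_line      = {"protein": [41.0, 41.3, 40.7, 41.1], "fat": [20.2, 19.9, 20.1, 20.4]}

for analyte in conventional:
    stat, p = ttest_ind(conventional[analyte], gm_line[analyte])
    verdict = "no significant difference" if p > 0.05 else "differs"
    print(f"{analyte}: p = {p:.2f} -> {verdict}")

# Note the grain size: a table of bulk analytes compared line by line
# can come out "equivalent" while saying nothing about how any single
# protein in the sample is folded.
```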
The third paper, "The Expressed Protein in Glyphosate-Tolerant Soybean, 5-Enolpyruvylshikimate-3-Phosphate Synthase from Agrobacterium sp. Strain CP4, Is Rapidly Digested in Vitro and Is Not Toxic to Acutely Gavaged Mice," by Leslie Harrison et al. (1996), is even more revealing of the very large "grain size" that had been adopted for scrutiny, particularly in light of BSE-vCJD concerns.



Mice were given relatively high doses, by stomach tube, of an isolated glyphosate-tolerant enzyme, CP4 EPSPS, known to provide glyphosate tolerance in soybeans (and other plants) by continuing to function in the synthesis of aromatic amino acids even in the presence of glyphosate, which kills nontolerant plants by binding to the normal form of the EPSPS enzyme and shutting down the biosynthetic "shikimate" pathway. The experiment was conducted for only seven days, its extremely short length being justified in this manner: "[a]cute administration was considered appropriate to assess the safety of CP4 EPSPS because toxic proteins act via acute mechanisms"—as if no slowly developing diseases caused by proteins had ever been reported (ibid, 729, emphasis added).

It was also revealed that, due to the difficulty of isolating the particular enzyme produced from the DNA that had been incorporated into the soybeans, it being present at very low levels in the seeds, the "same" protein produced in quantity by a microorganism (the microorganism from which the inserted gene had been isolated, the CP4 strain of an Agrobacterium species) was used instead, it having been shown to be "a suitable substitute" on the basis of "structural and functional characteristics," even though the microbially produced form was known to differ from the plant-produced GT enzyme by having an extra amino acid, methionine. In other words, the actual protein subjected to testing was not the protein that would be consumed upon eating a product made from Roundup Ready soy, but a protein that was just "very similar" to it. It was further admitted in discussion that the CP4 EPSPS enzyme tested was only "26 % identical to soybean [non-glyphosate-resistant] EPSPS," demonstrating that there was quite a substantial degree of difference between these proteins at the level of primary structure—the linear sequence specifying which amino acids are directly hooked up with which others prior to the protein taking on its three-dimensional shape—although such divergence was noted to be "on the same order as the divergence among [various other] food EPSPS themselves" (738).

In the feeding study, mice were killed 8 or 9 days after feeding began, and "no significant differences" in weight gain or food consumption were found between experimental and control groups, although some "minor pathological findings" ("corneal opacity, kidney and pituitary lesions, and hydrometra of the uterus") were observed on gross examination, said to be "randomly distributed" among females in all groups (735–736); apparently no histological examinations were done, although "tissues were retained" (731).

Another part of the investigation was an informational analysis that showed no homologies between the amino acid sequence of the new enzyme and the amino acid chains listed in a database of toxic and allergenic substances, an analysis that was taken to rule out any potential toxic effects the new substance might have. In vitro simulations of digestive action, moreover, showed that it was broken down rapidly in simulated gastric fluid, but—if any of the new product should happen to pass intact further down the gastrointestinal tract—it might continue to be active in the lower gut, since it was broken down only relatively slowly in simulated intestinal fluid, showing 93–95 % of its enzyme activity present after 10 min and somewhat less than 6–9 % still present after 4 and a half hours (737).
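An "informational analysis" of this kind is, at bottom, a string-matching exercise, as the following deliberately naive sketch shows; every sequence in it is an invented placeholder, and real screening pipelines apply additional criteria, such as windowed percent-identity thresholds:

```python
# Deliberately naive sketch of the kind of "informational analysis"
# described above: scan a query protein for short contiguous stretches
# (here, 8-mers) that also occur in known toxins or allergens. All
# sequences are invented placeholders.

def shared_kmers(query, database, k=8):
    """Return (kmer, entry_name) pairs common to query and database."""
    query_kmers = {query[i:i + k] for i in range(len(query) - k + 1)}
    hits = []
    for name, seq in database.items():
        for i in range(len(seq) - k + 1):
            if seq[i:i + k] in query_kmers:
                hits.append((seq[i:i + k], name))
    return hits

toxin_db = {  # hypothetical entries, one-letter amino acid codes
    "toxin_A": "MKTLLVAGGALLSPQWFDKEENLYFQGHHLE",
    "allergen_B": "GSSGSSGRVLSPQWFDKALQEEANLYFQSNA",
}
query_protein = "MAQVSRICNGVQNPSLISNLSKSSQRKSPLSV"  # invented query sequence

print(shared_kmers(query_protein, toxin_db) or "no shared 8-mers found")
```

A screen of this sort can only flag dangers that resemble, in primary sequence, dangers already catalogued; it says nothing about hazards arising at the level of three-dimensional conformation.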
The Harrison paper, taken in combination with the two preceding ones, was said to "confirm the safety of the CP4 EPSPS protein" and to "support



the conclusion that GTS is as safe and nutritious as traditional soybeans currently being marketed" (ibid, 739). It was also interpreted as laying the ground for applying the same safety assessment to other crops containing the altered enzyme, like cotton and canola, on the basis of their similarly presumed equivalence to the genetically modified enzyme examined in this study.

Remarkably, Robin was able to conduct a three-hour interview in 2006 with James Maryanski, a microbiologist who had been the FDA's Biotechnology Coordinator for Food Safety and Applied Nutrition from 1985 to 2006 and who had been intimately involved in the establishment of the "substantial equivalence" doctrine. Interviewed shortly after his retirement, he let himself be drawn, reluctantly, into a discussion of the concept. She quotes his defense of it at some length: "What we do know, is that the genes that are being introduced currently, to date, using biotechnology, produce proteins that are very similar to proteins that we've consumed for many centuries… Using Roundup Ready soybeans as an example, this is a plant which has a modified enzyme that is essentially the same enzyme that's already in the plant. It has a very small mutation, so, in terms of safety, there's no big difference between that introduced enzyme and the one that already occurs in the plant" (James Maryanski, as quoted in Robin 2010, 146–147, italics in original).

One seemingly glaring problem with this line of thought should be obvious to anyone who considers it with prion disease in mind. The normal prion protein found in cell membranes (PrP) is apparently not just very similar but in fact identical, in its primary structure, to the disease-producing prion protein (PrPsc), since the significant difference between them is found in the tertiary structure, the overall three-dimensional shape of the two proteins. While our knowledge of protein chemistry is advancing rapidly, I think it is fair to say that we still don't know a great deal about what sorts of downstream consequences might result from "slightly different" conformations of other proteins, proteins that might be "very similar" in their primary structure but potentially significantly different in their final shapes, the three-dimensional shape of a protein generally being responsible for its particular type of functionality.

What seems an incontrovertible conclusion, at any rate, is that introducing foreign genetic material into, or otherwise altering, a stretch of DNA that codes for protein is likely to change the primary structure of that protein—since the linear order of nucleotide bases in DNA is translated directly into the protein's linear arrangement of amino acids—if only by "a little bit"; indeed, that is the point of making the intervention. This in turn risks altering the tertiary structure of the protein product, since tiny differences at crucial points of folding may lead to big changes in overall conformation.
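The first half of that claim, that the DNA sequence maps directly onto the protein's primary structure, is simple enough to show in code; the sketch below uses invented sequences and only a small excerpt of the standard codon table:

```python
# Minimal sketch of the point just made: the linear DNA sequence maps
# directly onto the protein's primary structure, so a single-base
# change changes a single amino acid. Only a few entries of the
# standard codon table are included, and the sequences are invented.

CODON_TABLE = {  # DNA codons -> one-letter amino acid codes (excerpt)
    "ATG": "M", "GAG": "E", "GTG": "V", "AAA": "K", "CCT": "P", "TGA": "*",
}

def translate(dna):
    """Translate a DNA coding sequence, stopping at a stop codon (*)."""
    protein = ""
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":
            break
        protein += amino_acid
    return protein

original = "ATGGAGAAACCTTGA"  # codons: ATG GAG AAA CCT TGA
mutated  = "ATGGTGAAACCTTGA"  # one base changed: GAG -> GTG
print(translate(original))    # MEKP
print(translate(mutated))     # MVKP -- a glutamate has become a valine
```

Whether such a one-letter change in the amino acid chain leaves the folded, three-dimensional shape of the protein unchanged is exactly what a sequence-level comparison cannot tell us.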



The problem of possibly altered protein conformation with pathogenic potential as a result of the genetic modification of food crops was not lost on Robin; she recounts the story of an employee who recalls suggesting to a Monsanto scientist that the cotton from an experimental field might be sold at a higher price because it was "only one gene's difference from the current premium variety":

"That's when he told me: 'No, there are other differences, transgenic cotton plants produce not only Roundup resistance protein but also other unknown proteins as a product of the manipulation process.' I was flabbergasted. There was a lot of talk at the time about mad cow disease—bovine spongiform encephalitis [sic], and its human counterpart, variant Creutzfeldt-Jakob disease, severe pathologies caused by macroproteins known as prions. I knew that our transgenic cotton seeds would be sold as cattle fodder, and I said to myself we hadn't even bothered to find out whether these 'unknown proteins' were prions. I told the Monsanto scientist about my concerns, and he answered that they didn't have time to worry about such things" (Robin 2010, 192).

The scientists employed by corporations like Monsanto may not have worried about such things in the late 1990s, when the above conversation took place, and it's not clear how much they or the regulatory agencies worry about them now. We have come a long way since antacid commercials could plausibly promote the image of two tablets dropping down a metal pipe to soothe a cast-iron stomach; if the three-dimensional structure of a protein can be preserved after being ingested and can cross the blood-brain barrier, it should be clear that our foods are not always broken down during digestion into their smallest component parts, "reduced" to single amino acids, fatty acids, simple sugars, and other small molecules—the grain size examined through compositional analysis. In light of what we are learning about complexity, on what level, and at what grain size, should we be looking if we are concerned about potential hazards in food derived from a new technological process? Examination of foodstuffs at the compositional level could not possibly show a difference between a steak from a healthy cow and a steak from a subclinical BSE-positive cow, for example, nor could it detect similarly subtle but unknown differences in the structure of components potentially having the ability to cause human disease in other ways, currently unknown and unpredictable. By 1999 the notion of "substantial equivalence" was coming under fire, called "a pseudoscientific concept,… a commercial and political judgment masquerading as if it were scientific…. inherently anti-scientific because it was created primarily to provide an excuse for not requiring biochemical or toxicological tests," thereby discouraging further scientific research (Millstone et al. 1999, 526).

But there are additional levels of complexity, beyond the structure and function of the new proteins that introduced genes add to recipient cells, that the GM food-related biotechnologies bring to our attention. We now know that most traits are produced by multiple genes working together, and that a vast complex of feedback loops comes into play at the levels of gene regulation, protein synthesis, cellular signaling, metabolic functioning, and so on within whole living organisms. The field of functional genomics investigates dynamic aspects of processes such as gene transcription and translation into proteins and their effects on organism development and functional integrity; its investigations have been enhanced by many new techniques developed over the last decade or so, some of which are also being applied to human gene therapy as well as agricultural biotechnology.
Among these techniques are targeted mutagenesis and targeted gene replacement, whereby mutations or insertions of alternative genetic material can be placed rather precisely



at selected sites within the genome, through the use of such agents as zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and, since 2013, the newer CRISPR (clustered regularly interspaced short palindromic repeats) technologies. Such interventions can also produce "off-target" insertions and mutations, however, disrupting essential protein-coding or regulatory genes, with damaging or lethal effects, such that "[w]hen ZFNs are contemplated for use in human gene therapy, much greater care must be taken to avoid potentially harmful unintended genome alterations" (Carroll 2011, 10).

Arpad Pusztai, a world-famous expert on lectins, naturally occurring insecticidal proteins found in certain plants, was funded in 1995 by the Rowett Research Institute in Aberdeen, Scotland, to research the health impact of some GM potatoes that had had the gene encoding snowdrop lectin inserted into their genome. To his surprise—and he was a strong supporter of genetic engineering at that time—he found that the GM potatoes and those of the non-GM line they were derived from were not "substantially equivalent" upon compositional analysis, and, worse, animals fed the GM line showed pathological changes in brains, livers, testes, pancreas and intestine, as well as a troubling proliferation of the cells lining the stomach, effects that did not result from exposure to the products of the lectin gene alone (Ewen and Pusztai 1999). (The firestorm that resulted when he published his findings will be discussed later in this paper.) In a 2009 interview, Pusztai had this to say about how he understands what happened to his potatoes, an understanding that applies to all of current GM technology: "Scientifically, these are the key words: insertional mutagenesis. When you are inserting the transgene construct, you are changing the whole genome. Anything can happen." (Anderson 2009, 23, emphasis added)

The hazards of insertional mutagenesis were first made clear in the world of human gene therapy when five of 20 patients treated for X-linked severe combined immunodeficiency disease (SCID) developed leukemia several years after treatment using a retroviral vector, with two eventual deaths (Hacein-Bey-Abina et al. 2003, 2010). Even though there are much more precise "targeting" technologies available today—likely far more precise than the old scattershot technologies that gave rise to the predominant GM crops in use today: soy, corn, cotton, and canola—the insertion of new genetic material can still result in "off-target" effects, as noted above. Nor will we fully know the organismic changes that even a very precise exchange of one gene for another will bring about once it starts to work in concert with the rest of the genetic orchestra; it does appear that the process of insertion itself brings about changes, some of which will be unanticipated. Pusztai reminds us that all the other parts of the potato plant, excepting the tuber, are naturally poisonous to mammals, and so, as he explains, "what we did is by splicing in a gene from the snowdrop plant, we disturbed the potato genome, and [the tuber] became just as poisonous as any other part of the potato plant. When you look at the stupid idea of 'substantially equivalent'… you can't say that [the genome] is substantially equivalent, because you change it" (ibid, 19).
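The notion of an "off-target" site, mentioned above, can also be made concrete. The sketch below is entirely invented: it scans a randomly generated toy genome for imperfect matches to a 20-nucleotide guide sequence, while real off-target prediction weighs mismatch position, chromatin context and much else:

```python
# Sketch of why "off-target" sites arise (my illustration; the genome
# here is random). We scan for sites that match a 20-nucleotide guide
# sequence imperfectly.
import random

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def near_matches(genome, guide, max_mismatches=3):
    """All positions where the genome matches the guide within tolerance."""
    k = len(guide)
    return [i for i in range(len(genome) - k + 1)
            if hamming(genome[i:i + k], guide) <= max_mismatches]

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(1_000_000))  # 1 Mb toy genome
guide = genome[123_456:123_476]  # the intended 20-nt target site

print(len(near_matches(genome, guide)), "site(s) within 3 mismatches")
# On this toy genome, usually only the intended site passes the filter;
# scale to a mammalian genome (~3 Gb) and the expected number of chance
# near-matches grows ~3,000-fold, which is where off-target cuts come from.
```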
Insertional mutagenesis was recognized early on as one of the "mechanisms by which food hazards may arise during crop genetic engineering," along with direct effects of "inserted genes and their expression products" and "secondary and pleiotropic effects of gene expression," which may include altered metabolic pathways and gene expression in tissues other than those expressing the transgene, although that article maintains that the same sorts of problems may arise from conventional plant breeding, which for decades has included the introduction of mutations by radiation and chemical agents as well as artificial selection of parental strains (Conner and Jacobs 1999).

There was a noted paucity of published studies aimed at examining the safety of GM foods until about 2006, after which time the number of citations in databases was found to have "dramatically increased" (Domingo and Bordonaba 2011). In their review of animal feeding studies in the post-2006 literature, Domingo and Bordonaba found two of 15 studies performed with GM corn that reported signs of hepatorenal toxicity, one of four studies with GM rice that showed some immunological changes in test animals, and two of nine using GM soy that found metabolic and cellular changes in experimental groups. Their review comes across as relatively even-handed: it reports "a certain equilibrium" between research groups finding the GM crops to be as safe and nutritious as products of conventional breeding and those raising concerns, notes that most of the studies examined were carried out by biotechnology companies, and concludes that the debate over the safety of GMOs "remains completely open at all levels" (ibid, 741). In contrast, a review of 12 long-term (90 days to 2 years) and 12 multigenerational feeding studies by Snell et al. (2012) concluded that "[t]he studies reviewed present evidence to show that GM plants are nutritionally equivalent to their non-GM counterparts and can be safely used in food and feed"; a number of the studies reviewed in this report did find significant abnormalities in experimental groups, but these were all discounted on the basis of "inadequate experimental design" or other "major flaws," and the value of performing studies of more than 90 days' duration was also put into question.

As of January 2015, over 300 scientists and other academics had signed the statement "No Consensus on GMO Safety" prepared by the European Network of Scientists for Social and Environmental Responsibility (ENSSER). It strongly objects to claims that there is a scientific consensus on the safety of GM foods; observes that studies showing signs of toxicity have not been followed up by efforts to replicate or refute their claims; points out that there are no epidemiological studies on the effects of human consumption of GM foods, noting the impossibility of conducting such studies in places like the U.S. where GM products are neither labeled nor monitored; and concludes that "the scarcity and contradictory nature of the scientific evidence published to date prevents conclusive claims of safety, or lack of safety, of GMOs" and that the scientific evidence regarding that safety must be "obtained in a manner that is honest, ethical, rigorous, independent, transparent, and sufficiently diversified to compensate for bias" (Hilbeck et al. 2015).

The 90-day rodent feeding trial recently included in EU regulations as a routine measure for authorization of new GM food and feed applications was itself criticized by Harry Kuiper as unnecessary and less informative than the more sophisticated "omics" technologies (proteomics, transcriptomics, metabolomics) currently available (Kuiper et al. 2013). Kuiper has been instrumental in crafting the "SAFE FOODS framework" (Konig et al. 2010) for assessing the safety of food products, which relies on just such a reductive analysis of GM crops without whole-animal studies; emphasizes the extent of natural variation in conventional crops according to these analytic criteria; seeks a change to the "mindset of optimizing the public health outcome rather than reducing risks" (2010, 1584) by promoting the "benefits" of genetically engineered foods as part of the "framing" of the issues; and proposes to weigh economic, social, and ethical concerns more heavily against scientific considerations than has been done in the past, though exactly how such concerns are to be delineated and by which "interest groups" is not made clear.

In addition to the potential problems generated by genetic modification itself, there are serious concerns about other aspects of the GM technologies. Glyphosate, the official active ingredient of Roundup, is the most heavily used herbicide in the US and increasingly around the world, largely as a result of the expanding use of glyphosate-tolerant GM crops. At the time of its introduction it was billed as a very safe chemical, since its mode of killing is attributed to "disruption of the shikimate pathway," a biochemical pathway not found in animals (Samsel and Seneff 2013). The pathway is present in gut bacteria, however, which, at yet another level of interaction, are being discovered to exist in "an integrated biosemiotic relationship with the human host," helping us by synthesizing vitamins, detoxifying xenobiotics, and playing a role in gut permeability and maintenance of our immune system (ibid, 1417), and there are indications that glyphosate's multiple direct effects on them may be significant. Myles (2014) expresses concern about changes in the microbiome at another level, moreover, noting studies showing that "functional genes, from both industrial and natural sources, ingested by animals could be internalized by gut bacteria," bacteria that could then "transcribe the engrafted genes into functional proteins, and the genetic changes could be inherited by offspring via microbiome transfer" (ibid, 9). He also notes that "an intact and functional industrial gene" was found in bacteria in the small bowel of patients with ileostomies, and considers the "hypothetical potential for transferring the production of harmful compounds to the microbiome in a manner that would circumvent gastric-acid inactivation" (ibid), a finding of possible concern in light of Harrison's results on this point.

Glyphosate has many other effects of potential concern, however. It suppresses the activity of important enzymes, and it may interfere with the transport of sulfate, which seems to be a factor in the development of autism and some other diseases. Samsel and Seneff make a case for glyphosate also possibly being associated with "gastrointestinal disorders, obesity, diabetes, heart disease, depression," as well as "infertility, cancer and Alzheimer's disease" (2013, 1416), and they conclude by speculating that it "may in fact be the most biologically disruptive chemical in our environment" (ibid, 1445).
In a stunning judgment issued as this article was going to press, moreover, the International Agency for Research on Cancer (IARC) Monograph Working Group, under the auspices of the World Health Organization, classified glyphosate as "probably carcinogenic to humans"; the evidence cited included DNA and chromosomal damage in human and animal cell culture and in living mammals, an increased risk of non-Hodgkin lymphoma in controlled studies of human occupational exposure conducted in several countries, and positive trends in the incidence of renal tubule carcinoma, hemangiosarcoma, pancreatic islet-cell adenoma, and skin tumors in rodent studies (Guyton et al. 2015). How this will affect the controversy over GM foods, given the millions of acres being sprayed worldwide and the unavoidable presence of glyphosate residues on food crops produced with this technology, remains to be seen.

Another recent study further complicates the picture: it seems that the adjuvants in the major commercially available glyphosate-based herbicides (compounds used to help the main ingredient penetrate cells, and usually presumed to be inert) may be toxic in themselves, synergistically increasing the toxicity of the commercial product beyond that of glyphosate alone (Mesnage, Bernay, and Seralini 2013). Its authors conclude that all pesticides should be examined in toxicity studies as mixtures, not as pure formulations of a presumed single active ingredient (a caution that might be construed as showing the need for a move away from reductionism), and they suggest, in addition, that "chronic toxicities of whole formulations" be "tested with mammals over a 2-year period," not just in the very short-run tests that have been done to check for acute toxicity. To do so, however, would imply "a complete shift in the concepts underlying chemical toxicology" (ibid, 127). Residues of glyphosate and its adjuvants are nonetheless being introduced into our systems when we ingest products of a glyphosate-tolerant crop variety, without benefit of regulation on the basis of such studies. So too is the insecticidal protein produced by Bacillus thuringiensis, which is engineered into Bt corn and the other Bt plants, the other major type of genetically modified crop now on the world market; since this insecticide is expressed systemically, some of it ends up in whatever part of the Bt crop we happen to take in as food.

As Pusztai points out, "This is an irreversible technology"; it is "irreversible and unpredictable. You don't know what the consequences will be. If you put a plant into the ground, that plant is a living thing. Through roots, you are communicating with the soil; through the leaves you are communicating with the air, with other organisms. You cannot look at it in isolation. This is a living thing, and that living thing is going to produce new DNA which gets into the ground, gets into the gut of animals and everything" (Anderson 2009, 21). Not only must living organisms be seen as wholes, they must be understood as parts of larger, ecosystemic wholes, in which large changes of other kinds have also been put into motion.

The above are cautionary considerations based on our growing appreciation of biological complexity and the uncertainty of the consequences when we make an intervention into biological systems. But what do we know, on the basis of empirical evidence, about the "benefits" we would stand to lose if we were to put an end to genetic engineering of organisms destined for human consumption? It is acknowledged that the alterations made to the major types of GM crops, which now occupy millions of acres around the world, were aimed not at improving "second generation" characteristics like nutritional value or vitamin supplementation, but rather at meeting "first generation" demands like herbicide tolerance and insect resistance (Fernandez-Cornejo et al. 2014, 1).
Despite these efforts, "yields of herbicide-tolerant or insect-resistant seeds" have not increased overall, and in fact "may be occasionally lower than the yields of conventional varieties" (ibid, 12), while the incidence of "superweeds" resistant to glyphosate and of insect pests resistant to Bt toxin is growing worldwide (ibid; Tabashnik et al. 2014; Gilbert 2013). Agreeing that many of the promised benefits to consumers and local growers have so far failed to be realized by genetic engineers, Pusztai notes that "there are no [GM] crops that can tolerate abiotic stress" (Anderson 2009, 22). In other words, the major accomplishments of genetic modification to date have been geared toward biocidal activity (the killing of other forms of life, plant and animal, that compete with or consume some of what is planted for our human benefit) rather than aimed at developing crops to tolerate drought, heat stress, increasing salinity, and so on, conditions that may increasingly hamper our ability to "feed the world" as the effects of climate change manifest themselves. What about "golden rice," a GM variety engineered to produce beta-carotene by the insertion of genes from the daffodil and a soil bacterium, in hopes of treating the Vitamin A deficiency that affects millions of poor people in developing countries? The first version apparently did not yield effective levels of the vitamin precursor, but substituting a gene from corn for the daffodil gene in "Golden Rice 2" has been claimed to increase the production of beta-carotene dramatically, and clinical as well as field trials are apparently underway, amid controversy (see Harmon 2013); what will come of them is yet to be seen.

The discussion in this section is intended to inject a better appreciation of the multiple levels of complexity surrounding the employment of the biotechnologies required to put "GM foods" on our tables. Unlike the BSE/vCJD example, there is as yet no human disease that has been officially recognized as linked to ingestion of such foods; one might ask, however, given the many factors at play in modern society and the lack of tracking or labeling in many countries, how such a disease might be detected. Considering the multiple domains in which subtle but significant change may result from human intervention (not only the biological systems of the plants and animals so modified but the larger ecological system and the systems of the organisms that consume them, including our own), it seems clear that the uncertainties surrounding the use of these particular biotechnologies are considerable and ineliminable. Moreover, the regulatory standards currently in place in the United States appear to be far too coarse-grained, with the employment of fine-grained analytic tools and long-term whole-animal studies failing to be broadly required to screen for potentially harmful effects that the use of such technologies, which include the effects of using an accompanying suite of biocides to produce foods as well as the effects of ingesting particular plant products, might be having.

The technologies are already widely used, however, and we are faced with the question of where we should go from here. Do we continue along the same trajectory followed so far and proceed with another round of escalation, increasingly developing plants with two, three, or more (up to as many as seven at last count) "stacked" traits (see Fernandez-Cornejo et al. 2014, 16–18) such that they can resist multiple different herbicides and insecticides, for example,3 and make an increasing share of the world's agriculture dependent on these additional chemicals? Shall we continue to tweak the genomes of organisms without examining the multiple downstream effects of such alterations in a detailed and conscientious manner and making the results of such examinations open to the public? Should GMO-derived foods, at the very least, be clearly labeled? How we should approach and attempt to answer questions such as these, regarding genetic engineering as well as other new technologies, is the subject of the last part of this paper. Before addressing it, however, another level of biological complexity needs to be examined.

3 "Stacked" transgenic plants have been reported to manifest proteins changed in ways falling outside the natural variability of the parent non-GMO landrace, accompanied by "major metabolic pathway alterations" and impacts on the expression of other genes in the recipient genome, in addition to differences from GM varieties containing only one introduced trait; see Agapito-Tenfen 2014.

Part 2: Complexity at the Level of Our Human Organization: How We Should Conceptualize Ethical Decisions Regarding New Technologies

Another Level of Biological Complexity: The Human Social Psychology Surrounding the Scientific Discourse on GMO Crops

Dr. Pusztai was interviewed on British TV in 1998, before his study was published, on the authorization of the director of the Rowett Institute; he did not provide details about his experimental findings, but he did admit to being concerned about the lack of safety tests for GMOs in general, and stated that, "as a scientist actively working in the field, I find it's very unfair to use our fellow citizens as guinea pigs" (Robin 2010, 182). A furor descended on him two days after the interview, at which time he was relieved of his job, put under a gag order, and had all documents and data associated with the study confiscated; worse, he was subjected to personal attacks and challenges to his professional reputation. Richard Horton, longtime editor of The Lancet, came to his defense, deciding to publish the study (Ewen and Pusztai 1999) and editorializing that "[g]overnments should never have allowed these products in the food chain without insisting on rigorous testing for effects on health." He later reported that he felt subject to "intense pressure… to suppress publication." The social pressures that were exercised to halt Pusztai's studies and to stop dissemination of his findings were eventually traced to "the highest level," communications from Bill Clinton to Tony Blair (Robin 2010, 185–187).

A similar tempest ensued over the work of Gilles-Eric Seralini roughly a decade later. After a GM corn was approved for use in Europe, Seralini and coworkers reanalyzed the data, obtained through court order, from an in-house 90-day rat feeding study submitted as evidence of its safety by Monsanto; the reanalysis showed evidence of liver and kidney disease in the experimental animals, leading them to declare that "it cannot be concluded that GM corn MON863 is a safe product" (Seralini et al. 2007). Five years after that, Seralini and his research group came out with an original study of another variety of GM corn (Seralini et al. 2012); rats were fed Roundup-tolerant NK603 transgenic maize over a 2-year period, in what he claimed was "the first life-long rodent (rat) feeding study" to evaluate the effects of such agents, since most such feeding studies have lasted only 3 months or less. Again, the team reported finding biochemical and histological signs of liver and kidney damage, as well as greater female mortality and an excessive development of large mammary tumors in females of the experimental groups. Color pictures of animals with large tumors no doubt added to the shock value of the
paper, and its publication provoked an outraged response within the biotechnology community. Detailed critical analyses followed, and the study was soon retracted, the publisher reportedly "[b]owing to scientists' near-universal scorn" even though it found "no evidence of fraud or intentional misrepresentation of the data" (GM Study Retracted 2013). It was eventually republished in an online journal, eliciting, among more vitriolic comments, a tweet of "Holy lumpy rats!" from another scientist (Republished Paper Draws Fire 2014). While the validity of Seralini's findings might be debatable, the negativity of the social feedback with which they were greeted cannot be denied.

Even before this last round of sparring, however, note was taken of the emotional vehemence with which reports of possible problems with GMOs were often attacked. A news feature in Nature tells the story of Emma Rosi-Marshall, a young biologist who reported finding "reduced growth and increased mortality" of aquatic insects feeding on residues of Bt corn in streambeds and concluded that "widespread planting of Bt crops has unexpected ecosystem-scale consequences" (Rosi-Marshall et al. 2007). Within weeks of her paper's publication, "complaints about the paper had rippled through the research community," according to Emily Waltz, author of "GM Crops: A Battlefield" (Waltz 2009). Rosi-Marshall, like some other scientists who have dared to enter this contested field and report their findings, became subject to the kind of attacks that "are launched from within the scientific community and can sometimes be emotional and personal; heated rhetoric that dismisses papers and can even, as in Rosi-Marshall's case, accuse scientists of misconduct"; in Rosi-Marshall's own words, "'The response we got — it went through your jugular'" (ibid).

On the basis of her interviews with both supporters of Rosi-Marshall and her critics, Waltz offers some insight into the social dynamics of the situation. She downplays the role of strictly self-interested thinking, claiming, rightly or wrongly, that "financial or professional ties to the biotech industry don't seem to be the impetus" driving the critics, though she admits such ties do exist. She emphasizes, however, the fact that "many of them feel strongly that transgenic crops are safe and beneficial to the environment and society" [emphasis added], quoting one who said "'[w]e have to get emotional'" in decrying the overregulation he believes is, for example, depriving the several hundred thousand children who go blind every year from vitamin-A deficiency of the transgenic "golden rice" that might have preserved their sight. On the other hand, she notes that "the emotional and sometimes harsh quality of some of the attacks strikes some scientists as strange"; these are the scientists who appreciate the role of studies reporting possible problems indicating the need for further investigation, and who "'don't understand the resistance to that notion'" (ibid). I suspect this resistance has deeper roots than any assurance that we can know that GMOs are safe on the basis of currently employed testing, given the level of uncertainty now acknowledged to pertain to such highly complex systems as our bodies-in-their-environment.

In a policy forum on "Forbidden Knowledge" ("the censorship of knowledge that is controversial, taboo, or politically sensitive") that was published in Science, Kempner et al. (2005) reported on their findings after interviewing 41 scientists regarding "experiences or knowledge with science that had been suppressed"; they found most of the constraints that were described were "informal or… self-imposed, reflecting social, political, and cultural pressures on what is studied, how studies are performed, how data are interpreted, and how results are disseminated." It seems that most of us, scientists included, are heavily influenced by what others are doing and by the kind of feedback they give us about what we do, even though this influence goes mostly unnoticed and unremarked at the conscious level. If we look beyond philosophy into studies of social psychology and related fields, we find accounts of the emergence of such processes as conformity, denial, whistleblowing, and scapegoating at the level of the human group. I think a better appreciation of such processes is badly needed, especially when urgent policy matters are at issue, since if the workings of such processes are not articulated they may have effects that operate without our collective conscious awareness or approval. In order to understand some of them (in a bow to complexity again) we must move beyond what has been an almost exclusive focus on individuals and begin to analyze our behavior as partly overlain by influences arising from our positions as members of groups.

What happened to Arpad Pusztai, Gilles-Eric Seralini, and Emma Rosi-Marshall when they "blew the whistle" on harmful effects of GMOs illustrates what frequently happens to whistleblowers. Pusztai, for example, was informed by the director of the Rowett Institute that "his contract had been suspended, he would be dismissed, and the research team would be dissolved" (Robin 2010, 182). Political theorist C. Fred Alford, studying personal accounts of many whistleblowers, found that "somewhere between half and two-thirds" of them lose their jobs (Alford 2001, 18); he notes, furthermore, that many of them are shunned by their social and professional groups and often subjected to personal attacks.

Why do whistleblowers do what they do, and why do they receive the responses that they get? As Pusztai explained to Robin, he answered as he did in his TV interview "because I thought it was my ethical duty to alert British society to the unknown health effects of GMOs at a time when the first transgenic foods were being imported from the United States" (Robin 2010, 181). According to Alford, whistleblowers are unusual in being unable to "double," as Robert Jay Lifton has termed the splitting off of a part of the self in order to deal with an ethically troubling situation; through doubling, "we are able to ignore our ethical qualms at work because our work self temporarily speaks for our whole self" (Alford 2001, 72). The reason it is so difficult to maintain one's ethical compass when others are looking the other way and saying nothing, and why most people fall short, is that to do so is "acting against an aspect of human nature." Alford quotes Frans de Waal in explaining why the underlying emotions are so powerful: "'Regardless of circumstances, chimpanzees, monkeys, and humans cannot readily exit the group to which they belong. The double meaning of 'belonging to' says it all: they are part of and possessed by the group'" (ibid, 68, quoting de Waal 1996, emphasis added).
This feeling of belonging may be experienced as pleasurable by highly gregarious animals such as ourselves, but the underside is that it can also be highly restrictive of certain behaviors and even certain thoughts, imposing limitations that lead de Waal to dub it "the social cage" (de Waal 1996, 166–170). Our being wired this way is an example of our contingent, evolutionary complexity, and it has served us well in maintaining group cohesion and functionality over the ages, but we must not let it immobilize us if there are strong indications that the current self-reinforcing functioning of many human social groupings needs to change.

At present there seems to be a "heavy sound of silence" in the scientific literature regarding GMO health effects, a phrase used in The Elephant in the Room by Eviatar Zerubavel to indicate denial, a refusal to acknowledge things that "actually beg for attention" (2006, 9). Kari Marie Norgaard describes similar processes preventing the inhabitants of Bygdaby, Norway, from explicitly acknowledging the evidence of climate change that they were starting to see all around them: "[s]ocial norms of attention—that is, the social standard of 'normal' things to think about—are powerful, albeit largely invisible social forces shaping what we actually do think about" (Norgaard 2011, 112). When I came across a copy of Nature a few years back featuring on its cover "GMOs: The Promise. The Reality," for example, I was excited; I wanted to see how this journal would cover "the heart of the matter" in the larger GMO debate, the issue of what these products are doing to our bodies, and perhaps even "the heart of the heart," the point that Robin homed in on, the notion of "substantial equivalence" and the ways in which this paradigm is being both challenged and defended. I turned to "A Hard Look at GM Crops" (Gilbert 2013), hoping to find an up-to-date, well-integrated article on the topic. I was disappointed: discussion of health effects of GMOs at the genetic/biochemical/physiological level was conspicuous in its absence. It seems that, among themselves, biotechnologists are coming to realize the implications of the complexity of the genome and problems like insertional mutagenesis (as Pusztai puts it, "this is now the accepted wisdom" (Anderson 2009, 20)), but "the social cage" is so far sealing their lips, at least with respect to published communications.

But if one can understand the reluctance of "team players" to voice doubts about what the group they belong to is doing even if they secretly harbor some, fearing negative feedback from other members of the group aimed at "keeping them in line," what sort of scientists would "denigrate research by other legitimate scientists in a knee-jerk, partisan, emotional way," as Waltz admits some of them seem to do (2009)? Robin documents in detail the "revolving door" that has for years granted free movement between positions of power within the biotechnology industry and positions of power in politics, and I do not doubt that financial interests have motivated much of what has happened. With respect to scientists, however, I believe it is again a matter of group forces coming into play, a motivation stronger even than money or status. The harshest critics of "negative" GMO research seem to be members of a certain subgroup of scientists: a small but vocal group who steadfastly remain loyal to one another and to the reductive beliefs that join them together, who often trained well before the complexity of the living world was appreciated and so may still wield much influence in biotechnology circles, and who will try to defend the mechanistic reductionist paradigm to the bitter end (Kuhn 1962) against what they see as a heresy that, if true, might call into question their life's work.
One can feel sympathy for these defenders of the faith, even as one shudders to think of the genies that those proceeding under their paradigm may already have let out of the bottle.

The silencing of discussions and the scapegoating of those who bring disconcerting information to light are emergent group processes likely to be of ancient origin, tracing back to our evolutionary roots as group-living, group-dependent primates, and investigation of their workings is an open-ended project. Simpler emergent patterns of human behavior, however, have been subjected to analysis by complexity theory, as explained by Len Fisher; the behavior of human crowds, for example, like that of flocks of starlings, can generally be explained in terms of a few simple rules of motion combined with feedback mechanisms that maintain proper spacing from the positions of nearest neighbors (Fisher 2009, 49–65). Fisher also recognizes epistemic phenomena that arise at the group level: an "informational cascade" is a "quorum response" resulting from positive feedback, as "each individual's likelihood of choosing an option increases steeply (nonlinearly) with the number of near neighbors already committed to that option" (85). The trouble with such an evolved, near-automatic response is twofold: the shared conceptual world of the whole group can float free of reality, as in cases of dangerous groupthink, whereby social pressures push group members into "a pattern of thought that is characterized by self-deception, forced manufacture of consent, and conformity to group values" (93); or, worse, the majority can be manipulated by a few unscrupulous individuals who hold the reins of power, a situation for which Fisher suggests the term "disinformational cascade" (87). Fisher cites Richard Feynman's conclusion that the 1986 Challenger disaster, a paradigm example of a bad decision in the face of uncertainty, was the result of groupthink within NASA management, and he cautions that "groupthink is everywhere" (99); his recommendation for better decision making is to enrich the process by drawing upon the diversity of perspectives in a group, such that each individual "[gets] out of the group environment for a while," reaches a conclusion through independent thinking, and then returns to the group to input that considered perspective into the collective deliberations (97–104).
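Fisher's quorum response admits a simple computational illustration. The following sketch is my own toy construction rather than Fisher's model, and its parameters (ring size, neighborhood, steepness exponent) are arbitrary; it merely shows how agents whose probability of committing to an option rises steeply and nonlinearly with the fraction of committed neighbors can be tipped into a group-wide cascade by a few early adopters, regardless of whether the option tracks reality.

    # Minimal sketch of an informational cascade as a quorum response:
    # each agent's chance of adopting an option rises steeply (nonlinearly)
    # with the fraction of near neighbors already committed. Toy model only.
    import random

    random.seed(1)

    N = 100           # agents arranged in a ring
    NEIGHBORS = 5     # neighbors visible on each side
    STEEPNESS = 2     # exponent making the response nonlinear

    committed = [False] * N
    for early_adopter in range(5):   # a handful of initial adopters
        committed[early_adopter] = True

    for step in range(1, 41):
        for i in range(N):
            if committed[i]:
                continue
            window = [(i + d) % N for d in range(-NEIGHBORS, NEIGHBORS + 1) if d != 0]
            frac = sum(committed[j] for j in window) / len(window)
            # Quorum-like response: negligible until several neighbors have
            # committed, then rising rapidly toward certainty.
            if random.random() < frac ** STEEPNESS:
                committed[i] = True
        if step % 10 == 0:
            print(f"after {step:2d} rounds: {sum(committed):3d}/{N} committed")

Nothing in the update rule consults the merits of the option; commitment spreads on positive feedback alone, which is what allows the shared view of the whole group to float free of reality.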

How might we take Fisher's advice and "push back from the table," reflect on the processes that seem to be operative in different domains, including that of our own social behavior, and then assess anew what our relationship should be to newly developing technologies such as those represented in the two examples above? Formulating my answer to this question entails introducing an important distinction that seems to have been missing from our ethical deliberations heretofore, a distinction that can only be appreciated by keeping in mind our propensity to engage in groupthink and to adhere to a "coherence theory of truth," rather than thinking independently enough to subject our beliefs to a rigorous test of their correspondence with reality, as is the goal within the sciences.

The Construction of Our Social Reality: The Crucial Ontological Distinction

When we consider the superindividual, sociopsychological processes that are effective in patterning our collective human activities, our attention shifts to another level of complexity, a level at which we can conceptualize what we humans are doing within the context of our wider reality. John Searle, a philosopher of mind and language, has adopted a perspective that captures some other emergent features discernible at this level of analysis, features importantly different from the ones discussed in the previous section. Another form of shared cognition, one quite unlike feelings of solidarity or of the need to "rein in" heretical thoughts, is our ability to use language and symbolize. Searle maintains that it is through the workings of such a process, operative society-wide through what he terms our collective intentionality, that we have constructed a shared conceptual world in which what counts as true (a "fact") is true simply because of its coherence with what others believe is true, in general accord with the coherence theory of truth. In The Construction of Social Reality (1995), Searle defends a realist ontology and a correspondence theory of truth, employing arguments that go well beyond the scope of this paper. I think it is safe to say that such is the conceptual framework largely taken for granted by most working scientists, however, and the one that I believe should frame the ethical discussion regarding our new technologies. And on a realist ontology, as Searle points out, there are things that don't really "exist" other than in our shared networks of beliefs.

We currently live with a considerable "metaphysical burden" (1995, 1), Searle remarks, because there seems to be a lacuna in our collective understanding of the world we live in, illustrated by the difficulty we face when we attempt to go from descriptions of the kinds of entities studied by science (things that have an existence independent of the way we attempt to represent them to ourselves, such as chemical compounds, biological organisms, and ecosystems) to descriptions of our conceptual constructions, things like money and the other entities that are treated as noun terms by disciplines such as economics and law but which have no independent existence outside the world of our representations. He attempts to overcome this apparent ontological discontinuity in our reality, seeking to "show the continuous line that goes from molecules and mountains to screwdrivers, levers, and beautiful sunsets, and then to legislatures, money and nation-states" (ibid, 41). The link arises out of our biological tendency toward group-orientation, combined with our ability to create and share meaningful symbols. Like a number of other social animals, we exhibit collective intentionality, the ability to cooperate as members of a group to carry out an action or share an attitude; a soccer team displays collective intentionality when it moves the ball down the field, as does a pack of hyenas when they bring down an antelope. In addition, we humans, along with some of the great apes and a few other animals, have the ability to impose a function upon an object; an orangutan and a human may both use a stick as a lever, for example, and a human can also impute a symbolic meaning to one, as when a scepter is seen to signify royalty. One of Searle's favorite examples of the construction of an aspect of our social reality is money: the strips of paper bearing variously colored ink markings have few physical uses in and of themselves, short of raw material for origami or starting fires, but when we collectively agree to impute an amount of "value" to them, giving them a certain sort of status that they would not have intrinsically, they become able to function in a powerful way within human society.
And with the seemingly endless potential for repeating this imposition of shared symbolic status, and thereby function, on virtually any starting material (including the various sounds and markings of human speech and writing), we have been able to build up
"institutions" of remarkable complexity. Once you have money, for example, you can, by following what Searle calls "constitutive rules," create entirely new forms of behavior having to do with borrowing and lending, the setting up of a banking industry, the development of a stock exchange, and on and on.4

4 The details of how we do this exceed the limits of this discussion, but Searle's general formulation of how it works is through the iteration of "X counts as Y in context C," such that paper strip X counts as money, Y, in context C, coming off a government printing press, an amount of money X' counts as loan payment Y' in context C', your bank's friendly lending office, and so on.
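The iteration described in this footnote can be rendered schematically. The sketch below is simply my own illustrative encoding of "X counts as Y in context C" (all names and rules invented for the purpose); its point is only to display how each imposed status can serve as the X of a further imposition, stacking institutional facts one atop another.

    # Schematic rendering of Searle's "X counts as Y in context C," iterated
    # so that one imposed status becomes the X of the next rule. Purely
    # illustrative; the rules and contexts are invented for this sketch.
    from dataclasses import dataclass

    @dataclass
    class StatusRule:
        x: str        # what the rule applies to
        y: str        # the status function it imposes
        context: str  # the institutional context C

    RULES = [
        StatusRule("paper strip", "money", "government printing press"),
        StatusRule("money", "loan payment", "bank lending office"),
        StatusRule("loan payment", "credit history", "banking system"),
    ]

    def counts_as(item: str, context: str) -> str:
        """Return the status Y that item X counts as in context C, if any."""
        for rule in RULES:
            if rule.x == item and rule.context == context:
                return rule.y
        return item  # no status imposed; only the brute fact remains

    item = "paper strip"
    for context in ("government printing press", "bank lending office",
                    "banking system"):
        status = counts_as(item, context)
        print(f"{item!r} counts as {status!r} in context {context!r}")
        item = status  # the new Y becomes the X of the next imposition

Note that nothing in the chain corresponds to any new physical object: each layer exists only so long as the rules themselves continue to be collectively accepted.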

Searle, as a philosopher of mind and language, presents quite a sophisticated analysis of how all this comes about, but what's important for our discussion here is the ontological distinction itself. It is not just a matter of my individual, subjective opinion, for example, that a certain piece of paper in my pocket is a five-dollar bill (assuming that it is), but rather the collective agreement (if rarely considered consciously) of the vast majority of members of my society that it is a five-dollar bill, so the fact that it is a five-dollar bill seems to be an "objective" fact, independent of my volition. In Searle's terminology, such a fact would be epistemically objective. The mode of existence of that "five-dollar bill," however, would be ontologically subjective: were it not for us all continuing to believe in its being a five-dollar bill, the "five-dollar bill" would simply be a strip of paper. Money, in other words, although it may be instantiated in any substance (gold, paper, electric patterns on plastic), is a conceptual "object" that only emerges at the level of our human social reality, and in fact only by means of our human cognitive ability to abstract and symbolize, to let one thing "stand for another" and thereby "function" in certain ways within society as a result of its added symbolic status. There may be logical and mathematical rules we follow that define "how to do things with money," but these rules have been constructed by us; they are not things that can be studied by science (though one may study the emergent psychosocial processes that structure how we behave when operating in a context where people accept a symbol that functions in the role of "money"). In order for money to exist, and for all the rest of the conceptual superstructure of modern economics that has been constructed upon the assumption of the existence of money to exist, there must be living human beings who continue to share a belief system in which such entities exist, in their ontologically subjective manner. And of course, in order for there to be human beings who share that belief system, the complex biological systems that are, and that support, those human beings must exist, and exist in a dynamic state that remains within a certain "basin of attraction," because those living systems are ontologically objective. This important distinction is what I believe has been left out of many of our ethical discussions, including those surrounding new technologies.

The Precautionary Principle: What it is Intended to Protect, but Frequently does not

"We live in one world," Sandra Mitchell maintains; "[e]ven though scientists study different aspects of the one world, fundamental particles, chemical reactions, biological development, cosmological evolution, and so forth, the fact is that they
are all studying the same world" (Mitchell 2009b, 23), a state of affairs that prompts her to call for what she terms "integrative pluralism" (Mitchell 2003) in the philosophy of science, a way of utilizing multiple levels of analysis in our attempts to conceptualize that highly complex world, an approach attempting to show, in a vein similar to Searle's, "the continuous line" between molecules and money. On Searle's definition, "[r]ealism does not say how things are but only that there is a way that they are," independent of our representations (Searle 1995, 155); it is up to science, working within a correspondence theory of truth, to attempt to inform us about "the way that they are."

At the heart of what is known as "the precautionary principle," I would maintain, lies an intimate connection to this "way that things are," insofar as this means the way that living systems maintain themselves fundamentally "in the living state" (Smith 2013, 211–216) and that individual organisms strive to maintain themselves in the state of "being in health," their parameters remaining within a certain basin of attraction, fashioned by evolution, that is characteristic of the species. This principle has been formulated in many different versions, both negatively and positively; the 1992 Rio Declaration on Environment and Development states "[w]here there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation," whereas the 1998 Wingspread Statement on the Precautionary Principle holds that "[w]hen an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically" (Steel 2015, 1). When concerns about "threats," "damage," or "harm" are raised, there must be a "something" that is the target of these threats, with a "way that it is" that can be altered for the worse. That "something" is our ontologically objective reality, the living world that includes and maintains us, in the big picture and the small. I would argue that all the different formulations of the precautionary principle, in their varying strengths, are intended to protect the current, living, health-seeking state of the organisms and ecosystems that make up this reality, to keep it functioning optimally, and to rein in human actions that might interfere with its self-maintaining processes.

But the idea that there might be such a "way that they are" ascribed to living systems, or that it might be something that we would value and want to protect over and above other concerns, seems to be exactly what is challenged by Russell Powell, in an essay entitled "What's the Harm? An Evolutionary Theoretical Critique of the Precautionary Principle" (2010). Powell maintains that "a common feature of the Principle, especially as it concerns the modification of organisms and ecosystems, is a prima facie preference for the regulation of activity that proposes to intervene in natural living systems" (Powell 2010, 182), but he argues that such regulation would only be warranted if the natural order, organisms and ecosystems, "(1) consisted of optimal equilibria that (2) tend to coincide with what humans value, and (3) are complexly configured (causally opaque) and highly sensitive to perturbation" (192), which he claims are "assumptions about the causal structure of the world that are at odds with contemporary evolutionary theory" (182). In so characterizing the "default epistemic stance" of the precautionary principle, however, he constructs a
straw man, mischaracterizing the kinds of emergent wholes that are being revealed by the scientific study of complexity. The notion of "equilibria" implies a static nature to living systems that belies their inherent dynamism, and hence is not a feature that contemporary scientists would defend, while the term "optimal," which I would adopt if understood dynamically, seems to have been chosen for its implications of conscious engineering, in order to contrast it with the satisficing activity of evolution, which Powell charges with producing a "sheer ubiquity of inferior design in nature," a nature riddled with "imperfection and irrationality" (194); Powell thus rejects assumption (1). Moreover, he points out that "[w]hat is fitness enhancing for one organism may be highly destructive to others with which it interacts strategically" (196), and so concludes that the products of natural selection, in words attributed to Julian Huxley, may be "'just as likely to be aesthetically, morally, or intellectually repulsive to us as they are to be attractive'" (195). Powell thereby claims to have defeated assumption (2), although on this point he seems to have made a subjective judgment likely to differ from the aesthetic and moral sensibilities of many. Finally, since "recent work in molecular developmental biology has demonstrated a surprising degree of 'internal' tolerance to genetic perturbation" (apparently the sort of robustness discussed earlier in terms of the degeneracy of networks linking genes, proteins, biochemical processes and the like), he concludes that "the trajectory from genotype to phenotype is not nearly as brittle as it was once thought" (196), and hence assumption (3) founders as well. The precautionary principle fails, on Powell's telling, because its proponents have an "idyllic" view of "the causally untidy realm of biology" (200–201), a belief in something that doesn't really exist. Moreover, he charges, adhering to stronger versions of the precautionary principle that prevent us from intervening in natural systems "can lead to quietism in the face of imminent moral disasters brought about by a failure to permit" (184). Powell does not attempt to characterize what might constitute such "imminent moral disasters," or what sorts of things he is concerned to protect from them, but one may assume that they must be of an economic nature.

Surely Powell is throwing the baby out with the bathwater here, on the basis of what seems to be a rather superficial, poorly integrated, and in some ways simply mistaken version of the contemporary scientific picture of reality. It is true that the vision of the natural world being disclosed by contemporary science is far from a naive, "static" equilibrium model, and that many living systems are being found to possess a surprising degree of robustness. On the other hand, there is a coherent integrity to what complexity theorists are describing at their different levels of analysis, a dynamic state of wholeness with a "way that it is" which is, in Searle's terminology, ontologically objective, and which can be impinged upon by human actions in ways that can have unpredictable and sometimes quite damaging effects. Eric Smith describes a "conserved core network" of biochemical pathways "responsible for the conversion of inorganic carbon and nitrogen sources into biological molecules in all ecosystems on Earth," an organized, globally emergent system that "may constitute the oldest fossil on Earth" (Smith 2013, 204–205). Weiss and Buchanan observe, on the other hand, that, as organisms, we're not "just the Krebs cycle writ large," because "shared metabolism does not account for the diversity of life and its evolution, which creates many emergent levels of organization that, beyond the assemblage of their parts, one might quip, have a life of their own" (2011, 767, emphasis added). Stuart Kauffman makes a point of recognizing the existence of organisms as "Kantian wholes," systems wherein, as described by Kant, "the parts exist for and by means of the whole, and the whole exists for and by means of the parts," and he maintains that the functions of the parts that sustain these wholes, and their agency, their ability to act and evaluate in their own self-maintenance, be they bacteria or human beings, are "real in the universe" (Kauffman 2013, 168–169).

But perhaps the most salient evidence, both for the existence of a "way that things are" and for the fact that our human activities can alter it such that it might come to be another "way," a way that might be harmful to our own lives, comes from the work of the Earth Systems theorists who are studying climate change. Although the biogeophysical systems of the planet are both extremely dynamic and possessed of their own forms of robustness, a growing worry is that, if we continue on along the same trajectory we have been pursuing, not only with respect to the emission of greenhouse gases but also in regard to species extinctions, the collapse of food webs, our interference with the nitrogen cycle, and so on, we will shift the Earth System out of its 11,700 years of Holocene stability and into another state altogether, a state that may display some markedly different parameters, some of which may be rather inhospitable to our form of life. According to Steffen et al. (2015), "[t]here is increasing evidence that human activities are affecting [Earth System] functioning to a degree that threatens the resilience of the ES—its ability to persist in a Holocene-like state in the face of increasing human pressures and shocks." The resilience of a system has been defined as the capacity of that system to maintain itself upon exposure to a certain amount of disturbance "so as to still retain essentially the same function, structure, identity, and feedbacks" (see Folke et al. 2010, 1). In the language of dynamic systems theory, however, a complex system can have multiple attractors, such that "a perturbation can bring the system over a threshold that marks the limit of a basin of attraction or stability domain of the original state," something that is "qualitatively different from returning to the original state" (ibid, 1–2). It is crucially important, in terms of maintaining our own lives and those of subsequent human generations, that the dynamic state of the Earth System not be shifted out of its current stability domain, nor, presumably, the state of our living human organisms out of a health-maintaining domain. Current paradigms of economic development, however, and the behavioral patterns that support and are supported by them (the impetus of which is toward ever more "growth" in both human numbers and per capita consumption) are serving to push the planetary system, not away from a fanciful "equilibrium" position, but rather across the stability landscape toward a different basin of attraction (see Rockstrom et al. 2009; Steffen et al. 2011; Barnosky et al. 2012).
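The dynamical-systems vocabulary here can be made concrete with a minimal numerical sketch. The system below is a generic bistable toy model of my own choosing, not one of the Earth System models cited: dx/dt = x - x^3 has stable attractors at x = -1 and x = +1, separated by an unstable threshold at x = 0, so a state resting in one basin absorbs small perturbations but is carried into a qualitatively different state by a large one.

    # Minimal sketch of basins of attraction in a bistable system:
    # dx/dt = x - x**3 has stable attractors at x = -1 and x = +1,
    # separated by an unstable threshold at x = 0. Generic toy model.

    def settle(x: float, dt: float = 0.01, steps: int = 2000) -> float:
        """Integrate dx/dt = x - x**3 by Euler steps; return the end state."""
        for _ in range(steps):
            x += (x - x**3) * dt
        return x

    resting_state = -1.0  # the system sitting at its original attractor

    print(f"undisturbed:        settles at {settle(resting_state):+.3f}")
    print(f"small perturbation: settles at {settle(resting_state + 0.8):+.3f}")
    print(f"large perturbation: settles at {settle(resting_state + 1.2):+.3f}")

Returning to the neighborhood of x = -1 after the small shock is resilience in Folke et al.'s sense; settling at x = +1 after the large one is the qualitatively different outcome of being pushed across the boundary of the original basin of attraction.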
Facing the possibility of such a shift resulting from our continuing and escalating intrusion into "the way things are," Folke et al. lament: "Alas, resilience of behavioral patterns in society is notoriously large and a serious impediment for preventing loss of Earth System resilience" (2010, 2, emphasis added).

By making such an observation, Folke et al. are putting us humans in the picture in such a way that our collective engagement in "making the social world" (Searle 2010) comes into focus. And, I would submit, on such a picture the crucial ontological distinction between the biospherical integrity of the Earth and the entities which lie at the heart of contemporary economics comes into view as well. However, such a distinction seems not to be marked by most current philosophical defenses of the precautionary principle, of which I will take that of Daniel Steel (2015) as an example.

Steel tackles an argument against the precautionary principle (PP) that he refers to as "the dilemma objection"; according to this objection, weak versions of the principle, holding that "uncertainty does not justify inaction in the face of serious threats," are trivially true, whereas strong versions are incoherent because "environmental regulations themselves come with some risk of harmful effects, and hence PP often precludes the very steps it recommends" (ibid, 3, emphasis added). To examine some of the potential harmful effects of such regulations, Steel considers seven examples of "excessive precaution" provided by Marchant and Mossman, whose judgment that the precautionary principle may be "the most reckless, arbitrary, and ill-advised" new concept in environmental policy is displayed alongside the two formulations of the principle given earlier (Steel 2015, 1). These examples range from a never-implemented proposal to ban the use of saccharin, to a "voluntary moratorium" imposed by the FDA on silicone breast implants, to restrictions or bans on the use of Bt corn in certain European countries, ostensibly resulting from concerns about its toxic effects on monarch butterfly larvae (Steel 2015, 74–81). In comparison, Steel details fourteen "late lessons from history" assembled by the European Environment Agency, cases of failures to heed warning signs and thus of failures to act with precaution, ranging from the collapse of a fishery with continued overfishing, to lung diseases resulting from over a century of failure to restrict people's exposure to asbestos, to the unfortunate deaths from vCJD that followed upon the UK's decade of ineffective measures against BSE (ibid, 71–73). Most of the harms in the examples of Marchant and Mossman, harms attributed to the regulation of human behavior rather than to technological intervention into natural systems, are economic in nature, and Steel judges this set of harms overall to be of lesser severity than those detailed by the EEA, with the possible exception of the ban on Bt corn, with respect to which he acknowledges not only potentially more substantial economic harms but the possibility that a need for increased use of externally applied insecticides might pose an increased risk of cancer among farmworkers (86). Steel goes on to observe that all 21 of the examples "involve a trade-off between short-term economic gain for an influential interest against a harm that was uncertain or distant in terms of space or time" (82), and he emphasizes the contrast between "readily apparent commercial benefits" and harms that are less immediate, more difficult to characterize, and more likely to be suffered by the less powerful in society. He fails to articulate, however, the important ontological difference between these two sets of harms, even as he singles out a potential cancer risk (harm to a biological system) as worthy of a different level of concern.

After presenting these examples, Steel ultimately rejects the notion, which he acknowledges some might embrace, that "the whole purpose of PP [is] to prioritize environmental over economic harms"; this rejection, he explains, is based on the grounds that such a "lexical ordering leads to absolutism" (84). In response, I would point out that, yes, the distinction between something ontologically objective, in Searle's sense (something that exists independently of our beliefs and that can be studied by science), and something ontologically subjective (something that is entirely dependent upon a belief system collectively accepted by human beings and that can be changed by human decision making) is, in a sense, "absolute." The "second horn of the dilemma" that Steel examines is constructed so as to obscure the significantly different levels of ontological organization that prevail between the economic effects of a regulation of our human activities and the physical and biological effects of a new technology once it impinges upon a complex living system; it ignores the fact that there is a difference in what is harmed as well as in how it is harmed. Steel does point out that regulations "can be rescinded through political or legal actions" whereas the dissemination of material entities cannot be reversed by the act of changing our minds (81, emphasis added), but he fails to emphasize the importance of this difference in kind, of both process and result.

Moreover (and unfortunately a more thorough consideration of this point lies beyond the scope of this paper), Steel evidences a willingness to accept certain assumptions of the current economic system that appear to play a central role in perpetuating some of the most ecosystem-destabilizing collective activities we are engaging in, from the burning of fossil fuels to the deforestation of land masses, without questioning the ontological basis of their formulation. Despite expressing some misgivings about the economic practice of "discounting the future" because of its implications for future human generations, Steel avoids challenging the notion directly, as one might do by, for example, insisting on a need for intact living systems whose dynamic stability is not time-limited; instead, he rejects only the "pure time preference" that would "discount the future simply because it is the future" (121), a maneuver that spares him from an outright rejection or prohibition of a feature that functions as a central motor maintaining our present economic reality. He notes, rather, that, "[a]ccording to the descriptive approach, the future should be discounted by a factor that reflects a rate of return on investment, which is regarded as an indicator of the extent to which people actually discount the future in comparison to the present" (120). One question that might be raised immediately, if one is defending the precautionary principle, is why a subjective propensity to "eat your cake now" without worrying much about "having it tomorrow" should set the standard we adhere to. A somewhat different question, however, concerns why such a propensity should be taken to justify the reification of a number (the origins of which remain obscure) into a "cause" that algorithmically "creates something out of nothing" by the mere operation of multiplication.
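The arithmetic at issue is easy to exhibit. The sketch below uses standard exponential discounting with illustrative figures of my own (the rates and the million-dollar harm are not taken from Steel); it shows how the present value of a fixed future harm shrinks geometrically with time, so that modest-looking discount rates all but erase harms a century away.

    # Minimal sketch of exponential discounting: the present value of a
    # fixed future harm under several discount rates. Illustrative only.

    HARM = 1_000_000.0  # a harm valued at one million dollars when it occurs

    def present_value(future_value: float, rate: float, years: int) -> float:
        """Standard exponential discounting: PV = FV / (1 + r) ** t."""
        return future_value / (1.0 + rate) ** years

    for rate in (0.01, 0.03, 0.05, 0.07):
        for years in (10, 50, 100):
            pv = present_value(HARM, rate, years)
            print(f"rate {rate:4.0%}, {years:3d} years out: ${pv:>12,.2f}")

At a 7 percent rate, a million-dollar harm a century hence weighs in at roughly a thousand present-day dollars: the multiplication runs just as smoothly in reverse, conjuring present "value" away, whatever the ontological status of the thing harmed.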
The ontological difference between what green plants produce in the real world—something quite tangible, even if estimated abstractly in terms of, e.g., the "net primary productivity" (NPP) of the solar energy they trap globally for use by other organisms over the course of a year—and what is "produced" by the entirely abstract calculation of compound interest, which in itself creates absolutely nothing in a direct, physical sense, should be apparent, but unfortunately it is not. Later in Steel's discussion of discounting, moreover, we find the statement that, "[i]n the context of climate change economics, the significance of [the growth rate] for discounting is that, if [the growth rate] is positive, then future generations will be wealthier than current ones and, consequently, better able to pay for reductions in GHG emissions and adaptations to climate change" (124). That context itself stands unquestioned, as does the definition of "wealth" (is it to be measured in terms of a quantity of symbols—dollars or some other currency—or in terms of something concretely of value in our lives?), while the difference between our ceasing, as a species, to emit GHG and some of us eventually being "able to pay for reductions" in emissions (pay whom? reduce how?) eludes his analysis. What, exactly, "making money" translates into in the real world of organisms and ecosystems is left unexplored, while its ability to "weigh" heavily against actual harms to these things goes largely unquestioned.

Economics is not a science. Science tries to represent that which exists independently of our representations, but the entities at the heart of modern economics are representations themselves, representations that do not "bottom out" on something ontologically objective (Searle 1995, 55) in the way that those of science do.5 Writing in the aftermath of the 2008 financial panic, Searle warns us, "[i]t is, for example, a mistake to treat money and other such instruments as if they were natural phenomena like the phenomena studied in physics, chemistry, and biology. The recent economic crisis makes it clear that they are products of massive fantasy. As long as everyone shares the fantasy and has confidence in it, the system will work just fine. But when some of the fantasies cease to be believable,… then the whole system begins to unravel" (2010, 201). This ontologically subjective system of beliefs may seem to work just fine, that is, until following its dictates drives our collective human project into ontologically objective biological reality hard enough to seriously alter its state, the "way that it is" that is required for our healthy living. It is the resilience of this system of beliefs, and of the human actions it produces, that is lamented by Folke et al. above. At present, we are behaving as though this socially constructed system functions just as deterministically as complexity science is showing us natural systems do not; but, as natural systems ourselves, we are not bound to this choice of behavior.

As a contemporary defense of the precautionary principle, Steel's position seems to show that the approach is unable to challenge the many problematic assumptions of economics by calling the ontological cards on it. The "SAFE FOODS framework" advanced by Kuiper and others (Konig et al. 2010), for example, which advocates explicitly introducing economics into the risk analysis of foods, should be expected to acknowledge the ontological grounding of that discipline, but such a demand does not, as yet, seem to be forthcoming in current discussions of precaution. There may, therefore, be a need to take another approach to evaluating our technologies, be they the older, established fossil fuel technologies or emerging ones like biotechnology, nanotechnology, and robotics (see Joy 2000).

5 There can, of course, be a science of how we humans act given the fact that we accept the entities that populate the belief system of modern economics.
The Advantages of Taking the "Social Experiment" Approach to Conceptualizing Ethical Dimensions of Our Technologies

The concept of a "social experiment" appears to trace back to a paper entitled "Society as a Laboratory: The Social Risks of Experimental Research," by Krohn and Weyer (1994), who write:

In modern science, there is an increasing tendency to extend research processes and their related risks beyond the limits of the laboratory or other such institution, and directly into the wider society. To greater or lesser extents, these moves are experiments…. In keeping with at least the German way of viewing them, they are declared to be the 'implementation of verified knowledge', and are justified on the basis of non-scientific (for instance commercial) interests. (Krohn and Weyer 1994, 173)

The concept has been clarified and extended by van de Poel (2009, 2010, 2013), who claims that "focusing on technology-as-a-social-experiment would shift the debate" away from the "inherent" ethical acceptability of a particular technology to "questions about the acceptability of social experiments with the technology," thereby making it "an approach for dealing with the uncertainties, and potential hazards and benefits, of new technologies" in a way that would avoid some of the difficulties of approaches appealing to, e.g., risk-cost-benefit analyses or the precautionary principle, since such approaches typically require more knowledge than we have available about the consequences of employing many of our new technologies (2013, 352), given the complexities both of the technologies themselves and of the social, biological, and ecological systems to be affected by them.

Instituting a significant change in the life processes of certain animals that are later fed to humans on a large scale, for example, could, in retrospect, be considered a "social experiment," since the risks of employing intensive rearing practices were extended "directly into the wider society," largely for the "commercial interests" of livestock producers. The change seems not to have triggered the precautionary principle to any significant degree because its consequences were unknowable beforehand, and the obvious "short-term economic gain" was apparently seen to outweigh those consequences as they began to come to attention early on. Unfortunately, it was a "social experiment" that did not end well for some of its participants in the countries where the BSE outbreak occurred.

Van de Poel proposes four "prima facie moral conditions" to help judge the acceptability of conducting such experiments: (1) the absence of alternative methods for gaining knowledge regarding how the technology will actually function and what the consequences and likely hazards of employing it might be; (2) the controllability of the experiment, such that emerging hazards may be discovered, monitored, constrained, and contained so as to prevent large-scale or irreversible harms; (3) informed consent, such that all human subjects directly involved in the experiment have attained a sufficient understanding of potential risks and benefits, have voluntarily agreed to take part in the experiment, and have a reasonable means of ending their participation in the experiment should they so decide; and (4) a weighing of the proportionality of expected benefits and credible hazards, a criterion


that admittedly opens the door to a "more speculative" analysis unless it "sets an a priori burden on the proposer of the societal experiment to quantifiably show that based on current best knowledge the benefits outweigh the risks" (adapted from van de Poel 2010, 107–110). He utilizes these four conditions to evaluate a specific change, resulting from a new technology, that has already been imposed upon our wider society: the dissemination and use, brought about by active, unregulated marketing, of sunscreens containing nanoparticle-ized titanium dioxide (TiO2). With respect to (1), the absence of alternative methods for gaining knowledge6 about likely hazards of the technology, van de Poel notes that he has examined more than 150 laboratory studies of this substance (which "have investigated dermal penetration, inhalation, photo-activity, and cellular damaging") (2010, 108), discovering not only that "some questions remain on these topics" but that other sorts of relevant studies, such as those of oral ingestion, remain largely unperformed, and he concludes that "there seems to be no good reason not to employ these alternatives first" (ibid). Condition (2) seems glaringly unmet, since the sunscreens have been marketed widely with no attempt at post-market monitoring of possible adverse health effects resulting from their use, nor any effort to "contain" the particles after use so as to avoid their washing off and disseminating further into the ecosystem. Obtaining the "informed consent" (3) of individuals seeking to shield their skin from the harmful UV rays of the sun could only be achieved by informing the public about the nature of nanoparticles in general (chemical compounds formulated in this particular size range often exhibit properties quite different from those they manifest at the macro level) and about what is known so far about these nanoparticle-containing preparations in particular (for example, that some nanoparticles of titanium dioxide are able to form reactive oxygen species (ROS), which can have damaging effects on living cells, on their surfaces in response to UV light; 104, 111), in combination with labeling such sunscreens so as to permit consumers a choice between sunscreens that contain them and those that do not. Van de Poel admits that (4), a weighing of expected benefits and credible hazards, "makes the analysis more speculative" (109); he suggests these be measured in a "more quantitative" way, with the continued gathering of information regarding them, and notes that "the proposed criterion then sets an a priori burden on the proposer of the societal experiment to quantifiably show that based on current best knowledge the benefits outweigh the risks" (109–110). Since neither condition (3) nor condition (4) has been met by the introduction and continuing marketing of nanoparticle-ized TiO2-containing sunscreens, van de Poel concludes that the practice "can be perceived as a morally unacceptable societal experiment" (110).7
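It is worth sketching what "quantifiably show" might minimally require here; the schema below is my own illustration, not a formula van de Poel provides. Condition (4) can be read as placing on the proposer the burden of defending, with current best estimates, an inequality of expected values:

\[ \sum_{i} p_{i}\,b_{i} \;>\; \sum_{j} q_{j}\,h_{j}, \]

where the \(b_i\) are the credible benefits of the experiment with estimated probabilities \(p_i\), and the \(h_j\) are the credible hazards with estimated probabilities \(q_j\). On this reading, the marketer of the nanoparticle sunscreens would have had to populate both sides of the inequality, and defend the estimates publicly, before rather than after dissemination.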

6 As contrasted with alternative methods for accomplishing the same goal through another technology, which he considers under condition (3).

7 I remember the shock with which I discovered in the early 2000s both that nanoparticles were already being widely used in commercial products and that my own university was heavily involved in their development and "commercialization"; I also remember the consternation I felt upon showing my dermatologist an image of distorted mitochondria in cell culture after exposure to one such formulation of titanium dioxide (see Long et al. 2006) and learning that she had no idea if the sunscreens her spa was selling contained nanoparticles.

It would seem that the introduction and continually expanding presence of the products of biotechnology on our plates and within our ecosystems could also be considered a similarly questionable "social experiment" according to the above criteria. Just as many relevant studies of the effects on complex living systems have yet to be conducted for sunscreens and other consumer products containing nanoparticles, there are studies at many levels (genomic, proteomic, physiological, ecological) that remain to be conducted with GMOs and their products (including products contaminated with residues of glyphosate, since its use is an integral part of the technological practice), and performing such studies in the laboratory or in a carefully "contained" field experiment (if such is possible at all) would pose alternatives to the large-scale social experiments being conducted now, which violate van de Poel's conditions (1) and (2). Some countries require the labeling of products containing GMOs, but many do not, thus making informed consent (3) impossible in those places. With respect to (4), it would seem reasonable that those who propose not only to continue to utilize these technologies (which, as Pusztai pointed out, unfortunately cannot be recalled) but to expand both the number of acres to be affected and the degree of alteration implemented (e.g., the "stacking" of genetically altered traits and the addition of multiple herbicides to the weed-control regimes) be expected to engage, from this point on, in a more serious effort to quantify the benefits and risks than has been shown heretofore.

The striking advantage of taking the "social experiment" approach, as I see it, is that, in adopting a perspective that conceives of making particular changes to our extant "social-ecological systems" (Folke et al. 2010, 1) as scientific experiments, it forces us to look first at that which science studies: our ontologically objective reality. In their recent attempt to draw attention to our human interventions on the global scale by defining a set of "planetary boundaries" which should not be crossed, Steffen et al. (2015) note that "the planetary boundary (PB) framework contributes to such a paradigm by providing a science-based analysis of the risk that human perturbations will destabilize the ES [Earth System] at the planetary scale"; moreover, with respect to the subject matter under discussion here, it is significant that they say "although we cannot identify a single [planetary boundary] for novel entities (here defined as new substances, new forms of existing substances, and modified life forms that have the potential for unwanted geophysical and/or biological effects), they are included in the [planetary boundaries] framework, given their potential to change the state of the [Earth System]" (ibid). Taking such an approach also leads us to evaluate expected economic benefits in terms of how they "cash out" within this concrete, three-dimensional reality, not just in terms of "making money" within an ontologically subjective belief system. Moreover, it emphasizes the reality of our own objective status as moral agents—that our agency is "real in the world," as observed by Kauffman (2013, 168–169).
In putting us human beings centrally in the picture as agentive beings who can evaluate and make judgments regarding the acceptability of such experiments, it also, in effect, contributes to Searle's goal of "showing the continuous line" between molecules, organisms like ourselves, and the belief systems and social institutions that are our own creations. Yes, changing the way our extant global economic system functions can also be counted as a "social experiment," but this level of organizational complexity is one at which conscious human intervention and change quite legitimately can and does occur without threatening the way organismal and biospherical systems function.8

Facing up to the dazzling complexity of the living world and our place within it means awakening to the good news that LaPlace's demon has finally been vanquished and that we humans aren't biologically "determined" to behave in any particular way. It also means that we needn't behave as though we were economically "determined" to do things in a predetermined way—particularly not if there is a possibility that doing so might be harmful to the living systems of our bodies and the planet. An advantage of making this paradigm shift in our perspective, and of utilizing the methodology of a nonreductive science to examine and evaluate ongoing and proposed interventions into such complex systems at many levels, is that it allows us to go ahead and make moral judgments about new technologies without fearing to tread on the toes of economists; it corrects the ontological reversal that currently has us obeying the directives of our social constructions at the expense of the complex workings of living systems, upon which all such constructions must be founded.

8 For an example of a limited but very successful such conscious intervention, the collective decision by the people of Sweden to switch from driving on the left side of the road to driving on the right, see Kincaid (1986, 159–162).

References

Agapito-Tenfen, S. Z. (2014). Effect of stacking insecticidal CRY and herbicide tolerance EPSPS transgenes on transgenic maize proteome. BMC Plant Biology, 14, 346. doi:10.1186/s12870-014-0346-8.
Aguzzi, A. (2014). Alzheimer's disease under strain. Nature, 512, 32–33.
Alford, C. F. (2001). Whistleblowers: Broken lives and organizational power. Ithaca: Cornell University Press.
Anderson, S. W. (2009). A conversation with Dr. Arpad Pusztai. GeneWatch, 22(1). Reprinted in S. Krimsky & J. Gruber (Eds.), The GMO deception (pp. 16–24). New York: Skyhorse Publishing, 2014.
APHIS Factsheet. (2013). Questions and answers: BSE comprehensive rule. APHIS Veterinary Services.
Barnes, R., & Lehman, C. (2013). Modeling of bovine spongiform encephalopathy in a two-species feedback loop. Epidemics, 5, 85–91.
Barnosky, A. D., et al. (2012). Approaching a state shift in Earth's biosphere. Nature, 486, 52–58.
Belay, E. D., & Schonberger, L. B. (2005). The public health impact of prion diseases. Annual Review of Public Health, 26, 191–212.
Blakeslee, S. (2003). Expert warned that mad cow was imminent. The New York Times, December 25.
Bradley, R., Collee, J. G., & Liberski, P. P. (2006). Variant CJD (vCJD) and bovine spongiform encephalopathy (BSE): 10 and 20 years on: Part 1. Folia Neuropathologica, 44(2), 93–101.
BSE Inquiry. (2000). The Inquiry into BSE and variant CJD in the United Kingdom. http://webarchive.nationalarchives.gov.uk/20090505194948/http://www.bseinquiry.gov.uk/report/. Accessed 20 September 2014.
Carroll, D. (2011). Genome engineering with zinc-finger nucleases. Genetics, 188, 773–782. doi:10.1534/genetics.111.131433.
CDC: BSE. (2013). http://www.cdc.gov/ncidod/dvrd/bse/index.htm. Accessed 21 September 2014.
CDC Fact Sheet. (2014). CDC fact sheet: Variant Creutzfeldt-Jakob disease. http://www.cdc.gov/ncidod/dvrd/vcjd/factsheet_nvcjd.htm June 3, 2014. Accessed 21 September 2014.
Collee, J. G., Bradley, R., & Liberski, P. P. (2006). Variant CJD (vCJD) and bovine spongiform encephalopathy (BSE): 10 and 20 years on: Part 2. Folia Neuropathologica, 44(2), 102–110.
Conner, A. J., & Jacobs, J. M. E. (1999). Genetic engineering of crops as potential source of genetic hazard in the human diet. Mutation Research, 443, 223–234.
de Waal, F. (1996). Good natured: The origins of right and wrong in humans and other animals. Cambridge: Harvard University Press.
Domingo, J. L., & Bordonaba, J. G. (2011). A literature review on the safety assessment of genetically modified plants. Environment International, 37, 734–742.
Dyckman, L. J. (2000). Food safety: Controls can be strengthened to reduce the risk of disease linked to unsafe animal feed. Report to the Honorable Richard Durbin, United States Senate, September 22, 2000 (B-285212).
Edelman, G. M., & Gally, J. A. (2001). Degeneracy and complexity in biological systems. PNAS, 98(24), 13763–13768.
Editorial. (2006). Mad cow watch goes blind. USA Today, August 3.
Ewen, S. W. B., & Pusztai, A. (1999). Effects of diets containing genetically modified potatoes expressing Galanthus nivalis lectin on rat small intestine. The Lancet, 354, 1353–1354.
Fernandez-Cornejo, J., et al. (2014). Genetically engineered crops in the United States. USDA Economic Research Service, Report No. 162.
Fisher, L. (2009). The perfect swarm: The science of complexity in everyday life. New York: Basic Books.
Folke, C., et al. (2010). Resilience thinking: Integrating resilience, adaptability and transformability. Ecology and Society, 15(4), 20.
Gilbert, N. (2013). A hard look at GM crops. Nature, 497, 24–26.
GM Study Retracted. (2013). Nature, 504, 13.
Guyton, K. Z., et al., on behalf of the IARC Monograph Working Group. (2015). Carcinogenicity of tetrachlorvinphos, parathion, malathion, diazinon, and glyphosate. www.thelancet.com/oncology. Published online March 20, 2015. doi:10.1016/S1470-2045(15)70134-8.
Hacein-Bey-Abina, S., Hauer, J., Lim, A., et al. (2010). Efficacy of gene therapy for X-linked severe combined immunodeficiency. New England Journal of Medicine, 363, 355–364.
Hacein-Bey-Abina, S., von Kalle, C., Schmidt, M., et al. (2003). A serious adverse event after successful gene therapy for X-linked severe combined immunodeficiency. New England Journal of Medicine, 348, 255–256.
Hammond, B. G., et al. (1996). The feeding value of soybeans fed to rats, chickens, catfish and dairy cattle is not altered by genetic incorporation of glyphosate tolerance. Journal of Nutrition, 126(3), 717–727.
Harmon, A. (2013). Golden rice: Lifesaver? News analysis, The Sunday Review, The New York Times, August 24.
Harrison, L. A., et al. (1996). The expressed protein in glyphosate-tolerant soybean, 5-enolpyruvylshikimate-3-phosphate synthase from Agrobacterium sp. strain CP4, is rapidly digested in vitro and is not toxic to acutely gavaged mice. Journal of Nutrition, 126(3), 728–740.
Hilbeck, A., et al. (2015). No scientific consensus on GMO safety. Environmental Sciences Europe, 27, 4. doi:10.1186/s12302-014-0034-1.
Joy, B. (2000). Why the future doesn't need us. Wired, 8.04, April.
Kauffman, S. A. (2013). Evolution beyond Newton, Darwin, and entailing law: The origin of complexity in the evolving biosphere. In C. H. Lineweaver, P. C. Davies, & M. Ruse (Eds.), Complexity and the arrow of time (pp. 162–190). Cambridge: Cambridge University Press.
Kempner, J., Perlis, C. S., & Merz, J. F. (2005). Forbidden knowledge. Science, 307(5711), 854. doi:10.1126/science.1107576.
Kincaid, P. (1986). The rule of the road: An international guide to history and practice. New York: Greenwood Press.
Konig, A., et al. (2010). The SAFE FOODS framework for improved risk analysis of foods. Food Control, 21, 1566–1587.
Krohn, W., & Weyer, J. (1994). Society as a laboratory: The social risks of experimental research. Science and Public Policy, 21(3), 173–183.
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: The University of Chicago Press.
Kuiper, H. A., Kok, E. J., & Davies, H. V. (2013). New EU legislation for risk assessment of GM food: No scientific justification for mandatory animal feeding trials. Plant Biotechnology Journal, 11, 781–784.


Lineweaver, C. H., Davies, P. C. W., & Ruse, M. (2013). What is complexity? Is it increasing? In C. H. Lineweaver, P. C. Davies, & M. Ruse (Eds.), Complexity and the arrow of time (pp. 3–16). Cambridge: Cambridge University Press.
Long, T. C., et al. (2006). Titanium dioxide (P25) produces reactive oxygen species in immortalized brain microglia (BV2): Implications for nanoparticle neurotoxicity. Environmental Science and Technology, 40(14), 4346–4352.
McNeil, D. G. (2005). Case of mad cow in Texas is first to originate in U.S. The New York Times, June 30.
Mesnage, R., Bernay, B., & Seralini, G. E. (2013). Ethoxylated adjuvants of glyphosate-based herbicides are active principles of human cell toxicity. Toxicology, 313, 122–128. doi:10.1016/j.tox.2012.09.006.
Millstone, E., Brunner, E., & Mayer, S. (1999). Beyond 'substantial equivalence'. Nature, 401, 525–526.
Mitchell, S. D. (2003). Biological complexity and integrative pluralism. Cambridge: Cambridge University Press.
Mitchell, M. (2009a). Complexity: A guided tour. Oxford: Oxford University Press.
Mitchell, S. D. (2009b). Unsimple truths: Science, complexity, and policy. Chicago: The University of Chicago Press.
Myles, I. A. (2014). Fast food fever: Reviewing the impacts of the Western diet on immunity. Nutrition Journal, 13, 61. http://www.nutritionj.com/content/13/1/61. Accessed 28 September 2014.
Norgaard, K. M. (2011). Living in denial: Climate change, emotions, and everyday life. Cambridge: The MIT Press.
Padgette, S. R., et al. (1996). The composition of glyphosate-tolerant soybean seeds is equivalent to that of conventional soybeans. Journal of Nutrition, 126(3), 702–716.
Palumbi, S. R. (2001). The high-stakes battle over brute-force genetic engineering. The Chronicle of Higher Education, April 13.
Powell, R. (2010). What's the harm? An evolutionary theoretical critique of the precautionary principle. Kennedy Institute of Ethics Journal, 20(2), 181–206.
Republished Paper Draws Fire. (2014). Nature, 511, 129.
Robin, M.-M. (2010). The world according to Monsanto: Pollution, corruption, and the control of the world's food supply. New York: The Free Press. Translated from the French edition, published 2008.
Rockstrom, J., et al. (2009). A safe operating space for humanity. Nature, 461, 472–475.
Rosi-Marshall, E. J., et al. (2007). Toxins in transgenic crop byproducts may affect headwater stream ecosystems. PNAS, 104, 16204–16208.
Samsel, A., & Seneff, S. (2013). Glyphosate's suppression of cytochrome P450 enzymes and amino acid biosynthesis by the gut microbiome: Pathways to modern diseases. Entropy, 15, 1416–1463. doi:10.3390/e15041416.
Schnabel, J. (2011). Alzheimer's disease: Outlook. Nature, 475, S12–S14.
Searle, J. R. (1995). The construction of social reality. New York: The Free Press.
Searle, J. R. (2010). Making the social world: The structure of human civilization. Oxford: Oxford University Press.
Seralini, G.-E., et al. (2012). Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize. Food and Chemical Toxicology, 50, 4221–4231. Retracted. Republished 2014, Environmental Sciences Europe, doi:10.1186/s12302-014-0014-5.
Seralini, G.-E., Cellier, D., & de Vendomois, J. S. (2007). New analysis of a rat feeding study with a genetically modified maize reveals signs of hepatorenal toxicity. Archives of Environmental Contamination and Toxicology, 52(4), 596–602.
Smith, E. (2013). Emergent order in processes: The interplay of complexity, robustness, correlation, and hierarchy in the biosphere. In C. H. Lineweaver, P. C. Davies, & M. Ruse (Eds.), Complexity and the arrow of time (pp. 191–223). Cambridge: Cambridge University Press.
Smith, P. G., & Bradley, R. (2003). Bovine spongiform encephalopathy (BSE) and its epidemiology. British Medical Bulletin, 66, 185–198.
Snell, C., et al. (2012). Assessment of the health impact of GM plant diets in long-term and multigenerational animal feeding trials: A literature review. Food and Chemical Toxicology, 50, 1134–1148.
Steel, D. (2015). Philosophy and the precautionary principle: Science, evidence, and environmental policy. Cambridge: Cambridge University Press.
Steffen, W., et al. (2011). The Anthropocene: From global change to planetary stewardship. Ambio, 40, 739–761.


Steffen, W., et al. (2015). Planetary boundaries: Guiding human development on a changing planet. Science. doi:10.1126/science.1259855.
Tabashnik, B. E., Brévault, T., & Carrière, Y. (2014). Insect resistance to genetically engineered crops: Successes and failures. ISB News Report. Accessed 24 September 2014.
Tyler, A. L., et al. (2009). Shadows of complexity: What biological networks reveal about epistasis and pleiotropy. BioEssays, 31, 220–227.
van de Poel, I. (2009). The introduction of nanotechnology as a societal experiment. In S. Arnaldi, A. Lorenzet, & F. Russo (Eds.), Technoscience in progress: Managing the uncertainty of nanotechnology (pp. 129–142). Amsterdam: IOS Press.
van de Poel, I. (2010). Sunscreens with titanium dioxide (TiO2) nano-particles: A societal experiment. Nanoethics, 4, 103–113.
van de Poel, I. (2013). Why new technologies should be conceived as social experiments. Ethics, Policy & Environment, 16(3), 352–355.
Waldman, M., & Lamb, M. (2004). Dying for a hamburger: Modern meat processing and the epidemic of Alzheimer's disease. New York: St. Martin's Press.
Waltz, E. (2009). GM crops: Battlefield. Nature, 461, 27–32. doi:10.1038/461027a.
Weiss, K. M., & Buchanan, A. V. (2011). Is life law-like? Genetics, 188, 761–771.
Will, R. G., et al. (1996). A new variant of Creutzfeldt-Jakob disease in the UK. The Lancet, 347, 921–925.
Zerubavel, E. (2006). The elephant in the room: Silence and denial in everyday life. Oxford: Oxford University Press.
