LLP: How Low Will We Go?

(Mary Lou is an Ontario lawyer and one of the founding members of NoMore GMOs.)

In my last two articles I explained how testing of genetically modified organisms (GMOs) in food is not a legislated requirement in Canada, and that the term “substantial equivalence” is one legal mechanism that allows GM foods to escape testing. This article explains how our federal legislators are now trying to extend their reach and pull the wool over the eyes of not just Canada but our trading partners as well. They are doing this through the proposed adoption of “Low Level Presence” or “LLP”[i]. I’ll first explain the proposal and the rationale, then identify the concerns.

Proposal: Low Level Presence is a proposal that would allow crops in Canada to be contaminated by imported genetically modified (GM) crops that have not been approved as safe in Canada. The federal government (Government) itself admits, “[u]nder the current Canadian regulatory framework, the presence of any unauthorized GM crop constitutes non-compliance”. So what they are trying to do is currently illegal.

The proposal is that two contamination levels be set. The first permits contamination below an “Action Level” of 0.1% or 0.2%. The only requirements are that the GM crop has been approved for use as food in another country and that Canadian officials consider the safety assessment conducted by that country to be “consistent” with the Codex Food Safety Assessment Guidelines[ii].
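To make the percentages concrete, here is a minimal sketch of what such a threshold check amounts to. Only the 0.1% Action Level figure comes from the proposal; the function name and the sampling numbers are hypothetical illustrations.

```python
# Hypothetical illustration of an "Action Level" check; the 0.1% figure
# is from the proposal, but the sampling numbers below are made up.

def below_action_level(gm_kernels, total_kernels, action_level=0.001):
    """True if the measured GM fraction in a sample is below the Action Level."""
    return gm_kernels / total_kernels < action_level

# 5 GM kernels in a 10,000-kernel sample is 0.05% GM: below a 0.1% level
print(below_action_level(5, 10_000))   # True
# 25 GM kernels is 0.25% GM: above the level, so non-compliant
print(below_action_level(25, 10_000))  # False
```

The check itself is trivial arithmetic; the substantive questions the article raises are about where the cut-off comes from and what safety evidence stands behind it.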

The second contamination level is the “Threshold Level”, which will be set “to reflect achievable levels of unintentional presence” based on how crops are handled and transported. This Threshold Level will be set by a committee of stakeholders in the agricultural sector (food or feed producers and processors, retailers, crop developers, importers and exporters) and academia. Contamination at or below the Threshold Level will be allowed, provided a risk assessment (as conveniently defined) is conducted. The definition indicates the aim is to identify potential hazards and potential routes of exposure. The assessment will rely on information “available at the time” to determine the likelihood of an adverse effect on health or the environment.

Rationale: The Government’s stated reason for the proposed policy is that the number and variety of commercialized GM crops are expected to increase and it is difficult to prevent contamination from occurring.  In other words, GM contamination is a given.

However it is apparent that the Government is really promoting LLP as part of its trade agenda. It wants other countries to adopt the LLP standard so that Canada can export GM contaminated crops to these countries. It sees adoption of the standard at home as a first step toward this goal.

Like Canada, other countries have zero tolerance for the presence of GM substances in crops, whether seed, animal feed or food (although the European Union has allowed up to 0.1% of GM in animal feed imports, subject to certain provisos, on the basis that 0.1% is “technically zero”). The legislators of these countries have seen fit to protect their imports from GM. However, such legislation effectively blocks the trade of contaminated crops, in particular grain, from Canada to countries with such policies or that have not approved the particular GM variety for human consumption or environmental release.

The issue came to a head in 2009 when the European Union rejected GM contaminated flax from Canada and an embargo was imposed. Since then, the Government’s goal has been to change its laws and get other countries to do the same. As the Agriculture Minister has stated, “Well, you can’t point at anyone else when your own standard is zero (tolerance)… We have begun the process of moving away from zero to help strengthen our commitment to this. We’re working out the details”[iii].

2 Main Concerns:

1. GM contaminated crops imported into Canada avoid Canadian health and safety regulations

The idea that Canadian law is not applicable, provided the GM crop is at a low quantity and has been approved elsewhere, is not protecting the health and safety of Canadians or the environment.  Some points on this:

  • The Action Level of 0.1% has been arbitrarily set.
  • There is no evidence that GM presence at low quantities is safe.  Safety at low levels is just presumed. However risk assessment is about quantity AND quality.  GM at low quantities could have a huge qualitative impact, because genetic modification causes changes in organisms that are not linear and cannot be predicted.
  • Safety is also presumed because another country approved the product.  But this presumption doesn’t stand up either:
    • Under LLP officials won’t perform any rigorous checks on the other country’s approval.  It is not known if access to foreign assessments would even be allowed.
    • If the approval is “consistent” with the Codex Guidelines, contamination is permitted.  However no guidance is provided on what “consistent” means, which opens the door for arbitrary and subjective decision making.
    • The Codex Guidelines in risk assessment require examination of various qualitative factors, including the cultivation, development, storing, handling, transporting, processing, preparation and consumption of the item in issue.  These qualitative factors often differ from region to region.  Given these differences, it is misguided to transfer an assessment that depends on these factors from one country to another.
    • The Codex Guidelines are based on the principle of “substantial equivalence”, but this principle cannot be rationally applied in the GM context.  See my article on the principle in the November newsletter.
    • There may be other factors, such as political or monetary factors, brought to bear in the other country that influence the approval process.  The Canadian official may not know about these, and even if known, their import cannot be estimated.  It cannot be presumed that because a GM crop was approved in one country that it should be considered safe for Canadian purposes.
    • Looking to other countries shifts the legislative responsibility of our Government for health and safety to others.  This is a case of transferring legislative jurisdiction to third parties who do not have authority to decide on matters for Canadians.  It is contrary to public and administrative law.
    • The Threshold Level will be set by a committee of people where the majority of members (food or feed producers and processors, retailers, crop developers, importers and exporters) stand to gain from increased trade in GM contaminated crops.
    • The Threshold Level is to establish “achievable levels” of contamination based on current methods of handling and transporting grain.  In other words, the committee will look at the contamination that is occurring and legalize it.  They will legislate the harm, rather than prevent it.  This approach is not preventative and runs counter to the legislated mandate of the Government to protect the health and safety of humans, animals and the environment.
    • The rationale provided for legislating the harm is that the number and variety of GM products is expected to increase and it will be difficult under current best management practices to prevent GM contamination from increasing as well.  There are various points here:
      • If it is the case that the number and variety of GM products commercialized will increase, it is the case that the likelihood of harm to human and animal health and the environment will increase.  Again, small quantities of GM can interact with non-GM varieties in qualitative ways that are unpredictable.
      • The expectation that the number and variety of GM products commercialized is expected to increase reveals the approach the Government expects to take to GM products: it will approve them.
      • GM contamination can occur in various ways, depending on the crop. In some cases, such as alfalfa, the non-GM crop is immediately prone to contamination because alfalfa is pollinated by insects and cannot be contained.  In other cases, contamination occurs only by contact between the GM and the non-GM variety, so avoiding contact avoids contamination.  The solution in the first case is simply not to introduce the GM variety at all.  The solution in the second case is to keep the non-GM and GM varieties from coming into contact during handling. Segregation of crops is an achievable practice. In addition, testing of crops for contamination at the export stage would need to be implemented.  These are technical fixes that would require better management practices, but they are by no means impossible.  Such contamination should not be taken as a given.
    • The Threshold Level test is based on a risk assessment which is focussed on “likelihood” that an adverse effect is caused by the GM crop.  It is also “based on current information”.  Because GM substances cause changes that are unpredictable and unknown, the likelihood of an adverse effect will always be difficult to establish; even more so since “current information” doesn’t include information on low level presence because LLP is not currently legal. 

2. The Canadian Federal Government is trying to force GM onto zero tolerance countries 

No country has enacted legislation that tolerates GM contamination in imported crops (subject to the EU feed exception).  Zero tolerance and the ban on contaminated GM imports is part of a larger ban on the presence of GM crops in many European, Middle Eastern, Asian, African and South American states.[iv]

However in the international arena the Canadian Government is trying to make the case that these laws should be changed, and that other countries should accept contaminated crops.  And what rationale could Canada provide to convince these other countries?  There is no rational basis for the arbitrary level, and no evidence to show that GM is safe even at “low levels”.  The arguments put forward for change should be sound, not that “it is difficult to deal with” or “do it because Canada did” or the real reason: “it will promote Canadian trade in contaminated crops”.

The conclusion to be drawn is the proposal for LLP is sinking to a new low.  Not only does the proposal skirt analysis of the health and safety of Canadians and the environment, it even goes so far as to try to convince others to do the same in their countries.  Do we really want to go that low?


[i] Agriculture and Agri-Food Canada. (2012). Government of Canada Proposed Domestic Policy on the Management of Low-Level Presence of Genetically Modified Crops in Imports and its Associated Implementation Framework.  http://www.agr.gc.ca/eng/about-us/public-opinion-and-consultations/consultation-on-the-proposed-domestic-policy-and-implementation-framework-on-the-management-of-low-level-presence-of-genetically-modified-crops-in-imports/government-of-canada-proposed-domestic-policy-on-the-management-of-low-level-presence-of-genetically-modified-crops-in-imports-and-its-associated-implementation-framework/?id=1348076201400

[ii] Codex Guideline for the Conduct of Food Safety Assessment of Foods Derived from Recombinant-DNA Plants (CAC/GL 45-2003). http://www.codexalimentarius.org/input/download/standards/10025/CXG_046e.pdf

[iii] Allan Dawson. (October 24, 2011). Canada Working on Low-Level GM Presence Policy. Alberta Farmer Express  http://www.albertafarmexpress.ca/2011/10/24/canada-working-on-lowlevel-gm-presence-policy/

 

Genetics 101: Why you should be concerned about GMOs.

By Ramsey Affifi (author of GMO-related blog, thegeneticengineeringdebate.blogspot.ca)

When genetic engineering hits the news, the headlines are so confusing and contradictory that it is hard for any of us to make sense of it. On the one hand, biotech proponents claim that genetic engineering is just a more precise way of breeding, one that holds great promise for ending malnutrition and alleviating ecological collapse. On the other hand, anti-biotech activists claim that genetic engineering is unnatural, unethical and inherently dangerous. Is this a case of crazy, hypochondriac foodies picking fights they know nothing about with an established and highly regulated science? Or of biotech companies claiming science as their authority in order to force unwanted products on hapless consumers?

As is often the case with polarized issues, each side overstates its case. The fact is that there are many novel risks associated with genetically modified organisms (“GMOs”), many of which are not adequately acknowledged. This does not mean that every GMO will be perilous to humanity or to life on this planet. Indeed, some of them may turn out to be safe (though still not necessarily desirable). What is not safe is that many governments, regulatory bodies, and citizens are persuaded that the biotech companies’ version of the story is the accurate “scientific” version. As it turns out, genetics as a science can hardly condone the haphazard mixing and matching of genes undertaken in company labs. It is simply false to claim that people concerned about GMOs are anti-science (though many of them do make scientifically questionable claims). To understand why the science of genetics leads to skepticism about genetic engineering, we need to familiarize ourselves with some basics of how genes and genetic engineering work. Doing so will provide the clarity we need to understand the risks of GMOs and enable us to make better choices in maintaining a safe and sustainable food system.

1. How is “genetic engineering” different from “traditional breeding”?
Traditional breeding occurs by having two organisms of a single species mate (see footnote i. below). When they mate, their offspring will have a set of genes, some of which come from the male and others from the female. By controlling which organisms breed, breeders can gradually enable certain traits to be expressed more strongly. For instance, if I wanted to breed cats with longer ears, I would choose a male and a female with long ears, create the conditions for them to mate, and wait for the results. Some of their offspring will in turn have longer ears than others, and I can choose to continue breeding them with other long-eared cats. Eventually, over many generations, the cats’ ears would get longer and longer (provided, of course, that this trait was able to grow while maintaining the integrity of the rest of the organism’s physiology). The point is that what can and cannot be bred is dictated strictly by what is possible for each species based on variations that are already occurring within it.
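The selection loop described above can be sketched as a toy simulation. All the numbers here are made up for illustration (this is not a genetics model): each generation, only the longest-eared individuals breed, and offspring vary only slightly around their parents.

```python
import random

random.seed(1)  # make the toy run repeatable

def breed(population, generations=20, keep=10):
    """Toy selective breeding: keep the largest trait values each generation."""
    for _ in range(generations):
        parents = sorted(population, reverse=True)[:keep]  # pick long-eared cats
        population = [
            # offspring trait: average of two random parents, plus a little
            # variation of the kind already present in the population
            (random.choice(parents) + random.choice(parents)) / 2
            + random.gauss(0, 0.1)
            for _ in range(len(population))
        ]
    return population

ears = [random.gauss(5.0, 0.5) for _ in range(50)]  # starting ear lengths (cm)
bred = breed(ears)
print(sum(ears) / len(ears), sum(bred) / len(bred))  # mean ear length rises
```

Selection shifts the population mean upward generation by generation, but the only novelty available each generation is small variation of the sort already occurring in the species, which is the constraint the paragraph above describes.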

With genetic engineering, something quite different is going on. A scientist will isolate a specific gene or genes that seem to be responsible for a specific trait and will then insert them into another organism with the aim of getting that trait expressed in the new organism. So, for example, scientists have isolated genes involved in making petunia plants resistant to the herbicide known as glyphosate and have inserted them into a number of staple crops, such as soy and alfalfa for animal feed, to produce plants that can withstand applications of herbicides. Biotech companies are doing similar experiments on a variety of lifeforms, from bacteria, to trees, to animals. In one particularly shocking experiment, scientists identified the genes that make fireflies glow in the dark and have inserted those genes into the genetic code of cats. As a result, they have now produced cats that glow in the dark (pictures on Google).

While this may all seem like a neat sci-fi trick, and certainly appeals to the tech-geek in some of us, there are a number of well-documented ways in which these sorts of experiments can, and do, go wrong. There are certainly ethical issues to be carefully considered when conducting these sorts of experiments, too. These issues need to be steadfastly separated from the thrilling power and curiosity some scientists feel at being able to create a seemingly endless number of wild and wacky things simply by combining and recombining genes. Many bioethicists have opened discussions as to whether other species have a right not to be experimented on and modified in these ways (Vorstenbosch, 1993; Oritz, 2004). My purpose here is more modest. I seek simply to outline some of the established reasons why there are real risks associated with the production, release, and consumption of GMOs. Radical and exciting developments in our understanding of the genetic code have emerged in the last decade or so, casting serious doubt on the innocent-until-proven-guilty stance of proponents of GMO technology. At the very least, these developments indicate that a great deal more regulatory scrutiny is necessary than is currently required.

To introduce the reader to how geneticists are now thinking about the behaviour and properties of genes, I will outline the concept of “genetic networks” and “dynamic gene activity.” Then, I will discuss the techniques that genetic engineers use in order to create GMOs, showing how these methods contrast with geneticists’ emerging understanding of how genes work.

2. Our current understanding: Genetic networks
Biotech proponents usually tell a simplified and mechanical version of how genes work. The story, which some of you may remember from high school biology classes, goes something like this: DNA is a long string of molecules, most of which is random and meaningless. However, there are occasional segments of DNA which produce RNA, which in turn produces specific proteins. These active segments of DNA are known as genes, and people generally abbreviate the process by saying that “genes code for proteins”. These proteins are extremely varied and extremely important because they are the building blocks of the entire body. Not only do they build organs and tissues but they also metabolize many of the molecules that enable all the physiological processes that regulate the body. In other words, proteins are the stuff that make up both the form and the function of an organism. If the proteins change, the form and/or function of the organism changes as well.

As the story is told, each gene has a specific and discrete role, so a genetic engineer can simply “knock out” a gene that is doing something undesired or insert another gene that produces a valued protein, and thereby enhance the organism’s form or function. The novel gene can come from a number of sources: a different location in the organism’s own code, a closely related organism, a completely different organism, or even the lab, after being manufactured synthetically (see footnote ii. below).

The story is widespread in our textbooks, in the media, and on the internet. Part of the reason why it is difficult for people to see the dangers of genetic engineering is that when the process is framed in this way, it seems intuitive and logical. It appeals to our understanding of how machines work, for instance. We can (or should be able to) take out and replace or update parts of our car or computer. So why not more complex structures, like lifeforms? The problem is that the genome is not really built in the same way as a machine.

Genetic engineering is based on a misleading understanding of the relationships between genes in the genetic code. In fact, “genes” as isolated entities may only very rarely exist. The logic that assumes otherwise long ago fell out of favor with scientists, who now prefer to think in terms of “genetic networks” (Dillon, 2003; for a debate on how such networks evolved, see Sansom, 2011). The basic idea of a genetic network is that the behavior of a gene depends on other genes, and that combinations of genes influence each other in complex ways that make it difficult to isolate what a gene “does” except in some rare cases. A gene often has the strongest influence on (and is in turn most strongly influenced by) the genes closest to it on the genetic code, but this is not always the case. Intricate chain reactions can occur between relatively faraway parts of the code, especially when the code is bent such that certain parts come into closer contact with one another (as occurs through what are known as “histone modifications”; Fischle, Wang, & Allis, 2003).

One way that genes can influence each other is through a process known as methylation. In methylation, a gene can be partially or completely silenced by having parts of its code bound to methyl molecules (Jaenisch & Bird, 2003). There are specific portions of the genetic code, known as “regulatory DNA”, that control the extent to which various genes are and are not methylated. Genes interact with each other directly but can also interact indirectly by influencing regulatory DNA, which in turn methylates or demethylates other genes. In addition to methylation and histone modifications, there are a number of other ways that genes influence one another, including acetylation, translocation, pleiotropy, and transvection. The field studying these processes is collectively known as “epigenetics”, and it is a well-established area of research that has spawned strong and fertile research programs (see Jablonka & Lamb, 1995; Francis, 2012, for accessible discussions; see footnote iii. below).
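The interdependence described in the last two paragraphs can be sketched as a toy Boolean “gene network”, in the spirit of the Boolean network models used in systems biology. The genes A–D and their rules below are entirely made up for illustration; the point is only that flipping one regulatory gene changes which other genes end up expressed.

```python
# A toy Boolean gene network (hypothetical genes A-D, not any real genome):
# each gene is ON (True) or OFF (False), and a gene's next state depends
# on the states of other genes.

def step(s):
    return {
        "A": True,                    # A: an always-active gene
        "B": s["A"] and not s["C"],   # B is driven by A but silenced by C
        "C": s["C"] or s["D"],        # D switches C on; once on, C stays on
                                      # (loosely like methylation-based silencing)
        "D": s["D"],                  # D: an externally set regulatory gene
    }

def run(state, steps=5):
    for _ in range(steps):
        state = step(state)
    return state

off = run({"A": True, "B": False, "C": False, "D": False})
on = run({"A": True, "B": False, "C": False, "D": True})
print(off["B"], on["B"])  # prints: True False
```

With D off, gene B settles into being expressed; flip D on and, two steps downstream, B is silenced, even though nothing touched B directly. Real networks are vastly larger and also rewire over time, which is why the effect of inserting new material is so hard to predict.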

In 2000, entrepreneur Craig Venter’s company Celera Genomics mapped the entire genetic code of a human. The news hit headlines worldwide but the excitement faded fast once people realized the limited relevance of what had actually been done. The active coding portions of the DNA (i.e. the genes) had been identified, and many had been assigned specific functions in terms of their role in producing the human organism. But without any mapping of the actual genetic networks, the project ended up being a lot of hot air. Without knowing the relationships between the genes, it was as if the company had pulled a bunch of words out of a storybook, produced a crude list of definitions for each word, and was left with little understanding of the story itself. The important point here is that while individual genes may have specific functionality, it is the interaction of these genes, through their specific positions in the genome and their proximity to other genes, that really dictates their true nature and consequently that of the organism. If the genome is a machine, it is quite unlike anything humans have ever built before, its parts threaded together and interacting in complex ways, like how the precise meaning of words and sentences gets defined by the paragraphs and stories in which they are found.


3. The dynamic genome: Context matters

But even the storybook metaphor is misleading, because it conveys the sense that the relationships between genes are fixed, as the words in a book are. Not so with the genome. The story of the genetic code hasn’t been written yet; or rather, it gets written and rewritten constantly as an organism interacts with its changing environment. Not only are genes embedded in complex, interconnected networks, but the networks are themselves ever changing. For example, one gene will cause another to get expressed, which will down-regulate another, but only as long as a third is still coding for proteins. Whether or not a gene is expressed, and what its protein products go on to do, are entirely contingent, depending on the state of the rest of the genetic code and of the organism in which the gene is found. In an organism’s infancy, a gene may do something quite different than it does in an organism’s adulthood, which in turn is different from what that gene might contribute to in periods of environment-induced stress. For example, a recent study found that 86% of a fruit fly’s genes change significantly in expression throughout its lifespan (Arbeitman et al., 2002). On the other hand, parts of the DNA thought to be non-coding may activate in certain contexts (Makalowski, 2003; Biémont & Vieira, 2006). In fact, large sections of supposedly random and meaningless portions of the genetic code have turned out to have important roles in the changing behavior of the genome. Normally silent and lurking in the recesses of the genetic code, these genes and other DNA elements can be thought of as part of the built-in resiliency of the system, containing important back-up information and alternative routes and pathways that can kick in when critically needed (for example, Schlesinger, 1994).

The genome is therefore an incredibly complex and responsive system, adjusting itself to changes on multiple levels from changes within the cell, to changes between cells in the organism, to changes in the environment that the organism is continually adapting to. As a result, external changes can modify the way in which genes are or are not expressed. An intuitive way to think about it is to consider how a single fertilized egg cell eventually becomes a fetus and then a baby. That first cell, as we have all seen on TV, splits into two cells, and each of those cells split in turn, producing four cells. This splitting goes on and on until eventually a fetus emerges. But how did certain parts differentiate into separate body parts? The genes and the DNA in all of these cells are, after all, identical. How is it that some of the cells eventually become brain cells and others liver cells or blood cells? All of this happens precisely because of the genetic networks dynamically turning on and off certain genes at specific times, all based on the location of the cells relative to one another (Ridley, 1999). Context matters.

It isn’t just the human fetus that bears this complexity. All multicellular organisms do. And because of this complexity, it cannot be stated with any certainty that the genes that a biotech company lodges into a genetic code are benign. It is difficult to know whether or not inserted genes will interfere with a genetic network, disrupting, upregulating or downregulating other genes, or perhaps even splitting apart an important gene that goes unexpressed until an organism meets some specific stressful or demanding environmental situation. Even the most expensive and thorough current tests, which go far beyond any regulatory requirements for the industry, are not technologically capable of providing this information, because we simply do not know the entire genetic networks (nor how they can and do change over time) of the species that we genetically engineer. For example, “OMIC tests”, which are more specific forms of gene expression profiling, can catalogue gene products (such as transcripts, proteins, or metabolites) to give more detailed snapshots of some of the physiological changes that occur in the organism (Heinemann, Kurenbach, & Quist, 2011), but these are themselves limited, difficult to interpret, and not required in any current safety evaluations of GMOs (Lay & Liyanage, 2006). One of the biggest limitations of OMIC tests is that each test only gives results of changes at one point in time, so many tests would need to be taken to understand how the dynamic genome changes during the organism’s lifespan.

In the first two sections, I tried to show how the genome is connected in many still unknown ways, such that the behavior of a gene depends on the behavior of other genes and on its location relative to other genes, and that these connections themselves change over time as the organism adjusts to its internal and external environment. The “one gene, one protein/function” model is not reflective of the way scientists are now thinking about genetic networks and the interactions among them.

An important point here is that genetic networks are jeopardized only by genetic engineering, and not by traditional plant or animal breeding. Traditional breeding, which mates organisms of a single species together, respects genetic networks because when the chromosomes of the male and female join, their genes (alleles) are nearly invariably located in corresponding locations and thereby match up. In other words, the gene(s) for eye colour, for example, are located at the same place on the same chromosome for almost all organisms of a given species, so when meiosis occurs in sexual reproduction, the genes are aligned and present no problem in the offspring when some genes come from the male and others from the female. By contrast, genetic engineering does not respect genetic networks but instead inserts genetic material into them at random locations. Because context matters within the genetic network, the result of the insertion cannot be known, predicted, or controlled.

4. How do they get the genes in anyway?

For a biotech company to produce a GMO they have to insert genes into an organism’s genome and then get them expressed (i.e. coding for the desired protein). As it turns out, this is not an easy task, and it is precisely here where many of the most concerning risks of genetic engineering are brought to the fore. In essence, the whole process involves shooting the desired gene randomly into the host organism’s code with the hope that it will get lodged somewhere where it will not disrupt any genetic network too much. The gene, it should be pointed out, is not fired into the host code on its own. It is actually part of a patched-together assembly of genes from multiple different organisms (mostly viruses and bacteria) known as a “vector” or a “genetic cassette” (Ho, 1998). Among these genetic components there is an “origin of replication”, which is used to replicate the desired genes in preparation to shoot them into the host code; a “multiple cloning site”, which allows the scientists to insert the desired gene into the vector; a “genetic marker”, which is used to establish whether or not the vector was successfully inserted into the host code; and a “promoter”, which is used to ensure that once the vector is in the host code the desired gene is expressed.

The origin of replication, multiple cloning site, and promoters are of particular concern because they can make the desired gene, the vector, and the host organism’s code more unstable. They are all acquired from different pathogenic organisms such as viruses, and may have been further altered in the lab through synthetic DNA modification. Viruses replicate by parasitizing host genetic codes to create further copies of the virus. Genetic engineers have isolated various parts of viral DNA in order to make use of these hijacking techniques, taking parts of the viral machinery of different organisms and parceling them together into a vector. So, for example, a viral promoter is a gene that a virus uses to ensure that, once it gets into a host code, the viral genes will be recognized and expressed. This is necessary for genetic engineering because cells usually have mechanisms to ensure that foreign DNA does not continuously plant itself into the cell’s genome.

It should be apparent here that genetic engineering is far from being simply a “more precise” way to breed for desired traits. By contrast, it is a technique to insert DNA into another organism through bypassing its own defense system by using DNA from viruses and other pathogens. It should also be noted that viruses insert their DNA into sex cells only very rarely. Why is this important? If a virus inserts itself into any other cell of my body and uses the cell to spread, its genes are not being passed down to my descendants. For example, the rhinovirus, responsible for the common cold, only inserts itself into certain respiratory epithelial cells. Its genes do not travel to my sex cells and therefore have no impact on my future offspring. This minimizes ecological and evolutionary impact. Sex cells are generally well-protected in the body to avoid the risky disruptions that occur through viral infection. When genetic engineers say that GMOs are natural because they are employing a form of gene transfer that viruses and bacteria have been using for eons, we should view their claims in a critical light. In fact, in multicellular organisms such as plants and animals, the body has mechanisms precisely to prevent viral transfer of genetic material into its sex cells. This should warn us of the risks of doing so.

Using promoters to express the desired gene in the host code is especially risky because promoters often overpromote. They upregulate the gene (just like the regulatory DNA discussed earlier) so that it produces far more of its protein products than would be normal in its original context. Further, this capacity to overpromote changes the ways in which the host genes surrounding the vector end up being expressed, so the promoter often ends up activating unintended genes in the organism (Ho, Ryan, & Cummins, 1999). The concern here is that promoters have a strong capacity to modify the genetic networks of the host organism in unpredictable ways. When they are combined with other elements, such as the “origin of replication” segments, we sometimes find that the desired gene gets replicated and reinserted in different places across the genome, functioning like “jumping genes” (or, as they are technically known, transposons; see Keller, 1983, and footnote iv below). This further compromises the functional integrity of the genetic networks. Promoters and origin-of-replication segments may also make the vector (or parts of it) unstable and more likely to jump out of the host’s genome and into the environment. This now appears to be an uncommon process (de Vries & Wackernagel, 2004), though researchers point out that the colonization of the soil by bacteria that have incorporated such transgenic material should take much more time than has been assumed by many of these studies (Nielsen & Townsend, 2004). Further research is clearly necessary: if these genetic elements can be taken up by soil and gut bacteria, remaining in these micro-ecosystems indefinitely, they carry the risk of incorporating themselves into other organisms at some point in the future, with possibly unpredictable effects on those organisms’ genetic function.

5. Summary
The risks of genetic engineering emerge on various fronts. First, the “single gene, single function” model is not appropriate. Indeed, it is destructive because it trains us to think of life as a machine made of isolatable, detachable, and replaceable parts. The organism is better thought of as an evolving and interconnected system, whose parts are always defined contextually and contingently. When viewed from the perspective of fluid and dynamic genetic networks, the effects of inserting a gene into a genome are not predictable or easily measurable. This is one clear reason why public concern about GMOs needs to be taken seriously by regulatory agencies. Second, the techniques of genetic engineering are themselves problematic and may exacerbate the problems associated with inserting foreign DNA into organisms. Current biotech techniques, which use DNA from pathogenic organisms, can make the desired gene, the vector, and the host organism’s code unstable; can cause the overproduction of proteins and the expression of unintended genes, disrupting genetic networks; and may sometimes allow transgenic material to escape into the environment and be taken up by other organisms.

Although genetic engineering technologies are advancing rapidly, our understanding of the genome is undergoing a significant shift that puts into question many of the premises upon which GMOs are based. We can expect that genetic engineers will attempt to better accommodate the fact that genetic networks and epigenetic dynamics exist, and will cease to employ some of the riskier techniques described in this article. In any case, we must remain vigilant and continue to keep ourselves informed about these evolving approaches so that we can publicly discuss and critically evaluate the technologies for their potential impacts on our health and on the environment. Above all, we must raise our concerns in a clear and articulate manner with our government officials, who themselves may well need some educating on the science of the genome in the 21st century.

Footnotes

i. A third approach to breeding is called “mutagenesis” and it involves subjecting cells to high levels of radiation to induce mutations and selecting from the mutated cells those that have desirable traits. Mutagenesis has been around for about 100 years, but should not be considered a form of “traditional plant breeding”. In any case, it will not be discussed in this article.

ii. Although these procedures have different technical names (known as cisgenic, linegenic, transgenic, and xenogenic engineering respectively), they are all forms of genetic engineering that rely on essentially the same technologies and bear many of the same risks. The common scientific name for them is usually simply “transgenic” engineering.

iii. Of course, as the science of epigenesis evolves so will the desire to apply the knowledge. Epigenetic engineering seeks to modify the behavior of genes through altering the way in which the genes are read and/or expressed in cells. Epigenetic engineering is not necessarily a benign alternative to genetic engineering because it can instigate many of the same interactional effects as genetic engineering. Consumers should be on the lookout for developments in this new field (will they be called EGMOs, as in “epigenetically modified organisms”?).

iv. Botanist Barbara McClintock discovered transposons: mobile genetic elements that are part of an organism’s genome and can jump around and replicate inside it. Their evolutionary and immunological functions are a subject of great interest. In some ways, transposons may appear similar to the overactive promoters introduced by genetic engineers, but closer inspection reveals that the genome has actually evolved to let certain types of genes jump around while restricting this capacity in others. When genetic engineers add promoters to genes and insert them into the genome, the placement of the promoter can create new types of transposons that have no history of interaction within the organism’s genetic networks.

References

Arbeitman, M. N., Furlong, E. E. M., Imam, F., Johnson, E., Null, B. H., Baker, B. S., Krasnow, M. A., et al. (2002). Gene expression during the life cycle of Drosophila melanogaster. Science, 297(5590), 2270-2275.

Biémont, C., & Vieira, C. (2006). Genetics: Junk DNA as an evolutionary force. Nature, 443, 521-524.

de Vries, J., & Wackernagel, W. (2004). Microbial horizontal gene transfer and the DNA release from transgenic crop plants. Plant and Soil, 266, 91-104.

Dillon, N. (2003). Gene autonomy: Positions, please… Nature, 425, 457.

Fischle, W., Wang, Y., & Allis, C. D. (2003). Histone and chromatin cross-talk. Current Opinion in Cell Biology, 15(2), 172-183.

Griffiths, P. E., & Stotz, K. (2013). Genetics and philosophy. Cambridge, UK: Cambridge University Press.

Heinemann, J. A., Kurenbach, B., & Quist, D. (2011). Molecular profiling – a tool for addressing emerging gaps in the comparative risk assessment of GMOs. Environment International, 37(7), 1285-1293.

Ho, M.-W. (1998). Genetic engineering: dream or nightmare? Penn Valley, CA: Gateway Books.

Ho, M.-W., Ryan, A., & Cummins, J. (1999). Cauliflower mosaic viral promoter – A recipe for disaster? Microbial Ecology in Health and Disease, 11(4), 194-197.

Jablonka, E., & Lamb, M. J. (1995). Epigenetic inheritance and evolution: The Lamarckian dimension. Oxford, UK: Oxford University Press.

Jaenisch, R., & Bird, A. (2003). Epigenetic regulation of gene expression: How the genome integrates intrinsic and environmental signals. Nature Genetics, 33(Suppl.), 245-254.

Keller, E. F. (1983). A feeling for the organism: The life and work of Barbara McClintock. New York, NY: WH Freeman and Company.

Lay, J. O., & Liyanage, R. (2006). Problems with the “omics”. TrAC Trends in Analytical Chemistry, 25(11), 1046-1056.

Makalowski, W. (2003). Genomics: Not junk after all. Science, 300(5623), 1246-1247.

Nielsen, K. M., & Townsend, J. P. (2004). Monitoring and modeling horizontal gene transfer. Nature Biotechnology, 22, 1110-1114.

Ortiz, S. E. G. (2004). Beyond welfare: Animal integrity, animal dignity, and genetic engineering. Ethics and the Environment, 9(1), 94-120.

Ridley, M. (1999). Genome: The autobiography of a species in 23 chapters. New York, NY: HarperCollins.

Sansom, R. (2011). Ingenious genes: How gene regulation networks evolve to control development. Cambridge, MA: MIT Press.

Schlesinger, M. J. (1994). How the cell copes with stress and the function of heat shock proteins. Pediatric Research, 36, 1-6.

Vorstenbosch, J. (1993). The concept of integrity: Its significance for the ethical discussion on biotechnology and animals. Livestock Production Science, 36(1), 109-112.