“Are GMOs Good or Bad?” and Why That’s a Stupid Question



All over the internet, you can hear people arguing about whether GMOs, or GM crops, are safe. Are GMOs good, or are they bad? Honestly, it’s a pretty stupid question. Claiming a philosophical stance that they are inherently good or inherently bad is basically admitting up-front that you’re forgoing logic in favor of dogma. The genetic code of an organism can be modified in a virtually infinite number of ways. It would be an amazing coincidence if all those possible modifications just happened to be harmful, or all happened to be beneficial.

Whether a GM crop is good or bad depends entirely on how it has been modified. Therefore, this post will not attempt to prove that genetic modification itself is inherently right or wrong. Instead, it will focus on whether specific GM crops in circulation today can be shown to be harmful. The question shall be rephrased as “Are any existing GMOs bad?” This post will also explore the differences between genetic modification and artificial selection.


One of the first anti-GMO claims I heard had to do with “Roundup-ready” crops. These crops tolerate the herbicide Roundup, particularly its active ingredient, glyphosate. It is worth noting that the metabolic pathway targeted by glyphosate, the shikimic acid pathway, is not used by animals. (1) We ingest our aromatic amino acids directly instead of synthesizing them. It just goes to show that the term “poison” is relative. Many poisons do affect multiple types of life, but there are also major differences in toxicity between kingdoms of life. That doesn’t mean glyphosate is automatically safe in all quantities, but neither can we simply assume that a weed-killer is automatically an animal-killer.

Many images used by anti-GMO activists feature people spraying crops with glyphosate while wearing hazard suits. To be sure, a freshly-sprayed field of Roundup-ready crops is not something you would want to frolic around in. It produces eye and skin irritation. More extreme symptoms, such as vomiting and diarrhea, are thought to be due to the surfactants that are mixed in with the glyphosate to increase its dispersal and penetration. (2)

Many sources seem to point to these surfactants being more toxic than glyphosate itself. However, some studies do take surfactants into account, particularly POEA (polyethoxylated tallow amine), the primary surfactant used. They conclude that at current consumption levels, neither is a danger to the human population. (3)

For glyphosate, very high doses above 3,125 mg/kg body weight per day do seem to be associated with adverse effects in mice, such as weight loss, decreased sperm count, growth retardation, and hepatocyte hypertrophy. However, some animal studies show no ill effects from doses as high as 20,000 mg/kg. (4) For comparison, the lethal dose of caffeine is around 100 mg/kg. (5) Many of us regularly ingest over 1% of a lethal dose of caffeine.
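To put these numbers in perspective, a quick back-of-envelope calculation helps. The 70 kg body weight and ~100 mg of caffeine per cup below are my own illustrative assumptions, not figures from the cited studies:

```python
# Back-of-envelope dose comparison using the figures from the text.
# Assumptions (not from the cited studies): a 70 kg adult, ~100 mg caffeine per cup.

body_weight_kg = 70
caffeine_lethal_mg_per_kg = 100      # approximate lethal dose cited above
caffeine_per_cup_mg = 100            # assumed typical cup of coffee

lethal_caffeine_mg = caffeine_lethal_mg_per_kg * body_weight_kg
cup_fraction = caffeine_per_cup_mg / lethal_caffeine_mg

# Scaling the mouse-effect threshold (3,125 mg/kg/day) by body weight is a
# crude simplification, but it conveys the size of the margin involved.
glyphosate_threshold_g = 3125 * body_weight_kg / 1000

print(f"One cup of coffee is about {cup_fraction:.1%} of a lethal caffeine dose")
print(f"Naively human-scaled mouse effect threshold: {glyphosate_threshold_g:.0f} g of glyphosate per day")
```

Scaling animal doses by body weight alone is known to be crude, so treat the second figure as nothing more than a rough sense of scale.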

For research on GMO crops, one source of misinformation is Gilles-Éric Séralini. He is most famous for a study in which he fed Roundup-ready corn to groups of 10 rats of each sex over the span of two years. Those fed the corn developed more tumors and died earlier. However, considering the frequency with which these rats develop cancer anyway (>70%), and the long duration of the study, tumors would be expected to be very common. In fact, it wouldn’t be hugely surprising if all of them had developed tumors in that time frame. With these factors taken into account, the experiment should have included at least 65 rats of each sex in order to be statistically sound. (6) The paper has since been retracted by the journal Food and Chemical Toxicology. (7)
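The sample-size problem can be made concrete with a small simulation. This is a toy model, not a reconstruction of the study: it just asks how often two identically untreated groups of 10 rats, each with the ~70% background tumor rate mentioned above, would differ noticeably by chance alone:

```python
import random

random.seed(42)

BACKGROUND_RATE = 0.70   # lifetime tumor incidence in these rats, per the text
GROUP_SIZE = 10
TRIALS = 100_000

def tumor_count(n: int, p: float) -> int:
    """Number of rats in a group of n that develop tumors at background rate p."""
    return sum(random.random() < p for _ in range(n))

# How often do two *untreated* groups differ by two or more tumors anyway?
big_chance_gaps = sum(
    abs(tumor_count(GROUP_SIZE, BACKGROUND_RATE)
        - tumor_count(GROUP_SIZE, BACKGROUND_RATE)) >= 2
    for _ in range(TRIALS)
)

print(f"Untreated groups of 10 differ by >=2 tumors in "
      f"{big_chance_gaps / TRIALS:.0%} of trials")
```

Roughly half the time, two groups with no treatment difference at all diverge by two or more tumors, which is why such small groups cannot distinguish a real effect from noise.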

A joint statement has been released by the French national scientific academies denouncing work by the same author. (8) This is compelling, considering that GMOs have been largely banned throughout Europe for political reasons. Clearly, the anti-GMO activists in Europe have far more political power than agricultural biotech companies like Monsanto. Yet even in these countries, scientists agree that Séralini is unreliable. It’s clear that Séralini is not a respectable source.

However, Séralini may not be incorrect in principle, even if his methodology is sloppy. The comparison of Roundup to caffeine may not be a valid one. One concern regarding Roundup is that it may be an endocrine disruptor. The endocrine system includes a number of glands that release hormones, like testosterone, estrogen, and thyroxine. These can cause physiological changes at incredibly low concentrations, sometimes below the picomolar range. So a compound that disrupts the endocrine system could also potentially be harmful at very low concentrations. Additionally, the linear dose-response assumptions that hold for many toxins may not hold for Roundup if it proves to be an endocrine disruptor, because endocrine disruptors often demonstrate non-linear relationships between dose and effect. In fact, some endocrine disruptors may have non-monotonic dose-response curves (NMDRCs), in which lower doses are sometimes more harmful than higher doses. (9)

How can this be? As I wrote in my last post, one of the Bradford Hill criteria for demonstrating epidemiological causation is supposed to be a dose-response relationship. However, this should not be interpreted as an assumption of linearity. If a mechanism for non-linearity exists, it should not be rejected outright. There are a number of potential causes of NMDRCs. For one, an endocrine disruptor may cause cancer cells to proliferate at low doses, but kill them at higher doses. There may also be a negative feedback mechanism, as when the abundance of a hormone inhibits its own production. Other mechanisms are more complex. One, demonstrated in prostate cell lines, can involve two different populations of cells with different types of receptors on their surfaces. One population rapidly multiplies at low doses, while the other divides more slowly at higher doses. At intermediate doses, the proliferation of one population compensates for the slower proliferation of the other. However, at low doses and at high doses, only one type of cell is affected, so no balancing occurs.
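A toy version of that two-population mechanism shows how a non-monotonic curve can arise. Every parameter below is invented purely to illustrate the shape; nothing is fitted to real data:

```python
def hill(dose: float, half_max: float, n: int = 2) -> float:
    """Fractional receptor occupancy at a given dose (Hill equation)."""
    return dose**n / (dose**n + half_max**n)

def net_effect(dose: float) -> float:
    # Population A: proliferates via a high-affinity receptor (saturates early).
    growth = 1.0 * hill(dose, half_max=1.0)
    # Population B: suppressed via a low-affinity receptor (kicks in at high doses).
    suppression = -1.5 * hill(dose, half_max=10.0)
    return growth + suppression

for dose in [0.1, 1, 3, 10, 30, 100]:
    print(f"dose {dose:>5}: net effect {net_effect(dose):+.2f}")
```

The combined response rises at low doses, peaks at intermediate ones, and turns negative at high doses, which is exactly the kind of curve a linearity assumption would miss.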

It seems there are many dangers to assuming a linear dose-effect relationship for endocrine disruptors. One study shows Roundup can cause a certain type of breast cancer cell (T47D cells) to grow more quickly, although the other type of breast cancer cell tested did not respond. Others have found similar evidence of Roundup interacting with estrogen receptors and altering estrogen production. This could translate into an increased risk of reproductive problems, such as miscarriage and premature birth. Higher levels of estrogen production are also associated with increased breast cancer growth. This actually seems to be a credible risk. (10)(11) Through the same mechanism, Roundup also seems to reduce testosterone production in rats. (12) As with many of the issues with Roundup, it seems that much, if not most, of the negative effect is not due to glyphosate alone, but involves the numerous other additives in the mix.

One study found that when treated with the full Roundup mix, cells produced 40% more estrogen, followed by an eventual decline in estrogen production. However, as Dorothy Bonn (13) admits, it is uncertain how the results of cell culture tests translate into effects on cells in the human body. She notes that serum proteins can help clear the body of such chemicals. This, coupled with the lack of strong evidence for ill effects in most animal and human tests, may suggest that these effects are mitigated in vivo (in the body) versus in vitro. Even so, it seems that whatever the effect is, it may not be negligible, as has been widely assumed. Previous research does seem to have underestimated two things: (1) the toxicity of the entire Roundup formulation versus the active ingredient (glyphosate) alone, and (2) the extremely low dose at which an endocrine disruptor can alter cell signaling. To date, studies monitoring testosterone levels in men over their lifespan have tentatively identified a gradual decline that may be associated with endocrine disruptors. (14) However, more research on the subject is required. At least one review (already cited [3]) does take these things into account, and still concludes that Roundup is safe. It questions whether the concentrations of Roundup used to demonstrate endocrine disruption reflect the low doses human consumers are exposed to. Not to mention, a cell culture is not a human being.

Where possible, I think it’s clear that exposure to these things should be minimized. However, even as we acknowledge the danger, we must remember not to panic before we quantify it. After all, we are sometimes exposed to natural endocrine disruptors, like the phytoestrogens in soy. Some have argued that even these are harmful, while others contend that they can actually have health benefits. (15) So does Roundup have a greater effect on the endocrine system than phytoestrogens? I don’t believe the answer to that question is clear at this time.

It remains to be seen whether Roundup is any more toxic than the alternative herbicides it has displaced. Studies have found Reglone and Stomp to be more genotoxic than Roundup. (16) From what I can tell, there is literature finding health issues with pretty much any herbicide out there, and yet this may not justify an abandonment of herbicides. In the early 2000s, the most abundant herbicide in the U.S. water supply was atrazine, which is also an endocrine disruptor, with at least equal evidence of toxicity. (17)(18) Yet atrazine is not particularly associated with GMOs. Since the spread of GMOs, atrazine has been largely displaced by Roundup. Clearly, agrochemicals can be just as harmful on conventional crops as on GMOs. This makes me wonder whether the problem is really specific to GMOs, or lies with our current standards for agrochemicals in general.

So Roundup is suspect, at least. Yet this is a potential problem with an agrochemical, not with any genetic modification itself. Roundup-ready maize is genetically altered to tolerate Roundup, but there are equally harmful herbicides that are not associated with GMOs. More to the point, it is not the altered genome of the plant that is harmful. Even if we do discover extreme toxicity issues with Roundup, it is still a dubious argument against genetic modification.

Bt Toxin

GMO corn does seem to get singled out a lot by activists. When they’re not complaining about Roundup-ready corn, they’re usually complaining about Bt corn. There are a variety of Bt crystal (Cry) toxins normally produced by the soil bacterium Bacillus thuringiensis during sporulation. Different Bt toxin genes are used in genetic modification to target different insects. These crystallized proteins are ingested by insects, where they encounter a very different environment from the human gut. The human gut is highly acidic, with a pH of about 2, whereas many insects have an incredibly basic digestive system, with a pH of around 10, or even 11. (19)(20) Even subtle changes in pH can change the distribution of charges along a protein molecule, causing it to denature, or rearrange its three-dimensional structure. So the Bt Cry toxin behaves drastically differently in the gut of an insect than it would in that of a human. In fact, you would be hard-pressed to find any protein that is functional at both pH 2 and pH 11. In addition to denaturing the Bt toxin, the high acidity of the human stomach also rapidly breaks it down. (21) That’s just one of many differences between insects and humans.
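Since pH is a base-10 logarithmic scale, the gap between pH 2 and pH 10 is bigger than it sounds; one line of arithmetic makes it concrete:

```python
# pH = -log10([H+]), so each pH unit is a tenfold change in H+ concentration.
stomach_pH = 2        # human stomach, from the text
midgut_pH = 10        # insect midgut, from the text

fold_difference = 10 ** (midgut_pH - stomach_pH)
print(f"The human stomach has {fold_difference:,} times the H+ concentration "
      f"of an insect midgut")
```

A hundred-million-fold difference in hydrogen ion concentration is the chemical environment gap a Cry protein would have to bridge.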

The Bt Cry toxin acts by binding to receptors on insect gut epithelial cells and forming pores in the cell surface that allow water and harmful ions to flood the cell. (22) So even supposing Bt Cry proteins survived vertebrate stomach acid, our epithelial cells would need to have receptors that are recognized by the Bt Cry protein. This seems unlikely, considering that many Bt Cry toxins target only a few species of insect. Tests on Bt toxin have shown a lack of toxicity against mammalian lymphocytes, erythrocytes, bacteria, yeast, and brine shrimp, strongly supporting a narrow range of target organisms. (23) Unsurprisingly, animal studies on Bt toxin demonstrate its safety repeatedly. (24)(25)(26)

It is a curious rallying point for the anti-GMO movement anyway, considering that the use of Bt toxin as a pesticide predates genetic engineering. As early as the 1920s, it was used as a pesticide spray, without any protest. Only later were plants genetically engineered to produce it. (27) It seems the activists care more about the means of introducing a pesticide than they do about the pesticide itself.

In fact, it’s very difficult to find a pesticide with less evidence of toxicity to humans than Bt toxin. Unlike many pesticides, it also breaks down easily in soil, sparing the environment the sort of contamination commonly caused by pesticides. (28)

Unlike Roundup tolerance, this reduction in spraying is a direct result of genetic modification. On the internet, you see people argue back and forth over whether GMOs increase or decrease the amount of chemicals used. On the one hand, you have Roundup-ready crops; on the other, there are crops that have been engineered to require fewer chemical pesticides. In reality, both are possible outcomes of genetic modification. If we stuck with Bt crops alone, we would be reducing pesticide use, but Roundup-ready crops unfortunately erase these gains by driving up herbicide use. (29) The fact remains: some genetic modifications can and do reduce chemical use. For example, it is estimated that Bt cotton has reduced pesticide use in cotton by about 19.4%. (30)

CaMV 35S Promoter

Another of the first GMO scares I was exposed to involved the claim that a virus had been discovered in GMO cauliflower. This sounded pretty fishy from the start, and upon research, it doesn’t get much more credible. I was unsurprised to find that a scientific paper had been misrepresented, as they so often are. The paper cited actually suggests that a fragment of a single viral protein may get translated in the cauliflower plant. (31)

In short, scientists add the 35S promoter from the cauliflower mosaic virus (CaMV) genome to a DNA sequence that they want to be strongly expressed. The virus uses this promoter to trick the host into transcribing the viral genome, which means we can also use it to recruit the plant’s RNA polymerase for our own purposes, causing the plant to produce mRNA for whatever gene is attached to the 35S promoter. The concern stems from the fact that the promoter overlaps with a particular viral gene (gene VI) coding for the protein P6, so the overlapping fragment might cause the plant to produce a fragment of P6.

More specifically, under the scenario supported by the authors, the fragment might be incorporated into a plant protein, creating a sort of chimeric or hybrid protein, particularly with domain 1 of the P6 protein. While you would expect a chimeric protein to be fairly non-functional, the fact that P6 has so many different functions makes it difficult to ascertain whether domain 1 can do anything on its own. Evidently, the P6 protein allows viral clusters to transport themselves along microfilaments throughout a plant cell. It also counteracts RNA silencing, which halts viral RNA in its tracks before it can be copied or translated. Finally, it interferes with cell signaling. With so many diverse functions, it’s understandable why people might be concerned about a fragment having negative effects on plant health.

However, there is strong evidence that a partial P6 protein would have limited impact on a plant, much less a human. The DNA coding for the N-terminus of the P6 protein is not included in any GMO crop, so any chimeric protein would exclude it. It has been demonstrated that without this segment, interference with RNA silencing and with salicylic-acid cell signaling (via PR1a) is eliminated. So even if such a protein were abundant in GMOs, which has yet to be shown, we would expect its function to be badly impaired. For a detailed, fragment-by-fragment look at the P6 protein, there’s a good paper on PubMed. (32)

Viruses require multiple complete proteins working in concert, and most of those proteins don’t work well unless the virus can first gain entry into a cell, much less while immersed in stomach acid. Viruses are also highly specific in which type of organism they infect. After all, they have to trick their host’s machinery into copying them, so they are usually tailored to a host, or a group of similar hosts. Of course, viruses are very adaptive, and host-switching does happen. However, switching to a distantly related host is a rare event, even for a virus. Only three families of virus are known to include both animal and plant pathogens. This implies that at some point in the evolutionary history of those families, a transmission between plant and animal did occur. Plant viruses often use animals, particularly insects, as transmission vectors. However, actual entry of a plant virus into an animal cell appears to be an extremely rare event, even on evolutionary timescales. No plant virus is currently known to cause us harm, with the possible exception of Pepper mild mottle virus (PMMoV) from chili peppers. Some evidence suggests it can cause abdominal pain and fever in humans, although that might just be from the chili peppers themselves. (33)

One big issue with the idea of P6 protein toxicity to humans is that viruses are part of nature. Even if you eat organic, home-grown plants, those plants will contain viral genetic material and viral proteins. If those plants include cauliflower, turnips, or even potatoes, then you may very well be ingesting complete CaMV P6 proteins, not just fragments. Naturally infected plants can carry as many as 10^5 copies of the 35S promoter per cell, whereas a genetically modified plant will generally contain only a few copies per cell. So if the CaMV promoter region really were dangerous, you would face far more danger from plants naturally infected with the virus than from any GMO. Despite frequent human exposure to CaMV and other plant pararetroviruses, no evidence of danger to humans has been found. (34) In fact, of the tens of thousands of viruses isolated from the human gut in one study, the majority appeared to be plant viruses, largely ingested from crops. (35) So it seems very disproportionate to worry about a function-impaired fragment of a viral protein when we’re all constantly ingesting plants that are chock-full of natural virus. Yum!
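The copy-number gap is worth making explicit. The 10^5 figure is from the text; the “few copies” value for a transgenic plant is my own loose reading of it:

```python
# 35S promoter copies per cell: natural CaMV infection vs. a transgenic plant.
natural_infection_copies = 10**5   # per the text
transgenic_copies = 3              # "a few copies per cell" -- rough assumption

ratio = natural_infection_copies / transgenic_copies
print(f"A naturally infected plant carries roughly {ratio:,.0f} times "
      f"as many promoter copies per cell as a GMO")
```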

So the viral DNA in some GMO crops faces three hurdles to being harmful to humans. The first is the barrier between plants and humans, which are very different hosts. Second, simply on an evolutionary basis, we can assume that our bodies are equipped to withstand the many plant viruses we and our ancestors have always ingested in our food. The final hurdle is the fragmentary nature of the genetic material, which is only a small part of the total virus. Unsurprisingly, the main concern of the authors who initially warned of a chimeric P6 protein was that it might be allergenic, not that it might somehow mimic a viral infection in humans. However, they found little evidence of allergenicity. Even if some were eventually found, potential allergenicity is hardly a deal-breaker where food is concerned. Just look at peanuts and shellfish.


Benefits and Comparisons to Conventional Crops

A lot of this talk of GMO danger overshadows the question of potential benefits. After all, every decision in society comes down to cost-benefit analysis. These benefits are particularly compelling for impoverished nations. Rice plays a crucial role in feeding about half of the world’s population, and in many nations, farmers devote a large proportion of their land to it. Studies on GM rice have revealed immense potential benefits. The reduction in pesticides associated with insect-resistant rice is associated with measurably improved health in Chinese farmers. (36)

The beta-carotene-producing GMO strain known as golden rice can help with the worldwide problem of vitamin A deficiency. In India alone, it has the potential to prevent 40,000 child deaths a year, not to mention health problems such as blindness. (37) In addition to vitamin A, GM plants have been produced with elevated levels of vitamin C and vitamin E. (38) There are, of course, vegetables with plenty of these vitamins already. However, in a world where many people subsist mainly off a single starchy crop, such as rice, wheat, potatoes, or corn, it seems more beneficial to increase the nutrition of those crops. People who are hungry and just want calories don’t want to swap rice for carrots, but vitamin-A-enriched rice may be an easier sell.

Obviously, we have been genetically altering our food for some time through selective breeding. To some anti-GMO activists, this is an outrageous comparison. They seem to assume that artificial selection is somehow more natural, or inherently safer. This is just a vague gut feeling on their part, though, not an established fact. Artificial selection does rely on alleles already present in a population, so it could be considered more natural in that sense. However, many of the alleles it selects for would never persist in nature. Additionally, since artificial selection doesn’t knowingly target specific genes the way genetic modification does, you generally alter the plant in far more ways than you are aware of. Artificial selection has been used to drastically alter the entire genome of some plants, far beyond the level of change that has been attempted with GMOs. The source of these changes is the natural gene pool of the species, but there are far more of them. For just one example, the wild plant Brassica oleracea has been manipulated to create kale, collard greens, broccoli, Brussels sprouts, cauliflower, and cabbage. (39) Whenever you have genetic manipulation on that level, there is potential for unpredictability.

As for being inherently safer, many of the same potential issues with GMOs can be found in artificially selected crops. The recent rise in celiac disease has been linked, not to GMO wheat, but to hybridized wheat. (40) This wheat was produced just as all of our crops were produced: by selecting strains with desirable traits over multiple generations. Yet despite this “natural” approach to genetic modification, the result has been an increase in celiac disease epitopes in modern wheat gluten. (41)

An epitope is sort of like a bar code for the immune system. In just the past hundred years, our wheat has become more prone to triggering an immune response, leading to celiac disease. This is reminiscent of the concerns over the P6 protein fragment being allergenic, but with more evidence. Absurdly, some anti-GMO activists actually confuse the issue and argue that the rise in celiac disease is caused by GMO wheat, a product that does not even exist on the market! Yet somehow I doubt they would turn against artificial selection with the same fervor, even if it were possible to correct them.

Ideally, the two approaches should not be seen as mutually exclusive. Breeding can be equal or even superior to genetic modification in some areas. Current attempts to produce better drought-resistant strains of wheat have fared best where selective breeding is involved. It seems unlikely that artificial selection will ever be rendered useless, because it allows you to select for traits whose mechanisms are far beyond your understanding, whereas genetic modification is limited to constructing mechanisms we understand reasonably well. But what if you want to combine drought-resistance mechanisms from multiple plant species? What if the plants are not inter-fertile? Genetic modification would allow you to import adaptations across species, or even combine them all into one plant. After the genes were inserted, you could smooth things out with a bit more selective breeding, possibly breeding around any fitness problems caused by the new gene(s). So it seems likely that both techniques together can be superior to either one alone.

As with many controversial issues in science, it’s risky to unequivocally endorse either side as 100% correct. Even a broken clock is right twice a day. There are potential risks in engineering crops to be more resistant to agrochemicals, but there are also potential benefits in terms of yield, nutrition, and chemical use. Even though the anti-GMO camp has some interesting arguments, based on current knowledge, the pro-GMO camp seems more correct overall.

1) http://www.annualreviews.org/doi/abs/10.1146/annurev.arplant.50.1.473

2) http://npic.orst.edu/factsheets/glyphotech.html#references

3) http://www.ncbi.nlm.nih.gov/pubmed/10854122

4) http://www.inchem.org/documents/ehc/ehc/ehc159.htm#SectionNumber:7.6

5) http://onlinelibrary.wiley.com/doi/10.1002/j.1552-4604.1967.tb00034.x/abstract;jsessionid=56520B78D423AC385FEB4F3D208B8267.f04t03

6) http://www.nature.com/news/hyped-gm-maize-study-faces-growing-scrutiny-1.11566

7) http://www.sciencedirect.com/science/article/pii/S0278691512005637

8) http://www.academie-sciences.fr/presse/communique/avis_1012.pdf

9) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365860/

10) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1257596/

11) http://www.sciencedirect.com/science/article/pii/S0278691513003633

12) http://www.ncbi.nlm.nih.gov/pubmed/20012598

13) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1257636/

14) http://www.ncbi.nlm.nih.gov/pubmed/19396984

15) http://www.ncbi.nlm.nih.gov/pubmed/21175082

16) http://mutage.oxfordjournals.org/content/21/6/375.full

17) http://pubs.usgs.gov/circ/circ1225/pdf/

18) http://www.ncbi.nlm.nih.gov/pubmed/16967834

19) http://jeb.biologists.org/content/172/1/355.short

20) http://www.ncbi.nlm.nih.gov/pubmed/11171351

21) http://www.ask-force.org/web/Bt/Herman-Rapid-Digestion-2003.pdf

22) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1899880/

23) http://www.hindawi.com/journals/bmri/2014/810490/

24) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3678139/

25) http://www.ncbi.nlm.nih.gov/pubmed/17050059

26) http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0036141

27) http://www.annualreviews.org/doi/abs/10.1146/annurev.arplant.58.032806.103840

28) http://www.ncbi.nlm.nih.gov/pubmed/19295059

29) http://www.enveurope.com/content/24/1/24

30) http://afrsweb.usda.gov/sp2userfiles/person/4056/naranjoetal.btbook2008.pdf

31) http://www.landesbioscience.com/journals/gmcrops/2012GMC0020R.pdf

32) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3836500/

33) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3550769/

34) http://www.microbecolhealthdis.net/index.php/mehd/article/viewFile/8034/9373

35) http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0040015

36) http://www.ncbi.nlm.nih.gov/pubmed/15860626

37) http://www.ncbi.nlm.nih.gov/pubmed/20643233

38) http://www.ncbi.nlm.nih.gov/pubmed/12940549

39) https://botanistinthekitchen.wordpress.com/2012/11/05/the-extraordinary-diversity-of-brassica-oleracea/

40) http://celiacdisease.about.com/od/celiacdiseasefaqs/f/Genetically-Modified-Wheat.htm

41) http://www.ncbi.nlm.nih.gov/pubmed/20664999


Epidemiology and RF-EMF Radiation

Epidemiology, the study of health and disease in populations, is a thorny subject. Because it’s very scary to think of things secretly influencing our health, the public often responds to epidemiological studies with fear, without any attempt to place a study in its proper context. One epidemiological study showing a correlation proves very little. The so-called Bradford Hill criteria list what you generally need in order to make a case that something is causing death or disease. These include strength, consistency, specificity, temporality, biological gradient, plausibility, coherence, experiment, and analogy. Without these standards, you can make anything appear harmful.

Let’s test these criteria on an example: cell phones and cancer. To be sure, you can find papers out there claiming to link cell phones to brain cancer. Some of them seem to have a statistically strong correlation, which satisfies the first criterion, strength. (1) Some people see this and immediately go out to buy one of those nifty cell phone pads to protect themselves from the deadly radiation. You can hardly blame them. Fear is the mind-killer, and when a scientific paper tells you to be afraid, it can be very hard to think or investigate further.

So let’s take a look at a few of the other criteria. What about consistency? Well, that’s where it all starts to fall apart. There has been study after study finding no correlation between cancer and RF-EMF radiation. The same is true for the electromagnetic fields generated by other electronic devices.

At worst, some evidence supports an increased risk of childhood leukemia at exposures above a time-weighted average of 0.4 microteslas. However, this is a fairly high level of EMF exposure. It is possible to remain well below it while still using plenty of electronics, cell phones included. It is also the strongest correlation that can be found at this time, with other correlations appearing much weaker. Other types of cancer and health problems do not show a consistent correlation with EMF exposure. (2) Another of the criteria, biological gradient, says that the incidence of disease should be more or less proportional to the level of exposure. As the review cited explains, some of the evidence for EMF hazard fails this standard as well.

As for plausibility, that’s even worse. It’s true that EMF can induce electrical currents in the body, which is one supposed mechanism of damage. However, the natural electric currents produced in your body by nerve signals can be much stronger than anything induced by consumer electronics, as are the electric fields we are exposed to from the Earth itself. Because of this, the International Agency for Research on Cancer (IARC) goes so far as to say that there is “no scientific explanation” established for the association between >0.4 microtesla exposure and childhood leukemia, and suggests that it may be due to selection bias. (4)(5) This is not an unreasonable conclusion, since a mechanism for EMF-caused cancer is practically nonexistent. Furthermore, the IARC points out that even >0.4 microtesla EMF seems to have no effect on childhood brain cancer, solid tumors, or adult cancers of any kind. This is a problem, because we would need a mechanism not only for EMF to cause cancer, but for it to cause only one type of cancer. So perhaps the selection-bias explanation is not so far-fetched.

Hold on, you might say. Everyone knows radiation causes cancer, and RF-EMF from cell phones and other electric appliances is radiation, isn’t it? Well, sort of. RF-EMF radiation is non-ionizing, which by definition means it lacks the energy necessary to remove an electron from an atom. (6) It therefore cannot directly damage DNA the way ionizing radiation (e.g. a gamma ray) does. Yet as I pointed out in my post on energy generation, even a fair amount of ionizing radiation is completely natural. The potassium-40 in the ocean, in bananas, and in your own body emits gamma rays. In fact, some regions have natural background radiation high enough that they would be evacuated if the cause were anthropogenic, but since it’s natural, people live there. In general, people living in places with elevated natural background radiation, like Ramsar, Iran, show no increase in cancer rates. (7)

If the body can handle such high levels of natural ionizing radiation, then what chance do the weaker non-ionizing frequencies have of doing any real damage? You would increase your risk of brain cancer more by holding a banana to your head, since the banana actually contains potassium-40, which gives off gamma rays. Whatever mechanism you think a cell phone has for causing cancer, it should apply even more to a banana. For comparison, gamma rays have orders of magnitude more energy (joules per mole) than visible light, whereas radio waves and microwaves have orders of magnitude less. (6)
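To put rough numbers on that comparison, here is a quick sketch using the Planck relation E = hf. The frequencies are order-of-magnitude illustrations I have chosen myself, not values from the cited source:

```python
# Rough photon energies E = h * f, scaled to joules per mole of photons.
# Frequencies below are order-of-magnitude illustrations, not measurements.
h = 6.626e-34    # Planck's constant, J*s
N_A = 6.022e23   # Avogadro's number, 1/mol

frequencies_hz = {
    "FM radio (~100 MHz)":      1e8,
    "cell phone (~2 GHz)":      2e9,
    "visible light (~600 THz)": 6e14,
    "gamma ray (~10^20 Hz)":    1e20,
}

for name, f in frequencies_hz.items():
    e_per_mole = h * f * N_A  # joules per mole of photons
    print(f"{name}: {e_per_mole:.2e} J/mol")
```

By this estimate, a mole of gamma-ray photons carries tens of billions of joules, visible light carries hundreds of thousands, and radio frequencies carry a fraction of a joule, which is the "orders of magnitude" gap described above.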

Thus far, laboratory testing shows no evidence for EMF-caused DNA damage, even at exposure levels well above 0.4 microtesla. In fact, many organisms live perfectly well in the presence of fields measured in thousands of millitesla, and there are particle accelerator workers who are at times exposed to 300 mT fields with no increase in cancer. Studies have failed to find any association between static magnetic fields and DNA strand breaks, chromosome aberrations, sister chromatid exchanges, cell transformation, mutations, or micronucleus formation. (8)

The WHO has classified RF-EMF as a possible carcinogen (Group 2B). Of course, people are fearful when they hear that something is a “possible carcinogen,” but after so much exhaustive research, the classification actually reflects a failure to find much in the way of evidence.

So when it comes to consistency and plausibility, there are some very large marks against EMF as an etiological agent. That doesn’t necessarily mean it’s not harmful at all, but the list of things that *might* be harmful is infinite. Before we start jumping at the shadows of small risks that might be there, we need to worry about the ones that are large and obvious enough to measure. In my previous posts, I have expressed similar views about vaccines and nuclear power, while acknowledging the measurable dangers of infectious disease and air pollution. In my next post, I will apply the same standards to GM crops. I have my preconceptions about this subject, but as always, I do not yet know for certain what my position will be. Wish me luck!

1) http://www.spandidos-publications.com/10.3892/ijo.2013.2111#b44-ijo-43-06-1833


3) http://www.ncbi.nlm.nih.gov/pubmed/10612900

4) http://www.iarc.fr/en/media-centre/pr/2001/pr136.html

5) http://www.arpa.emr.it/cms3/documenti/cem/IARC.pdf

6) http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch23/radiation.php

7) http://www.inderscience.com/info/inarticle.php?artid=7892

8) http://www.mcw.edu/radiationoncology/ourdepartment/radiationbiology/Static-Electric-and-Magnetic-F.htm


A lot of people online seem to be afraid of vaccines. The belief that they are harmful is a lot more pervasive than I had realized. I will have to delve into quite a few scientific papers to be sure I am giving this subject a fair examination. However, before I get into the science, let’s deal with the obvious: a lot of anti-vaccine people are naturally distrustful of any evidence you can provide them. What I call science, they call “Big Pharma” propaganda. It doesn’t matter if it’s a government-funded study, the FDA, or even an international source like the WHO. To them, these sources are all suspect, but some “natural news” website is totally trustworthy. How plausible is this conspiracy theory?

Big Pharma

A lot of people will claim that vaccines are promoted by these organizations because they’re in the pocket of “Big Pharma.” To be sure, the pharmaceutical industry does spend a lot of money on lobbying, which influences U.S. legislation. What is the outcome of all this lobbying, however? Does it really appear to have pacified the FDA into looking the other way on unsafe products? Not by any measure that I can find.

First of all, it is incredibly difficult for a drug to get FDA approval. (1) To summarize, a drug starts out with 1-6 years of preclinical testing. If it is one of the 1 in 1000 that looks promising, the company will file an Investigational New Drug (IND) application. Phase I clinical trials test the safety of the drug and weed out about 70% of the candidates, at a cost of about $15 million. Phase II tests effectiveness, proper dosage, and safety, whittling the field down to just 14%; it costs about $23 million. Then comes Phase III, which leaves only about 9% of the original drugs and costs about $86 million. Just 8% make it to FDA approval. All of this takes about 7 to 17 years. After that comes Phase IV, the post-market surveillance: even after all of this, the drug can still be recalled after it hits the market. The FDA link also points out that testing requirements have increased over time.
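As a back-of-the-envelope sketch, the attrition numbers quoted above can be laid out as a funnel. The fractions are the ones from the FDA link, applied naively to a hypothetical batch of 100 IND filings:

```python
# Naive funnel using the survival rates quoted above. Fractions are of
# drugs that enter clinical trials after an IND filing.
stages = [
    ("enter clinical trials", 1.00),
    ("survive Phase I",       0.30),  # ~70% weeded out
    ("survive Phase II",      0.14),
    ("survive Phase III",     0.09),
    ("reach FDA approval",    0.08),
]

n_ind = 100  # a hypothetical batch of 100 IND filings
for stage, fraction in stages:
    print(f"{stage}: ~{fraction * n_ind:.0f} of {n_ind}")
```

Roughly 8 of every 100 drugs that begin clinical trials survive to approval, and that is before counting the 999 of 1000 preclinical candidates that never file an IND at all.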

“Over time there has been a clear tendency for FDA regulations and requirements to expand and multiply. In 1980, the typical drug underwent thirty clinical trials involving about fifteen hundred patients. By the mid-1990s, the typical drug had to undergo more than sixty clinical trials involving nearly five thousand patients.”

As the testing requirements have increased, so has the cost of getting a new chemical entity (NCE) onto the market. Clearly, R&D is more expensive than ever. (2) If “Big Pharma” really were calling the shots, willing to increase profits without regard for human life, wouldn’t they be cutting corners and stripping away the costly and laborious regulations surrounding FDA drug approval, rather than letting them increase? Stripping away the costly requirements of clinical trials would increase their profits far more than the promotion of vaccines, which are not terribly profitable anyway. In fact, pharmaceutical companies are increasingly abandoning vaccines, which cost as much as or more than any drug to produce, yet can grant immunity to a disease with only a single treatment. This cuts down on the profit margin considerably, compared with a drug that must be taken multiple times. Additionally, the public hysteria against vaccines has taken its toll on the industry. In particular, the swarm of lawsuits, many unsubstantiated, has added to the cost. The burden of proof for safety placed on a vaccine by FDA regulations is also heavier than for many other drugs. This paper goes into depth on the issues that have made vaccines unprofitable. (3)

Because so many companies are leaving the vaccine business, the paper recommends congressional action to encourage pharmaceutical companies to produce more vaccines, to compensate for the low profit margin. If not, then companies will continue to abandon vaccines in favor of other products, and it will be the public, not the pharmaceutical companies, who suffer. The paper goes on to say:

“Among the four large companies still making vaccines… none has revenue from vaccines that exceeds 10 percent of total revenue. All four companies could stop making vaccines tomorrow without much impact on their bottom lines.”

So vaccine promotion is not so beneficial to “Big Pharma.” Is this really the cash cow that all of that lobbying money is used to promote, while the costs and regulations on FDA drug approval continue to increase? In fact, you could make a more convincing conspiracy theory that Big Pharma is actually behind the anti-vaccine movement, because they would profit far more from a vaccine-preventable outbreak than they would from vaccines. (4) If you look at the charge that their lobbying is buying them decreased FDA scrutiny, the trends mentioned above just don’t bear it out. Claims that the pharmaceutical industry has turned the WHO into a puppet are even more absurd. If they can’t even control the FDA, how can we imagine that they are controlling an internationally funded organization like the WHO?

I’ve heard others claim that it’s the world governments, not private industry, that are pushing vaccines for some nefarious purpose, such as “population control”. It’s hard to respond to a view of the world that is so distorted. If it pleases them to believe that the world is secretly run by cackling comic book villains, there’s nothing I can do to dissuade them.

So what is all of that lobbying money buying? One thing that does correlate with lobbying expenditure is the price of drugs. With the cost of getting new drugs approved rising, this is to be expected to some degree. However, it is possible that they are raising prices by an unfair margin; everyone looks after their own interests, after all. Still, simply flooding the market with unsafe, untested products would serve their interests far less than simply raising prices. Considering that pharmaceutical companies face legal liability all the time, it’s hard to imagine they would want to expose themselves to even more of it. Even if they did decide to secretly purvey unsafe products, it’s hard to imagine why they would go to great lengths to do so with vaccines specifically, when simply switching to other products is often better business anyway.

What about the scientific literature itself though? There does seem to be an overwhelming tendency for papers funded by pharmaceutical companies to give positive reports on pharmaceutical products. However, on many issues, a majority of papers that are not funded by pharmaceutical companies also agree with them. (5) So you can’t exclude every “pro-pharma” source simply on the basis of funding. Additionally, if a paper can be shown to have good methodology and logic, and constructs its arguments from many sources that are not in question, then the funding behind it should not matter.  This sword cuts both ways as well; some anti-vaccine papers are funded by anti-vaccine organizations. Even Andrew Wakefield had his own interests to further, which are well-known. Would an anti-vaxxer be willing to dismiss these papers simply based on their source of funding as well?

Now, with some rightful doubt cast on the conspiracy theory itself, let’s have a look at the science. This is where it gets tricky, because it has long been acknowledged that vaccines can and do have negative side effects in certain instances. Far from being hidden, this knowledge has been the subject of much discussion in the scientific literature for decades.  So the real question is not whether or not complications exist, but whether or not they outweigh the benefits of vaccines.

Origin of Vaccine Fear

Obviously, there has always been some fear of vaccines, simply because all treatments carry risks. One of the first recognized issues with vaccines was that an attenuated virus in a vaccine can mutate to become virulent once more, in what is referred to as a reversion. In this case, the vaccine may cause the very infection that it was meant to protect against. This is only an issue with live attenuated vaccines. Other vaccines use a killed or inactivated virus, and some don’t even use the entire virus, just a particular subunit or molecule from it. Infection from these is virtually impossible. Even the live virus vaccines, however, use a weakened form of the virus that is normally harmless to someone with a healthy immune system. Often, it is an impaired immune system that allows an attenuated virus to undergo many rounds of replication, giving it an opportunity to mutate back to the more harmful genotype. A normal immune system would nip it in the bud before it had much opportunity to do this. (6)

Much of the current vaccine scare started, however, with the research of Andrew Wakefield in 1998. For reference, I’d like to direct you to this paper called “Vaccines and Autism: A Tale of Shifting Hypotheses.” (7) I found the title alone very compelling, because it does seem as though the anti-vaccination groups can’t make up their minds about how vaccines cause autism. Andrew Wakefield started it with his gastrointestinal inflammation hypothesis, but since then, people have suggested everything from ethyl mercury to autoimmunity. It’s almost as though the anti-vaxxers don’t really care about the mechanism. Any mechanism will do, as long as it can be blamed on vaccines. That doesn’t mean these claims should not be investigated, but the mere fact that the anti-vaccine crowd seems intent on blaming autism on vaccines regardless of the mechanism undermines their credibility. The paper summarizes just how uncompelling the Wakefield research was, and goes into depth on the many failed attempts to find any correlation with autism. It also addresses the proposed mechanisms. It’s as good a primer as any.

I honestly don’t see much point in addressing the Wakefield hypothesis, since it has been roundly rejected by everyone and really didn’t present much in the way of evidence. All he did was basically say, “Hey, I found a few kids who had stomach trouble and vaccines sort of around the time they were diagnosed with autism.” But it’s not strange that these would happen around the same time. He didn’t even include a control group of individuals who received no vaccine. It’s barely worth mentioning. However, the proposed links between autism and mercury, and between autism and autoimmunity, are currently being discussed in the medical literature, so I will address those.

Vaccines, Autism, and Ethyl Mercury

The first claim I heard linking vaccines to autism involved thimerosal, a preservative that is metabolized into ethyl mercury, and I don’t think that’s any coincidence. A quick search online shows that this is a pretty popular notion. At first glance, it makes some sense, because mercury is a heavy metal, and like many heavy metals, it is a neurotoxin. However, this is not proof in itself, because there are many differences between mercury poisoning and autism. For instance, mercury poisoning is strongly associated with tremors, muscle spasms, and ataxia, whereas autism is not. The differences don’t end there. Even the autopsied brains of autistic individuals are physically unlike those that have suffered from prenatal or postnatal mercury poisoning. For one thing, autistic brains tend to have greater than average volume, whereas mercury poisoning results in atrophy of the brain. (8) So mercury’s neurotoxicity proves very little, since that neurotoxicity doesn’t manifest itself in an autism-like manner.

Not all mercury compounds are of equal toxicity. The mercury-containing preservative found in some (though not all) (9) vaccines is thimerosal, which becomes ethyl mercury in the body. A lot of studies estimating mercury toxicity use methyl mercury, so anti-vaccine people also cite papers describing the toxicity of methyl mercury and then extend the same conclusions to ethyl mercury. However, whereas methyl mercury is eliminated from the blood with a half-life of about 44 days, ethyl mercury is eliminated with a half-life of only about 7 days. (10) The paper cited also adds that the highest level of blood mercury following vaccination in the children they tested was ≤8 ng/mL.
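The practical difference between those two half-lives can be sketched with simple exponential decay. First-order elimination is a standard pharmacokinetic assumption on my part, not something the cited paper spells out:

```python
# Fraction of a mercury dose still in the blood after t days, assuming
# simple first-order (exponential) elimination at the cited half-lives.
def fraction_remaining(t_days, half_life_days):
    return 0.5 ** (t_days / half_life_days)

for t in (7, 30, 60):
    methyl = fraction_remaining(t, 44)  # methyl mercury: ~44-day half-life
    ethyl = fraction_remaining(t, 7)    # ethyl mercury:  ~7-day half-life
    print(f"day {t:2d}: methyl {methyl:.0%} remaining, ethyl {ethyl:.0%} remaining")
```

By this sketch, a month after exposure over 60% of a methyl mercury dose is still circulating, while an ethyl mercury dose is down to about 5%.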

So how much is that? Well, according to the CDC (11):

In 2000, the National Research Council of the National Academy of Sciences determined that a level of 85 micrograms per liter (µg/L) in cord blood was associated with early neurodevelopmental effects. The lower 95% confidence limit of this estimate was 58 µg/L.

This limit was set by a study using METHYL mercury, so bear in mind, it will tend to overestimate ethyl mercury toxicity. Using this limit for thimerosal is therefore highly conservative, and using the 95% confidence limit makes it more conservative still. So we can safely say that the highest level of blood mercury observed in the study was less than a seventh of the lowest concentration that *might* be associated with neurological harm in children, and it stayed in the system for about a sixth of the usual time.

Still, just to be on the safe side, we may want to limit mercury exposure. It is good to know that vaccines intended for small children no longer use thimerosal. Only vaccines typically administered to adults now use this preservative, and adults’ larger body mass raises the safe dosage level considerably. Additionally, since the removal of thimerosal, autism rates have continued to increase. (12) (13)

There is a paper out there claiming to show that the rate of neurological disease has fallen since thimerosal was withdrawn, by a father-and-son team with the last name Geier. This paper disagrees with pretty much all data found elsewhere, and there have been exhaustive debunkings demonstrating how its authors arrived at a completely different conclusion from everyone else. Their techniques include creatively “imputing” (assuming) hypothetical autism cases that would have been uncovered during a longer follow-up period. Also, whereas other studies have measured the correlation between mercury dose in *individuals* and autism rates, the Geier study simply took the average Hg dose per person for each birth cohort and then compared just seven different cohorts to one another. So their graph has just seven points, and one of their axes is an average. They had no individual data on which people within each cohort had autism and which got vaccines. With these eccentricities, it’s no big mystery how they arrived at such a unique conclusion. A series of posts on Epi Wonk goes into depth on this, and the final post supplies links to three studies, in California, Denmark, and Japan, showing that autism has continued to rise since thimerosal removal. (14)

In addition to “Geier,” another name to watch out for is Tomljenovic. One widely circulated Tomljenovic paper resorts to ignoring cohort dates of birth and using different sources of data to get different results where necessary. (15) One thing you notice, actually, is that a lot of anti-vaccine studies are put out by just a handful of the same people. Not all scientific papers are equal, and a position that is being argued by a small, specific group is usually the weaker one. At the end of the day, though, you really have to read a paper and understand what is actually being measured in order to know if it really proves what it claims (or what others claim) it proves. Still, if you do ever get to researching vaccines yourself, pay attention to the names on each paper. I promise you will notice a trend if you do!

Vaccines, Autism, and Autoimmunity

Surprisingly, there is some evidence in medical literature showing possible links between vaccine-triggered autoimmunity and autism. At the very least, there is strong evidence that individuals with autistic-spectrum disorders are immunologically different, either due to environmental or genetic factors.

In particular, autistic individuals seem especially likely to have specific anti-measles antibodies, as well as elevated cytokines and antibodies targeting components of the nervous system. In a review covering the immunological factors behind autism (16), the authors find it plausible that both measles infections and measles vaccinations may increase the risk of autism in susceptible individuals by triggering an autoimmune response.

However, there might be other reasons that autistic individuals have unique antibodies. A hypothetical autoimmune reaction leading to autism may be dependent on certain genes being present. HLA genes, for instance, are a major factor in autoimmunity risk. These genes code for the receptors that immune cells use to detect foreign antigens, and to a large extent they also determine which autoantibodies you are likely to produce. I googled “HLA types autism,” and interestingly, I found a paper showing that certain HLA types are also common in autistic individuals. (17)

According to this paper, the autoantibodies against nervous system components may not even be the direct cause of autism, since small amounts of auto-antibodies may be present even in healthy individuals. In this case the presence of unique antibodies in autistic individuals may have more to do with them having common HLA types than with their vaccination history. So in itself, the fact that autistic individuals appear to have peculiar antibodies is not proof positive of causation by vaccines. It would be interesting to see a study screening non-autistic individuals with the same HLA types associated with autism, to see if they had the same auto-antibodies.

Another mechanism mentioned in the review involves a “maternal-fetal immune interaction.” Basically, components of the mother’s own immune system accidentally alter the neurological development of the fetus. In this mechanism, it is prenatal exposure to a pathogen that triggers the transfer of maternal cytokines (immune system signaling proteins) across the placental barrier. Many studies have found a strong correlation between prenatal infection and autism. (18) This mechanism would also explain the immunological peculiarities seen in autistic individuals, and it seems to have been reproduced in animal models. (19)

Needless to say, the field of autism spectrum disorder etiology is a large and complex one. It really deserves its own post. If I had to sum up the current scientific view, it involves many different factors, ranging from immunological causes and genetic predisposition (20) to environmental exposure to chemicals (21). Hell, in just the time it has taken me to write this post, another news story has broken claiming that elevated steroid hormones cause autism! (22)

This all makes sense, if you take into account the possibility that autistic spectrum disorders are a collection of similar disorders with separate causes.  If that’s the case, then the list of things that correlate with “autism” may be a very long one.

With current studies showing no correlation between autism and the MMR vaccine (23) (24), there seem to be plenty of other possible causes out there without having to blame vaccines. Even if there is a correlation, current data show that it may be a very small one, easily drowned out by other factors in epidemiological studies.

Vaccines have been linked to other autoimmune disorders as well. There does indeed seem to be a phenomenon known as ASIA (Autoimmune/Inflammatory Syndrome Induced by Adjuvants), in which vaccination causes autoimmunity. Thus far, four disorders have been linked to this vaccination hazard. Far from being stifled by “the conspiracy,” these risks are frankly discussed on PubMed. (25) However, most of the literature out there seems to consider these risks small and difficult to verify in epidemiological studies. For instance, studies on H1N1, HBV, and HPV vaccines have shown little or no increase in risk for autoimmunity. The risk is considered real, but nowhere near large enough to outweigh the collective benefits of all vaccines worldwide. (26)

Even once we accept that vaccines can play a role in autoimmune disorders, and even if we include autism among them, we still cannot dismiss vaccines altogether. Vaccines may cause autoimmunity because they stimulate the immune system, but so do the infections prevented by vaccines. If the measles vaccine causes an autoimmune reaction leading to autism, for instance, then what about an actual measles infection? Some claim that vaccines may carry a greater risk of autoimmunity than normal infections, because they often use an aluminum adjuvant to increase immune response. On the other hand, natural infections can also produce a much stronger immune response than vaccines do (27) (28). So the natural infections prevented by vaccines may pose an equal autoimmunity risk, if not a greater one. The link between viral infections and autoimmunity has been explored for some time. To quote a paper from the FASEB journal (29):

“Others have reported epidemiologic and serologic correlations between viral infections and autoimmune diseases like multiple sclerosis (MS), insulin-dependent diabetes mellitus (IDDM), bacterial infections, and ankylosing spondylitis (AS).”

How much might these disorders increase in an unvaccinated world? The implications may extend to autism risk as well. Even if we accept the most plausible link between autism and vaccines, the autoimmune mechanism, we still end up asking whether vaccine-preventable natural infections (particularly prenatal infections) pose a greater autism risk than vaccines do. If so, then some vaccines may actually reduce autism risk in the population at large! Such a conclusion was put forth by this paper: (30)

It points out the link between autism and congenital rubella infection, and estimates that 830 to 6,225 cases of autism spectrum disorder were prevented by the rubella vaccine from 2001 to 2010, to say nothing of the 8,300 to 62,250 prevented cases of congenital rubella syndrome. Autoimmunity can be an ugly thing, but it seems that vaccines do at least as much good as harm in that area.

Benefits and Cost-Benefit Analysis

This brings us to the final part of this post: the benefits of vaccines. This is what it all comes down to: a cost-benefit analysis. Even accepting the real risks of attenuated vaccine reversion and autoimmunity, we must ask ourselves how these compare to the diseases prevented by vaccines. Amazingly, some people claim that vaccines don’t even reduce infection rates. This is quite a claim. If you accept that the adaptive immune system exists, you must also accept that immunization is possible in principle. And if immunization is possible, wouldn’t the pharmaceutical companies find it more profitable and less risky to spend the millions of dollars devoted to vaccine research on making real vaccines? That’s not even counting the money they would have to be covertly spending on cover-ups, bribes, and employees they have to pretend to need. The amount of trickery involved would be astronomically large, because there is virtually no end of sources showing that vaccines reduce infection rates. You could read them all day.

Just for example, you can see the H. influenzae type b rate plummet in Scotland immediately following the introduction of a vaccine. (31) Anti-vaccine groups like to credit the decrease in disease following the introduction of vaccines to “improved sanitation,” yet this example involves an H. influenzae type b vaccine introduced in 1992. Perhaps they are claiming that Scotland did not discover sanitation until 1992? The examples of successful vaccines are too numerous to be ascribed to any one coincidence. Then of course there’s the famous example of poliomyelitis, which decreased from 16,000 infections per year to only 12 following the introduction of the polio vaccine. (32) Some paranoid individuals like to point out that many of the modern polio infections are from vaccines, but considering the trade-off from 16,000 down to 12, that seems a small price to pay.

Some will point out that vaccines do not guarantee 100% protection. At first glance, a protection rate of about 70% doesn’t sound all that promising. However, this misses the point entirely. Even with just 70% protection, you can wipe a virus off the map. This is because you don’t need to reduce the infection rate to zero; you just need to reduce it below the rate at which individuals leave the infected population, either through death or recovery. If the infection rate isn’t keeping up with the rate at which the infected are dying and recovering, the disease will gradually disappear from the population. So 100% immunity isn’t even necessary.
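That threshold logic can be sketched with a toy generation-by-generation model. If each case would otherwise infect R0 others, then vaccination with coverage c and protection e cuts that to R_eff = R0(1 − c·e), and the outbreak dies out whenever R_eff < 1. The R0 value here is hypothetical, chosen only to illustrate the threshold:

```python
# Toy model: an outbreak dies out whenever the effective reproduction
# number R_eff = R0 * (1 - coverage * protection) drops below 1.
def r_eff(r0, coverage, protection):
    return r0 * (1 - coverage * protection)

def cases_after(generations, r0, coverage, protection, initial=1000):
    cases = initial
    for _ in range(generations):
        cases *= r_eff(r0, coverage, protection)
    return cases

R0 = 3.0          # hypothetical: each case infects 3 others unchecked
PROTECTION = 0.7  # the "just 70% protection" discussed above

print(cases_after(20, R0, coverage=0.0, protection=PROTECTION))  # grows
print(cases_after(20, R0, coverage=1.0, protection=PROTECTION))  # shrinks
```

With full coverage, 70% protection pushes R_eff down to 0.9, and the hypothetical outbreak shrinks every generation; in general the required coverage is (1 − 1/R0)/protection, which is why imperfect vaccines can still eliminate a disease.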

Other benefits may include cancer reduction. The HPV vaccine, for instance, is associated with a 70% decrease in risk for cervical cancer. The human papillomavirus itself is believed to cause about 5% of cancers worldwide! (33)

Furthermore, an estimated 15% of cancers worldwide are caused by viral infections. (34) The potential to save lives by vaccinating against these viruses- based on cancer risk alone- is therefore staggering.

So the final verdict on vaccines is still pretty favorable. Vaccines can cause infection in rare cases, although they significantly reduce the likelihood of infection overall. They can also trigger autoimmunity, and there’s a small chance that some autistic-spectrum disorders can be caused by this mechanism. However, the countless infections prevented by vaccination can also trigger autoimmunity, so it’s hard to say whether there is a net loss or gain there. Also, the mercury-autism link is just pure nonsense.

Cancer and disease prevention, however, are a clear, massive net gain. These benefits would outweigh the risks even under very conservative scenarios. The CDC has attempted to quantify the benefits of vaccinating young children alone, just since 1994, and they come out to over 700,000 lives, hundreds of millions of illnesses, tens of millions of hospitalizations, and hundreds of billions of dollars. (35) Additionally, the cost of allowing a disease to return from the brink of extinction is potentially infinite, as it will accumulate with every generation that could have been spared from infection. Our children and our children’s children may one day ask why we did not eliminate some diseases when we had the chance. Everything in life carries risks. All decisions in human society come down to weighing costs and benefits. Don’t let every little risk dissuade you from a promising technology. Try to measure the risks for yourself, based on the best information available, and then ask if they are worth it. This path may be uncertain, but it is far better than allowing yourself to be ruled by gut feelings and fashionable philosophies. Both are poor substitutes for logic.

1)      http://www.fdareview.org/approval_process.shtml

2)      http://www.discoverymedicine.com/Michael-Dickson/2009/06/20/the-cost-of-new-drug-discovery-and-development/

3)      http://content.healthaffairs.org/content/24/3/622.full

4)      http://www.skepticalraptor.com/skepticalraptorblog.php/big-pharma-supports-antivaccine-movement-conspirac-vaccines-maybe-not/

5)      http://www.medscape.com/viewarticle/538044

6)      http://www.immunopaedia.org.za/fileadmin/pdf/Infant_poliovirus_vaccine_paralysis_10MAY12.pdf

7)      http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2908388/

8)      http://pediatrics.aappublications.org/content/111/3/674.long

9)      http://www.fda.gov/biologicsbloodvaccines/safetyavailability/vaccinesafety/ucm096228

10)   http://pediatrics.aappublications.org/content/121/2/e208.long

11)   http://www.cdc.gov/biomonitoring/Mercury_FactSheet.html

12)   http://www.ncbi.nlm.nih.gov/pubmed/12949291

13)   http://www.ncbi.nlm.nih.gov/pubmed/15877763

14)   http://www.sciencebasedmedicine.org/why-the-latest-geier-geier-paper-is-not-evidence-that-mercury-in-vaccines-causes-

15)   http://leftbrainrightbrain.co.uk/2013/07/10/comment-on-do-aluminum-vaccine-adjuvants-contribute-to-the-rising-prevalence-of-autism/

16)   http://www.ncbi.nlm.nih.gov/pubmed/16512356

17)   http://www.hindawi.com/journals/aurt/2012/959073/

18)   http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3068755/

19)   http://www.ncbi.nlm.nih.gov/pubmed/17913903

20)   http://www.pnas.org/content/103/45/16834.short

21)   http://www.ncbi.nlm.nih.gov/pubmed/22537663

22)   http://www.sciencedaily.com/releases/2014/06/140603092428.htm

23)   http://www.ncbi.nlm.nih.gov/pubmed/15366972

24)   http://cel.webofknowledge.com/InboundService.do?product=CEL&SID=2BEvC2fFJ1gZ28IwTLu&UT=000080812200012&SrcApp=Highwire&action=retrieve&Init=Yes&Func=Frame&SrcAuth=Highwire&customersID=Highwire&IsProductCode=Yes&mode=FullRecord

25)   http://www.ncbi.nlm.nih.gov/pubmed/20708902

26)   http://www.ima.org.il/FilesUpload/IMAJ/0/38/19310.pdf

27)   http://jcm.asm.org/content/40/5/1733.full

28)   http://www.ncbi.nlm.nih.gov/pubmed/21411604

29)   http://www.fasebj.org/content/12/13/1255.long

30)   http://www.ncbi.nlm.nih.gov/pubmed/21592401

31)   http://www.hps.scot.nhs.uk/immvax/Hib-Haemophilusinfluenzaetypeb.aspx

32)   http://www.ncbi.nlm.nih.gov/pubmed/6150330?dopt=Abstract

33)   http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2584441/

34)   http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994798/

35)   http://www.usatoday.com/story/news/nation/2014/04/24/cdc-vaccine-benefits/8094789/

Energy Generation Methods Examined

In my last post, I discussed the debate surrounding global warming and climate science. This was a natural segue into a discussion of energy and the various available energy generation methods. I’ve decided to split this discussion into three relevant categories: fossil fuels, nuclear, and renewables. For some time now, I have been researching the costs and benefits of each. As with all of these posts, I must admit that I’m not the most qualified expert to speak on this subject. However, in my experience it’s very difficult to find a trustworthy expert or source in a politicized debate such as this. Even many people who should be trustworthy scientific experts have shown me evidence of their bias, and disagree very strongly among themselves. There is no single, standard “expert opinion” on some subjects, and this is one of them. I have tried to do my homework, which makes me better than most in these debates. I’ve done my best to filter out the half-truths and misconceptions, and hopefully streamline any research you yourself may do in the future.

Now, with all of that said, this post deals with far more uncertainties than previous posts. Reviewing the world’s energy options is a very ambitious task; debunking junk science like creationism is like shooting fish in a barrel by comparison. The answer to pseudoscience is an easy “no,” whereas there is no easy answer to the energy question at this time. With that disclaimer, I will get started.


Since fossil fuels are currently the main method of power generation in most countries, and the source of the carbon emissions that were the subject of the previous post, I’ll start with them. The fossil fuel section may be shorter than the others, simply because I dedicated an entire earlier post to global warming.

If you look at the levelized cost of energy production methods as projected for 2018 (1), you’ll notice that the cheapest deployable options involve fossil fuels of some sort. You may notice that onshore wind appears cheaper than many of the fossil fuel options, including conventional coal; as we will see when I get to renewables, this is a little misleading. Whatever else you may say about these energy sources, it’s hard to get around the fact that fossil fuels are an economic powerhouse in today’s world, and it’s doubtful that any other method will become more economically competitive in the near future.

I’ve spoken at length about climate change, but another often underappreciated cost of fossil fuels is air pollution. A 2013 MIT study estimates about 200,000 deaths from air pollution in the U.S. alone. The leading cause was transportation, with 53,000 deaths, but power generation was a close second with 52,000. One 2010 study from the WHO estimates that fine particles from air pollution cause about 223,000 deaths from lung cancer worldwide. (2) The idea that air pollution increases certain disease risks is nothing new, and has long been an obvious explanation for higher frequencies of lung cancer in urban areas.

Not only do coal plants release chemical carcinogens, they also typically release more radioactive material than nuclear power plants. Coal often contains uranium and thorium, which are released into the environment in the form of coal ash. In 1982 alone, coal plants in the U.S. released an estimated 801 tons of uranium and 1,971 tons of thorium into the environment. NCRP reports have projected that the population-effective radiation dose from coal plants is about 100 times greater than that from nuclear power plants. (3) As with nuclear waste, many of the radionuclides in coal ash remain radioactive for thousands of years. Additionally, the technology required to refine enough uranium from coal ash to construct a bomb is available to most countries.

Coal is currently the predominant fossil fuel in our energy production, which no doubt accounts for a large part of the deaths caused by air pollution. Natural gas, by one comparison, has about a tenth of coal’s projected death toll (4). This is still a significant number of deaths, but it could be a justification for switching from coal to natural gas.

I hope I’ve already established the role of fossil fuels in climate change in my previous post. Some have offered fossil-fuel alternatives that would mitigate CO2 release, such as natural gas. One figure puts natural gas at about 602 tCO2-eq/GWh, versus about 1,045 tCO2-eq/GWh for coal. (4) Regardless, the laws of chemistry demand that there will always be significant CO2 emissions from fossil fuels.
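The relative savings implied by those two intensity figures are easy to check. A quick sanity check in Python, using only the numbers quoted above:

```python
# Lifecycle CO2-equivalent intensities quoted in the text.
GAS = 602    # tCO2-eq/GWh, natural gas
COAL = 1045  # tCO2-eq/GWh, coal

# Fractional emissions reduction from switching coal to gas.
reduction = 1 - GAS / COAL
print(f"Switching coal -> gas cuts emissions by about {reduction:.0%}")  # ~42%
```

A roughly 42% cut per gigawatt-hour is substantial, but far from zero, which is the point of the paragraph above.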

While natural gas produces fewer harmful emissions, it is still a fossil fuel, and is therefore no solution for global warming. It has a number of liabilities that lead to high greenhouse gas emissions, including mining, transport, and methane leaks. Methane leaks are of particular concern, because methane is many times more potent than CO2 as a greenhouse gas. Furthermore, one of the air pollutants that makes coal more harmful, sulfur dioxide, actually counteracts the greenhouse effect. This means that CO2 released by natural gas has a larger net warming effect than an equal amount of CO2 produced by coal. (5) This casts doubt on any suggested climatological advantage of switching to a more natural-gas-heavy economy.

At the end of the day, it’s unlikely that fossil fuels will vanish any time soon. Coal is possibly the dirtiest energy around, but even the cleanest fossil fuels contribute to thousands of deaths. I’d like to make one final note: the chances that any of us will die because of air pollution are extremely small, and I don’t advise worrying about it on an individual level. However, even very small increases in risk can translate into very real deaths when multiplied across the entire world population.


That brings us to nuclear power, which has historically been the alternative. If you look at the levelized cost (1) cited below, you’ll see that it is generally more expensive than fossil fuels. This is partially due to the large initial investment required to build a new plant, which is also largely why we’re still using many of the older models rather than spending money on new infrastructure.  That said, it has been proven on a large scale as a viable way of supporting most of a first-world nation’s energy needs. It also produces negligible greenhouse gas emissions. So let’s take a look.

The process of nuclear fission is pretty well known: you take an atom with a large, unstable nucleus, such as Uranium-235 or Plutonium-239, and split it. Splitting a uranium or plutonium atom results in two smaller atoms, called fission products. These commonly include Cesium-137 and Iodine-131, which are highly radiotoxic and make up one component of nuclear waste. The other component results from a different process: neutron absorption. Instead of splitting, components of the nuclear fuel will sometimes absorb a neutron and become a slightly heavier element. This produces what is referred to as actinide waste, which includes Plutonium-239, also extremely dangerous. The big difference, however, is in half-lives. The half-life of Cesium-137 is about 30 years, whereas the half-life of Plutonium-239 is about 24,000 years. So the truly long-lived component of nuclear waste is generally the actinides, not the fission products. Most current nuclear reactors are light-water, thermal-neutron designs. They fission U-235, and not much else. This results in lots of power with no CO2 emissions or air pollution, but also a build-up of actinide waste. During a meltdown, radioactive materials (most of them fission products) can be released into the environment. As I will discuss later, even these old reactors may not be as terrible as they’re made out to be.

These reactors might be just the tip of the iceberg, though. Actinide waste like Pu-239 can be burned for additional energy in a fast-neutron reactor, leaving only the shorter-lived fission products. Some models produce more plutonium than they burn; these are called “breeders.” Even these can be beneficial, because they can burn through the majority of a spent fuel rod, but ultimately they produce a lot of long-lived actinide material. Not all fast-neutron designs are breeders, though; some are burners, made to consume as much actinide material as possible, leaving only fission products. The resulting waste is toxic for less than 300 years, rather than the typical tens or hundreds of thousands of years for nuclear waste. (6) Since the decay is exponential, most of that reduction takes place in the first hundred years, so you could see the majority of it decay within a human lifetime. Granted, 300 years is still a very long time, but this waste could be produced by consuming already-existing waste that’s even longer-lived. So at the end of a fast-neutron burner reactor’s life, we’d have less waste than we started with, and we would have produced lots of energy in the process. That’s a win-win, no matter how environmentally conscious you are. Most crucially, we have enough waste to power the world for many generations, so you could argue that fast-neutron reactors essentially solve the waste issue, at least to the extent that we wouldn’t be producing any more of it in the foreseeable future.
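To see why most of the reduction happens early, here is a minimal decay calculation using the roughly 30-year Cesium-137 half-life mentioned earlier:

```python
def fraction_remaining(years, half_life):
    """Fraction of a radionuclide left after `years`, given its half-life."""
    return 0.5 ** (years / half_life)

CS137_HALF_LIFE = 30.0  # years

# Radioactive decay front-loads the reduction in activity:
after_100 = fraction_remaining(100, CS137_HALF_LIFE)  # ~0.10 -> ~90% gone
after_300 = fraction_remaining(300, CS137_HALF_LIFE)  # ~0.001 -> ~99.9% gone
```

So about 90% of the Cs-137 activity is gone within the first century, with the remaining 300-year storage requirement covering a comparatively tiny residue.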

Unfortunately, there seems to be some disagreement over how economically feasible it is to dispose of our nuclear waste in this manner. One paper demoralizingly puts the levelized cost of such a plant almost exactly at the break-even point, meaning that you’d basically make back only what you spent. However, the authors do say that efficiency would increase if such plants were built on a much larger scale. (7) On the other hand, some have calculated that it is both technically and economically feasible to burn away 98% of our nuclear waste using pyroprocessing and integral fast reactors. (8) Economics aside, this process has been shown to safely remove plutonium and other actinides, and is capable of significantly reducing both the lifespan and the volume of nuclear waste. (9)

It is actually pretty difficult to get a clear estimate of just how cost-effective an actinide-burning fast reactor would be. This appears to be partially because many of the newer actinide burner designs have not yet had an opportunity to prove or disprove themselves commercially, despite being available for some time. Clearly, someone should give them a shot, but because of the high up-front cost and the inherent risk of new technology, no one wants to be first. That said, part of the high cost of nuclear is due to the fact that it is held to a much higher regulatory standard than fossil fuels. For instance, the fossil fuel industry is not required to store or contain its CO2 or air pollution, which would be the equivalent of what is expected of nuclear power; if it were, it would be far less competitive. Nuclear power also generally does not receive a subsidy rewarding low-CO2 power production, as many renewable sources do.

Some have suggested small modular reactors (SMRs) as a solution to these economic issues. These smaller reactors would lose out slightly on economies of scale, but this could be mitigated by mass production. They would also have a much smaller initial investment and a faster payoff. In theory, this would allow more new reactors to be built, since few customers can afford the billions of euros (or dollars), not to mention the construction time, that go into a full-sized nuclear plant. Because of this, SMRs are projected to be competitive in certain markets. (9) The first country to try one of these newer third- or fourth-generation plants may be the U.K., which has a huge stockpile of nuclear waste to dispose of. It is currently considering both the PRISM (an SMR) and CANDU designs for burning away its actinide waste. (10)

This is the real issue with nuclear power: the long-term question of waste. Even so, the oft-repeated claim that we are simply stuck with this waste for millennia is probably untrue. That’s not to say I know we’ll have efficient actinide burners up and running within a couple of decades, but I think we can safely say that it won’t take thousands of years.

All this waste raises the question: how harmful is the radiation released by nuclear power? As we have already seen, coal plants generally release more radioactive material into their surroundings than nuclear power plants, and air pollution from fossil fuels can be linked to many premature deaths, including cancers. How does nuclear stack up? Amazingly well, is the answer. In fact, the NASA Goddard Institute recently released a paper (4) calculating that 1.8 million lives were saved by nuclear power between 1971 and 2009, simply by reducing the use of fossil fuels. This analysis took infamous meltdowns such as Chernobyl, Three Mile Island, and Fukushima into account, and calculated that only the first caused any actual fatalities: about 47 deaths total.

They were able to do this in part because they rejected the linear no-threshold (LNT) model of radiotoxicity. This model holds that there is a linear relationship between radiation dose and cancer risk; therefore, even extremely low levels of radiation have harmful (although proportionally smaller) effects on health. However, there is little proof that radiation doses below about 100 mSv cause any decrease in life expectancy or increase in cancer risk. (11) This is significant, considering that the vast majority of Fukushima was exposed to less than 5 mSv of radiation. (12)

The NASA paper points out that even at Chernobyl, very few people were exposed to anything above this threshold. Recent molecular data from a 2012 paper also supports a non-linear relationship between radiation dose and DNA damage. The paper concludes that “…extrapolating risk linearly from high dose as done with the LNT could lead to overestimation of cancer risk at low doses.” (13)

Even if we accept the linear no-threshold model, we get something like the WHO estimate of 4,000 deaths from Chernobyl (14) and perhaps some hundreds to come from Fukushima. Even if we add these 4,000-5,000 deaths to the total tabulated in the NASA paper, they barely cut into the 1.8 million who would have died if fossil fuels had been used in place of nuclear. Also, if we accept the linear no-threshold model, then fatalities from the radiation in coal ash must be accounted for as well, so estimates of deaths due to coal must also increase.
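For a sense of how LNT arithmetic turns small doses into death tolls, here is a toy calculation. The 5% per sievert risk coefficient is my own illustrative assumption, roughly in the spirit of LNT-style lifetime risk estimates, not a figure from any of the cited papers:

```python
# Hypothetical LNT-style risk coefficient (lifetime fatal-cancer risk per Sv).
RISK_PER_SV = 0.05

population = 1_000_000
dose_sv = 0.005  # 5 mSv per person, roughly the Fukushima figure above

# Under LNT, risk scales linearly, so collective dose maps directly to deaths.
collective_dose = population * dose_sv          # person-sieverts
expected_deaths = collective_dose * RISK_PER_SV  # ~250 statistical deaths
```

The point is that LNT multiplies a tiny individual risk across a huge population; whether those statistical deaths are real is exactly what the threshold debate is about.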

The general public greatly overestimates the danger of radiation. Fukushima in particular has been blown out of proportion, with some going so far as to claim that it poses a threat to the American west coast. The news has recently been hyping leaks of “tons of radioactive water” into the ocean. This tells us nothing about radioactivity, only the amount of water! Others do slightly better, and mention “trillions” or “quadrillions” of becquerels. According to one source, “Enenews” (15), 80 quadrillion becquerels of Cesium-137 have been released. “The radioactive plume is already here!” they exclaim in the headline.

Well, how much radiation is that, exactly? Even if all of it goes into the ocean, it doesn’t sound too impressive. It is far less than the Cesium-137 released into the Pacific during nuclear bomb tests, and even accounting for those tests, naturally occurring Uranium-238 and Potassium-40 are the dominant sources of ocean radiation. According to Idaho State University (16), 7,400 EBq of radioactive Potassium-40 already exist in the Pacific Ocean naturally (1 EBq = 10^18 becquerels). Like Cesium-137, Potassium-40 is a gamma-ray emitter that is excreted from the body rather than constantly accumulating (17), so becquerels of the two isotopes are roughly comparable in terms of radiotoxicity. If you do the math, the radiotoxic impact of that Cesium-137 is roughly 0.001% that of the natural radiation in the Pacific Ocean from Potassium-40 alone. Even if you refine the calculation and adjust for subtle differences between the decay processes of the two isotopes, you won’t change the fact that this is incredibly small compared to natural radiation levels. And that’s not even accounting for other natural isotopes in the ocean, like Uranium-238.
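The arithmetic behind that 0.001% figure is simple enough to verify, using only the two numbers quoted above:

```python
# Figures quoted in the text.
cs137_release = 80e15      # Bq: the "80 quadrillion" Enenews figure
k40_in_pacific = 7400e18   # Bq: 7,400 EBq of natural K-40 (Idaho State figure)

# Ratio of the Fukushima Cs-137 release to natural K-40 in the Pacific.
ratio = cs137_release / k40_in_pacific
print(f"{ratio:.4%}")  # ~0.001% of the natural K-40 activity
```

Even treating the Enenews figure as accurate, the release is a rounding error next to the ocean’s natural radioactivity.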

The Fukushima Daiichi plant was not representative of all nuclear technology by any means. The plant was commissioned 43 years ago, in 1971. Despite this, it was much safer than Chernobyl, a generation I design encapsulated in flammable graphite and lacking a containment unit. Just as generation II was safer than generation I, generation III will be safer still. In fact, the Onagawa power plant, built 13 years after Fukushima Daiichi and hit just as hard by the tsunami, weathered the disaster admirably.

Generation IV designs for the more distant future include the Liquid Fluoride Thorium Reactor, or LFTR. A good primer exists here (19), although I will summarize. This model is powered by thorium, which is three times as abundant as uranium and requires no enrichment. It utilizes the thorium fuel cycle, meaning that it deliberately allows Th-232 to absorb neutrons and transmute into fissile U-233, which is then used as fuel. In other words, almost all neutron absorption produces fuel instead of actinide waste. This also means no plutonium to be used for weapons. The only fissile material available to terrorists would not be ideal for bomb-making, and would only exist in small amounts at any given time.

After 300 years, the waste of an LFTR is about 10,000 times less toxic than that of a typical LWR after the same amount of time. Also, since 83% of this waste would decay to a stable state in only 10 years, only the remaining 17% would have to be stored away for 300 years, effectively reducing the volume as well as the lifespan. (20) This relatively fast decay means that the demands placed on storage repositories such as Yucca Mountain are much smaller. Like fast breeder reactors, an LFTR could also burn the actinide waste of older models, thus reducing the lifespan of already-existing waste. The waste produced would therefore be very small compared to the waste consumed.

The LFTR also promises to be far more resistant to meltdown events than current models. For example, because the fuel exists in a liquid salt solution, a meltdown could be halted more easily. In many proposed designs, a fan beneath the reactor keeps a salt plug frozen. If the plant overheats, or the fan stops due to power loss, the plug melts and all of the fuel drains into a containment unit below. And because the fuel salt is solid at room temperature, it would solidify in the event of a leak.

Even the efficiency with which it converts thermal energy into electricity is theoretically higher than the usual 33% for a nuclear power plant: it is projected to be as high as 45-50%, due to higher operating temperatures.
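One way to see why higher temperatures help is the Carnot limit, 1 − Tc/Th, which caps the efficiency of any heat engine. The temperatures below are my own rough assumptions (around 300 °C for a water-cooled reactor, around 700 °C for an LFTR), purely for illustration:

```python
def carnot_limit(t_hot_k, t_cold_k):
    """Maximum possible heat-engine efficiency between two temperatures (kelvin)."""
    return 1 - t_cold_k / t_hot_k

# Assumed operating temperatures, with ~27 C (300 K) cooling on the cold side:
lwr_ceiling = carnot_limit(573, 300)   # ~300 C hot side -> ~0.48 ceiling
lftr_ceiling = carnot_limit(973, 300)  # ~700 C hot side -> ~0.69 ceiling
```

Real plants fall well short of these ceilings (hence 33% and a projected 45-50%), but a hotter reactor starts with much more thermodynamic headroom.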

While this design is proven in principle, there are still some hurdles to commercialization, such as the search for materials that are highly resistant to corrosion and heat. That said, a proof-of-concept model was built decades ago, and there are a number of groups pursuing LFTR research, mostly outside of the United States. China may well beat us to it. (21)


So nuclear is not as bad as you might think, and the future is possibly even brighter. However, even if we could build a fully functioning LFTR tomorrow, why bother with all that if renewables can meet all of our needs? The answer depends on two conditions: first, that renewables are superior to other means of production in terms of ecology and human life; and second, that producing most of our energy with renewables is both possible and economically feasible. As I write this sentence, I am not yet sure what my conclusion will be.

My first question about renewable energy is: how much potential energy is out there? For some renewable sources, there is a clear limit, and only a portion of that energy is actually financially worth tapping. For instance, with hydroelectric, there is only a limited number of water flows we can conceivably dam, and the number actually worth damming is lower still. The levelized cost table (1) shows that hydroelectric is projected to remain one of the most competitive renewable sources of energy. However, the U.S. Department of Energy has projected that the total hydroelectric potential we can feasibly tap in the U.S. is about 170,000 MWa per year (22), roughly 5% of our energy consumption. Yet hydroelectric power currently accounts for over half of the U.S. renewable energy supply, and in other countries, like Sweden and Brazil, it supplies over half of all electricity. It enjoys a lot of support from most environmentalists, which is interesting if you consider that it disrupts ecosystems and kills a lot of people. The Banqiao dam collapse alone, in China during the 1970s, killed about 170,000 people. Even lesser accidents, like the Vajont dam disaster in Italy, can kill as many as 2,000. Add it all up, and hydropower is one of the biggest killers after combustibles like coal and biomass. Despite this, it plays an important, albeit limited, role in cheap CO2 reduction.
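Converting that DOE figure to a share of consumption is straightforward. The 29,000 TWh figure for U.S. primary energy use is my own ballpark assumption, chosen only to show how the “roughly 5%” estimate falls out of the arithmetic:

```python
# 1 MWa (megawatt-year averaged over a year) = 8,766 MWh of energy.
MWA_TO_TWH = 8766 / 1e6  # TWh per MWa

hydro_potential_mwa = 170_000             # DOE figure cited above
hydro_twh = hydro_potential_mwa * MWA_TO_TWH  # ~1,490 TWh per year

us_primary_energy_twh = 29_000            # assumed ballpark for U.S. primary energy
share = hydro_twh / us_primary_energy_twh
print(f"{share:.1%}")  # ~5.1%
```

Under that assumption the feasible hydro potential lands at about 5% of total energy use, consistent with the figure in the text.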

Wind has far more potential, since it extends throughout the entire atmosphere.  The potential wind energy of the Earth is obviously far more than what is required. However, from a practical standpoint, we can’t cover the entire earth in turbines, so we’ll never get all of it. How much power can we get if we take into account the constraints of space and infrastructure?

As stated earlier, if you consult the 2018 projection of levelized cost for various energy production methods (1), some forms of renewable energy appear extremely competitive, and others appear very inefficient. In 2011 $/MWh, onshore wind scores 86.6 and hydroelectric 90.3, both lower than conventional coal (100.1). Solar PV, on the other hand, scores 144.3, much worse than coal. However, the link rightly cautions us against comparing so-called “dispatchable” and “non-dispatchable” energy production methods. Dispatchable methods like coal and nuclear produce a fairly reliable, constant stream of energy. Non-dispatchable, or intermittent, methods produce erratic surges of energy that can be unpredictable. Wind, solar, and hydroelectric are all non-dispatchable technologies.
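For readers unfamiliar with the metric: levelized cost is, roughly, lifetime discounted cost divided by lifetime discounted output. Here is a minimal sketch with entirely hypothetical plant numbers, not figures from the EIA table:

```python
def lcoe(capital, annual_cost, annual_mwh, years, rate):
    """Levelized cost of energy: discounted lifetime costs / discounted output.

    `capital` is paid up front; operating costs and output recur annually.
    All inputs here are hypothetical, for illustration only.
    """
    discount = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    total_cost = capital + annual_cost * discount
    total_mwh = annual_mwh * discount
    return total_cost / total_mwh  # $/MWh

# Toy plant: $2B up front, $50M/yr running costs, 7 TWh/yr, 40 years, 7% discount.
print(round(lcoe(2e9, 50e6, 7e6, 40, 0.07), 2))
```

Because capital is paid before any power is sold, capital-heavy technologies (nuclear, wind, solar) are penalized by the discount rate in a way that fuel-heavy plants are not.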

To quote a 2010 MIT paper (23):
“Levelized cost comparisons overvalue intermittent generating technologies compared to dispatchable base load generating technologies. They also overvalue wind generating technologies compared to solar generating technologies.”

This is because many non-dispatchable technologies generate lots of energy at periods of low demand, and fail to produce at periods of high demand. From a business perspective, this means that wind and solar producers must often sell energy when the price is low, and may have little energy to sell when the price is high, resulting in little or no profit. Also, while solar is intermittent, its production often coincides with demand better than wind’s does (24), making solar look worse by comparison than it actually is. So while onshore wind appears much more efficient than coal, and solar much less, both of these impressions are probably exaggerated.

At present, Germany is shutting down all of its nuclear power plants, and attempting to reduce CO2 emissions purely by expanding wind and solar energy. The result is not so pretty. According to the German news source Der Spiegel (25):

“Solar panels and wind turbines at times generate huge amounts of electricity, and sometimes none at all. Depending on the weather and the time of day, the country can face absurd states of energy surplus or deficit.

If there is too much power coming from the grid, wind turbines have to be shut down. Nevertheless, consumers are still paying for the “phantom electricity” the turbines are theoretically generating. Occasionally, Germany has to pay fees to dump already subsidized green energy, creating what experts refer to as “negative electricity prices.”

On the other hand, when the wind suddenly stops blowing, and in particular during the cold season, supply becomes scarce. That’s when heavy oil and coal power plants have to be fired up to close the gap, which is why Germany’s energy producers in 2012 actually released more climate-damaging carbon dioxide into the atmosphere than in 2011.”

It seems that Germany still needs a strong dispatchable energy source to “fill in the gaps” left by solar and wind. Since it got rid of nuclear, that means fossil fuels. So the result is even higher CO2 emissions, despite all of the renewables it has running.

So the real question for the future is: can this surplus energy be effectively stored and saved for times when energy is scarce or in high demand? There are certainly storage technologies out there. For instance, the Archimede solar plant in Italy uses molten salts to store concentrated heat energy, with a storage capacity of 8 hours. (26) This still raises some questions. Is 8 hours enough storage to meet our needs without any gaps? And how hard would it be to implement this sort of technology on a large scale? The calculations that attempt to answer these questions can get ridiculously complicated.

It seems that battery technology still has a long way to go with wind power. Currently, it is actually more economically sound to “curtail,” or shut off, wind turbines when they generate more energy than required than it is to store the excess in batteries. One 2013 paper concludes that the cycle-life performance of existing energy storage technology must be increased by a factor of 2-20 to overcome this issue. (24) One constraint on the manufacture of these energy storage systems is their own CO2 emissions, which must be kept low enough to make wind energy worthwhile. Interestingly, battery storage for solar PV fared much better than for wind, which is yet another indication that the projected levelized cost table (1) gives wind an unfair boost over solar.

There are sources out there claiming that wind can be economically competitive and provide more than enough energy to meet our needs in the near future. However, many of these don’t account for everything. The European Environment Agency states up-front that it is not accounting for the cost of “major changes to the grid system” that would have to take place if wind were used to generate more than about 25% of total energy. (26) Clearly, the larger the share of energy wind provides, the more significant the required investment in infrastructure. Neither does it demonstrate that all of this energy could be stored for periods of high demand. However, as the agency points out, the potential energy is immense. Assuming you could economically store all of it for when it was needed, wind could meet the EU’s projected energy requirements several times over and still be economically competitive.

A lot of renewable energy debates boil down to potential “grid penetration” levels, or what percentage of the grid can be powered by renewable sources. One often-touted figure claims that renewables can power as much as 80% of the world by 2050, but this misses some key details. For one, this was the most optimistic of many predictions made by the IPCC. It also includes biomass, which is probably the least preferable alternative to fossil fuels. Most crucially, it assumes we will be able to decrease our energy demands, when we can be fairly sure demand will increase. So, unsurprisingly, other predictions gave a figure closer to 27%. This is a fair-sized chunk of the power grid, to be sure, but limits clearly exist where it is often claimed they don’t. (27)

These limits may shrink with time. Improvements to renewable energy efficiency have been ongoing for some time, with many innovations potentially on the way. One promising development in solar involves using various materials with different light absorption properties to capture a much wider spectrum of sunlight. Currently, we are limited by the inability of any one material to capture the whole spectrum. Most photovoltaic cells use silicon, whose 1.1 eV band gap limits it to absorbing around 25% of the solar spectrum. In theory, a multi-junction solar cell using both aluminum gallium arsenide and crystalline silicon could absorb as much as 50% of the solar spectrum. (28)
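That 1.1 eV figure translates directly into a wavelength cutoff, via the standard approximation E(eV) ≈ 1240 / λ(nm):

```python
# A photon must carry at least the band-gap energy to excite an electron
# across the gap; lower-energy (longer-wavelength) photons pass through.
SI_BAND_GAP_EV = 1.1

cutoff_nm = 1240 / SI_BAND_GAP_EV
print(round(cutoff_nm))  # 1127: silicon is blind to wavelengths beyond ~1127 nm
```

Pairing silicon with a wider-gap material like aluminum gallium arsenide lets the cell harvest the high-energy part of the spectrum more efficiently while silicon handles the near-infrared, which is the idea behind the multi-junction designs mentioned above.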

While efficiency is expected to increase, other factors may hinder the growth of renewables. One of these is an inadequate supply of rare earth metals, which are used in wind turbines and solar panels. With the current green energy boom, demand is expected to outstrip supply. (29) Additionally, renewables demand a lot of raw materials like iron, aluminum, and copper; in many cases, over ten times the amount required by other energy systems. With rising demand will come rising costs, which will encourage the partial use of other, less metal-intensive power sources. One estimate (30) says that boosting world power generation to 40% renewables by the middle of the century would require about 200% of global annual copper production (as of 2011) and about 150% of annual global aluminum production. Copper is most problematic, due to declining ore quality. This 40% mark is probably feasible, and would benefit the environment enormously. However, the paper makes the point that the demands are “manageable, but not negligible compared to the current production rates for these materials.”

Future research into renewable energy is clearly a worthwhile pursuit, and whatever limits solar and wind may have, they can clearly be expanded beyond where they are now.


Fossil fuels will always contribute to climate change, and their emissions have a measurable impact on life expectancy. A switch to natural gas might be good for human health, but it does little for the Earth. However, economics makes the world go round, and while we may significantly reduce CO2 emissions, we won’t stop producing them in the near future.

In the short term, nuclear is far safer than is generally feared. In the long term, waste build-up could eventually be a problem, but technologies capable of addressing this issue already exist. For the time being, expansion of nuclear power appears to be a crucial element of CO2 reduction, simply because some low-CO2 dispatchable baseload is necessary to supplement non-dispatchable resources such as wind. Additionally, even if renewable energy becomes far more efficient, some of the more advanced models like PRISM may be worth building simply to eliminate existing nuclear waste. If we fail to fund nuclear research, even more promising designs like the LFTR might also elude us.

Renewables are also promising, and can generate lots of cheap energy at certain times; any attempt to reduce CO2 emissions must take advantage of this. In the future, it may become economical to improve the batteries and grids that renewables feed their energy into, and renewables may eventually take over most of our energy production. However, most plans to accomplish this involve very distant deadlines like 2050 (31). Even ignoring the fact that distant deadlines are always tentative, history shows that transitioning away from fossil fuels via nuclear can be much faster: France accomplished this transition in about 15 years. (32) So even if solar, wind, and other renewable sources eventually render nuclear obsolete by 2050, we can slash CO2 emissions even earlier by adding some nuclear into the mix. For now, that leaves us with a dual approach to cutting CO2, using both renewable and nuclear energy. For the long term, great uncertainty exists: too much to pin all of our hopes on just one type of technology. It therefore makes sense to research more advanced future energy options for all alternatives to fossil fuels.

1)      http://www.eia.gov/forecasts/aeo/electricity_generation.cfm

2)      http://www.iarc.fr/en/publications/books/sp161/index.php

3)      http://web.ornl.gov/info/ornlreview/rev26-34/text/colmain.html

4)      http://pubs.acs.org/doi/abs/10.1021/es3051197?source=cen

5)      http://www.stanford.edu/group/efmh/jacobson/Articles/I/NatGasVsWWS&coal.pdf

6)      http://large.stanford.edu/courses/2013/ph241/waisberg1/docs/archambeau.pdf

7)      http://www.hindawi.com/journals/stni/2013/412349/

8)      http://www.thesciencecouncil.com/pdfs/PyroprocessingBusinessCase.pdf

9)      Shropshire D. Economic viability of small to medium-sized reactors deployed in future European energy markets. Prog Nucl Energ, 2011

10)   http://www.world-nuclear-news.org/WR-Credible-options-for-UK-plutonium-disposal-2101144.html

11)   http://www.ncbi.nlm.nih.gov/pubmed/22134084

12)   http://www.world-nuclear.org/info/Safety-and-Security/Safety-of-Plants/Appendices/Fukushima–Radiation-Exposure/

13)   http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3258602/

14)   http://www.who.int/mediacentre/news/releases/2005/pr38/en/

15)   http://enenews.com/marine-chemist-latest-figures-i-have-say-fukushima-released-80-quadrillion-bq-of-cesium-137-latest-chernobyl-estimate-is-70-quadrillion-the-radioactive-plume-itself-has-actually-arrived-it

16)   http://www.physics.isu.edu/radinf/natural.htm

17)   https://www.whoi.edu/page.do?pid=83397&tid=3622&cid=94989

18)   http://www.hindawi.com/journals/stni/2013/412349/

19)   http://www.aps.org/units/fps/newsletters/201101/hargraves.cfm

20)   http://www.ntech.mw.tum.de/fileadmin/w00bil/www/documents/pdf/lectures/Nuk5Adds/msr.pdf

21)   http://www.telegraph.co.uk/finance/comment/ambroseevans_pritchard/9784044/China-blazes-trail-for-clean-nuclear-power-from-thorium.html

22)   http://hydropower.inl.gov/resourceassessment/pdfs/main_report_appendix_a_final.pdf

23)   http://dspace.mit.edu/handle/1721.1/59468

24)   http://www.energy.eu/publications/a07.pdf

25)   http://www.spiegel.de/international/germany/high-costs-and-errors-of-german-transition-to-renewable-energy-a-920288.html

26)   http://pubs.rsc.org/en/content/articlepdf/2013/ee/c3ee41973h

27)   http://jmkorhonen.net/2013/10/03/graphic-of-the-week-the-great-80-of-worlds-energy-could-be-generated-from-renewables-fallacy/

28)   http://solarenergy.net/News/2010040204-researchers-overcome-bandgap-voltage-limitations-for-solar-photovoltaic/

29)   http://web.mit.edu/12.000/www/m2016/finalwebsite/problems/ree.html

30)   http://arstechnica.com/science/2014/10/making-lots-of-renewable-energy-equipment-doesnt-boost-pollution/

31)   http://www.nrel.gov/analysis/re_futures/

32)   http://www.pbs.org/wgbh/pages/frontline/shows/reaction/readings/french.html

A Look at Global Warming

I had fun writing about the Bill Nye and Ken Ham debate, and I realized just how much I’ve been wanting to start a sort of series of notebook posts. Since graduation, I’ve been trying to stay sharp by compiling personal research, and I thought I would share some.

I would like to tackle a number of subjects, particularly involving pseudoscience or debates about science.  These would be informative, both for me, and for any readers. While I can’t give a specific time frame, I now intend to make at least four other posts, addressing climate change, energy production methods, GMO’s, and vaccines. Additionally, I might make a larger more comprehensive post on young earth creationism, since that particular brand of pseudoscience seems to be my mortal enemy.

On its face, global warming is pretty simple, and most people can recite the basic mechanism easily enough. CO2 is transparent to visible sunlight, so it lets the sun’s energy in. Once that light strikes the Earth, the warmed surface re-emits the energy as long-wave infrared light, and it is this infrared light that CO2 absorbs, hindering the escape of heat. Some will point out that all infrared light in the CO2 absorbance spectrum is already being absorbed by CO2, claiming that this means the greenhouse effect is “saturated,” so that adding more CO2 will have no further effect. There is some truth in this. One common infographic shared by both sides of the debate (1) appears to show that 100% of the infrared light in the CO2 absorbance range is already being absorbed. So if we add more, what difference will it make?

This argument makes a number of assumptions, however. One is that all of the infrared light currently absorbed by CO2 simply vanishes. If this is what you visualize, then visualize again. A CO2 molecule absorbs infrared light, then re-emits it. (2) So instead of picturing a solid wall that stops infrared light from escaping, picture a bunch of pegs slowing the progress of little metal pinballs (representing infrared light). The pegs make the balls bounce around between them, but eventually they allow them to escape. If you add more pegs, it takes even longer for the balls to bounce their way through. Moreover, the pegs cluster closer to the ground, and are less dense at high altitudes where the air is thinner. In effect, this means that as you add more CO2, the height at which infrared can freely escape the atmosphere climbs higher and higher, allowing more heat to build up. Skeptical Science actually explains it better than I just did. (3) If the warming region of the atmosphere climbs higher, that affects convection: with less cold air in the layer of the atmosphere just above the surface, heat will not be drawn up from ground level as quickly.
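The peg-and-pinball picture can even be sketched in a few lines of code. This is a toy random walk, not a real radiative-transfer model; the number of “peg layers” and all other values are purely illustrative:

```python
import random

def escape_steps(peg_layers, trials=2000, seed=0):
    """Average number of absorb/re-emit bounces before an infrared
    'pinball' random-walks out the top of a stack of peg layers."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while pos < peg_layers:
            pos += 1 if rng.random() < 0.5 else -1
            pos = max(pos, 0)  # the ground re-radiates heat back upward
            steps += 1
        total += steps
    return total / trials

# Doubling the pegs (CO2) roughly quadruples the bouncing time, so heat
# lingers much longer even though it always escapes eventually.
print(escape_steps(5), escape_steps(10))
```

The point of the toy model is the same as the analogy: more absorbers never trap heat permanently, they just delay its escape, and that delay is what warms the lower atmosphere.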

For a more credible, albeit more difficult, source describing these dynamics, see this paper published in 1931. (4) Yes, it’s old news.

Another, more simplistic argument against global warming is that CO2 exists in very small quantities. It’s true that CO2 currently makes up only about 395 parts per million (ppm) of the atmosphere, up from the pre-industrial 280 ppm. However, the major issue with CO2 is that it absorbs light in the infrared spectrum, and small concentrations of a substance can drastically influence light absorbance. This is the basis for spectrophotometry, which is used in laboratories all across the world to measure compounds at both high and low concentrations by bombarding a sample with light at wavelengths the compound of interest absorbs. In fact, if we look at one such device sold by Wilks Enterprise Inc., (5) we can see that it is capable of detecting CO2 levels as low as 3 ppm, purely based on CO2’s ability to absorb light at the 4.25 µm wavelength.
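The underlying math here is the Beer-Lambert law: absorbance grows linearly with concentration, so the transmitted light falls off exponentially. The absorptivity and path length below are made-up illustrative numbers, not measured properties of any real substance:

```python
# Beer-Lambert law: transmittance T = 10 ** (-eps * concentration * path).
# eps (absorptivity per ppm per cm) and path_cm are hypothetical values
# chosen purely for illustration.
def transmittance(ppm, eps=1e-4, path_cm=10.0):
    return 10 ** (-eps * ppm * path_cm)

for ppm in (0, 280, 560):
    print(ppm, round(transmittance(ppm), 3))
```

Note that because absorbance adds linearly, doubling the concentration squares the transmitted fraction; a few hundred ppm of an absorber is anything but negligible.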

However, there is a much simpler demonstration of this phenomenon. This video (6) shows the difference between water containing 0 ppm, 280 ppm, and 560 ppm of ink. Just as CO2 absorbs infrared light, ink absorbs visible light (that’s why it’s black). You can clearly see that even a few hundred ppm drastically impacts how much visible light the water absorbs.

Now that I’ve written a bit about the mechanism, I think I should go over the history of global warming, about which many people have major misconceptions. Many will claim that global warming is a recent speculation that came only after the supposed “global cooling” scare of the 1970s. I bought into this myself at a younger age, because of the massive misinformation I was exposed to. Later, I found that this was not the case at all. It has long been acknowledged that aerosols can scatter sunlight and cause “global dimming,” and some have suggested that this might overpower the greenhouse effect and cause net cooling. However, this is not the same as denying the greenhouse effect altogether; it is only a debate over which of two man-made pollutants has more influence. Even in the 1970s, more scientists favored CO2 as the stronger influence, and predicted warming as the end result. (7)

I was further shocked to learn that the origins of global warming science go back much further than this, starting with Svante Arrhenius, who predicted warming due to CO2 in his 1896 paper (8), “On the Influence of Carbonic Acid in the Air Upon the Temperature of the Ground.”

This paper was dismissed early on for a number of reasons. One was that water vapor clearly absorbed much more infrared light than CO2 did. At the time, it was believed that CO2 and water vapor both absorbed light at around the same frequencies, which would mean that most of the heat energy that could be trapped by CO2 was already being trapped by water vapor. Also, it was understood even then that the greenhouse effect due to CO2 is logarithmic, not linear. In other words, doubling CO2 will not double the warming effect. Proponents of global warming acknowledge that doubling the CO2 content will only increase the temperature by about 2.3 degrees Celsius, give or take. (9) Because of this logarithmic rather than linear relationship between CO2 and temperature, some past scientists like Angstrom dismissed the effect as inconsequential.

However, this is not a claim that can be made so easily. The term “butterfly effect” is very apt when applied to climate and weather, where even a small change can have drastic effects. For instance, one degree of average warming can mean a 10% difference in tropical rainfall. (10) This is just one of many examples of how volatile and unpredictable climate can be.

By 1938, the engineer Guy Stewart Callendar had begun the movement to reevaluate the role of CO2. (11) In particular, he pointed to newer, more accurate data showing that CO2 and water vapor absorb different wavelengths of infrared light. Some fraction of the Earth’s infrared light cannot be absorbed by water vapor but can be absorbed by CO2, so even if the atmosphere were totally saturated with water vapor, an increase in CO2 would still have a warming effect. He also demonstrated a warming trend of about 0.005 °C per year over the preceding 50 years.

By the 1950s, global warming was gaining support, and becoming somewhat popularized to the public, as shown by this old propaganda video. (12) This may surprise some people, who have been led to believe, as I once did, that global warming was invented fairly recently, after the whole “global cooling” scare of the 1970s didn’t pan out. In fact, the global cooling scare was largely manufactured by the media, not by science.

Others have pointed to a number of other cycles that also affect climate. To be sure, there are factors other than CO2 at work, and climate scientists are no doubt aware of them. Perhaps the largest influence on climate is the sun. There is an 11-year cycle of solar irradiance: every 11 years, the sun enters a new cycle, which is generally of a different intensity than the previous one. The current cycle seems to be weaker than the last, so if anything, the sun should be making the climate cooler right now. Additionally, the type of warming associated with increased solar activity is supposed to extend to the stratosphere, which the current warming does not. (13)

In addition, there is the El Niño/La Niña cycle, associated with hot and cold air over the Pacific respectively. These alternate every two to seven years. So if you wanted to show that the Earth is not warming, you could easily do what climate change skeptic Christopher Monckton (14) did when he claimed the Earth was cooling: pick a strong El Niño year like 1998 for your start date, and then end at the weak end of a solar cycle. Climatologists take these cycles into account when calculating the climate trend, but most people will not.

When someone denies that the Earth is warming, they generally leave out the numerous factors that should be cooling the Earth, such as the cycles previously mentioned. In fact, if one removes the noise caused by the El Niño/La Niña oscillation, solar activity, and aerosols, one finds an even more pronounced warming trend. (15) This may seem like chicanery at first glance; after all, how do we know that these adjustments for the solar and El Niño cycles reflect reality? However, treating all years equally regardless of these cycles is also doomed to failure, so attempts to account for them must be made. Also, accounting for the solar cycle and El Niño/La Niña is a double-edged sword: it forces you to dismiss periods of abnormal warmth as well as periods of abnormal cold due to these factors, so it is essentially fair.

One might take issue with the way the impact of these cycles was calculated, but thankfully, there is another way to factor them into the climate calculation: focus on multi-decade trends. If you look at long-term rather than short-term trends, the “noise” from natural cycles like El Niño and solar activity cancels out. If you look at the long-term graphs, measuring from way back in the 19th century, the trend is very clear. (16) Notice that the most recent data points do not, by themselves, appear to be part of a linear trend; if you tried to plot only the last five points, you’d get a hopeless zigzag. Only by zooming out and looking at the big picture does the trend emerge. This is not preferential data analysis; in almost all statistical analysis, larger sample sizes are less prone to random noise and other forms of natural interference.
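The effect of window length on a noisy trend is easy to demonstrate with synthetic data. The numbers below (a steady 0.008 °C/yr trend plus random “cycle” noise) are purely illustrative, not real temperature data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2015)
# Steady warming plus noise standing in for ENSO, solar cycles, etc.
temps = 0.008 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

def trend(y0, y1):
    """Least-squares warming rate (deg C per year) over [y0, y1]."""
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], temps[mask], 1)[0]

print(trend(1880, 2014))  # full record: recovers roughly 0.008
print(trend(1998, 2012))  # 15-year slice: mostly noise
```

Re-running with different seeds, the short-window slope swings wildly, sometimes even going negative, while the long-window slope barely moves; that is the whole case for judging climate on multi-decade trends.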

A number of misconceptions also arise from the debate over whether the warming is statistically significant, and hence whether it is happening at all. To the average person, “no statistically significant warming” means no warming, but this is bad science. For one thing, it’s often claimed that warming has not been statistically significant since 1995, but as we’ve established, short intervals don’t hold much weight in statistical analysis. Go back to the graphs showing long-term trends (16), and observe how easy it would be to cherry-pick a time interval that seems to disprove the obvious long-term warming trend.

Three graphics on SkepticalScience.com demonstrate this very well. (17) First, they show how easy it is to find a high peak and simply draw a line to a low point in order to manufacture a “cooling trend.” Second, they show that the long-term trend is one of warming. Third, and most informatively, they show how you can isolate a series of fairly flat lines from that warming trend, if you slice it up just right.

Another point involves the meaning of “significant”.

First of all, you have to be careful with statistical analysis. In one of my biology classes, a classmate did a statistical analysis of the effects of sulfuric acid on plant growth. They arrived at a p-value just above the threshold for statistical significance. From this, they came to the ridiculous conclusion that sulfuric acid was not harmful to plants. In reality, the concentration of the acid was probably just below what would be necessary to produce totally unambiguous harmful effects. The flaw here should be obvious: real-world correlations can sometimes fall just short of what is generally considered “significant.” If they fall way short of the mark, that’s good evidence against a correlation, but if they fall just short, say p = 0.06 instead of 0.05, that’s what any honest scientist would call an inconclusive result. And when the results are this inconclusive, the context and variables involved go a long way toward shaping the interpretation.

More importantly, the time interval is a factor here. It is much more difficult to prove the statistical significance of a trend over a short time interval than over a long one. If something strange happens once, that could be nothing; if it happens repeatedly for decades, you can demonstrate an anomaly much more easily. As we have seen, a 15-year interval is not that long in terms of climate cycles, so solar activity or a La Niña event could easily explain away any trend within this range.

So don’t be surprised when you hear people claim there has been no statistically significant warming for X number of years. They are narrowing the time range because they know that subtle trends are less statistically significant over short periods than over long ones. Even if the trend is the same, shortening the time interval decreases its significance, so cutting the graph into 15-year periods is cheating, in a sense.
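You can see the interval effect directly by computing the t-statistic of a fitted slope: keep the underlying trend and the noise level identical and change only how many years of data you have. All of the numbers here are illustrative:

```python
import numpy as np

def slope_t(n_years, slope=0.01, noise_sd=0.1, seed=1):
    """t-statistic (slope / standard error) of a linear fit to
    n_years of data with a fixed trend and fixed noise level."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_years, dtype=float)
    y = slope * x + rng.normal(0.0, noise_sd, n_years)
    b, a = np.polyfit(x, y, 1)
    resid = y - (b * x + a)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return b / se

print(slope_t(15))  # short record: weak evidence
print(slope_t(60))  # same trend and noise, long record: strong evidence
```

The underlying trend is the same in both cases; only the strength of the evidence changes, which is why “no significant warming since year X” claims lean so heavily on short windows.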

To be fair, there will always be some uncertainty as to how much of an effect we have on the climate. It is such an insanely complex system that even weather prediction is difficult. Most experts agree that the effect of man-made CO2 is nonzero, but we may not know exactly how large it is. However, as I hope I have shown, the data is nowhere near as ambiguous as some people would like to claim, and the basic physics behind global warming is pretty hard to deny.

1)    http://upload.wikimedia.org/wikipedia/commons/thumb/7/7c/Atmospheric_Transmission.png/595px-Atmospheric_Transmission.png

2)      https://spark.ucar.edu/carbon-dioxide-absorbs-and-re-emits-infrared-radiation

3)      http://www.skepticalscience.com/saturated-co2-effect.htm

4)      http://journals.aps.org/pr/abstract/10.1103/PhysRev.38.1876

5)      http://wilksir.com/infraran-specific-vapor-analyzer-ordering-information.html

6)      http://www.youtube.com/watch?v=81FHVrXgzuA

7)      http://journals.ametsoc.org/doi/abs/10.1175/2008BAMS2370.1

8)      http://rsclive3.rsc.org/images/Arrhenius1896_tcm18-173546.pdf

9)      http://www.sciencemag.org/content/334/6061/1385.abstract?sid=a7ab01bf-f2e5-413d-bbf0-c2c5ac2362df

10)   http://thinkprogress.org/climate/2012/09/18/868661/mit-study-for-every-1-degree-c-rise-in-temperature-tropical-regions-will-see-10-percent-heavier-rainfall-extremes/

11)   http://onlinelibrary.wiley.com/doi/10.1002/qj.49706427503/abstract;jsessionid=C31A6DD4ACEA76E0FEC5B0FCDB6F3391.f02t01

12)   http://www.youtube.com/watch?v=T6YyvdYPrhY

13)   http://earthobservatory.nasa.gov/Features/GlobalWarming/page4.php

14)   http://www.concordy.com/section/article/on-global-warming-cherry-picking-and-publishing/

15)   http://iopscience.iop.org/1748-9326/6/4/044022

16)   http://data.giss.nasa.gov/gistemp/graphs_v3/

17)   https://www.skepticalscience.com/global-cooling-january-2007-to-january-2008.htm

Things unsaid in the “Bill Nye vs Ken Ham Debate”

As many of you may know, on February 4, Bill Nye debated creationist Ken Ham at his Creation Museum. For anyone with an open mind, I think Bill Nye was clearly the winner. He delivered a few very good points, particularly his calculation that if all of the species existing today were descended from just 7,000 “kinds” on Noah’s Ark 4,000 years ago, an average of 11 new species would have had to appear every day over the past 4,000 years. This is even more problematic if you consider every species being descended from just one breeding pair 4,000 years ago; if that were the case, there would be virtually no genetic diversity among animals today. Also, the bit about kangaroos was fun.
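Nye’s arithmetic is easy to check. Assuming roughly 16 million species alive today (a commonly cited ballpark figure, and an assumption on my part rather than a number from the debate):

```python
kinds_on_ark = 7_000
species_today = 16_000_000       # rough modern estimate (assumption)
years, days_per_year = 4_000, 365

new_per_day = (species_today - kinds_on_ark) / (years * days_per_year)
print(round(new_per_day, 1))     # roughly 11 brand-new species per day
```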

As for whether the debate was a good idea, that for me depended on how well Bill did. As long as the facts are presented relatively well, I am all for bringing publicity to the debate between science and creationism. With over a third of Americans misled, I don’t think we can afford to be silent. It is time to go on the offensive, something that the creationists have been doing for decades. This gives them the advantage of knowing scientific arguments better than many scientists know creationist arguments, a disadvantage even Bill Nye suffered from a few days ago. We need a public that is inoculated with enough knowledge of creationism to know why it’s wrong.

Ken Ham, for the most part, brought only religion and philosophy to the table, along with a few token scientists. There was little actual scientific argument. He did, however, get away with a few spurious claims, which he should not have been allowed to do. Bill Nye is not a biologist, and, probably more importantly, he hasn’t spoken with enough creationists to know their propaganda techniques. Considering this, as well as the difficulty of a live debate without access to any source material, he can be forgiven for missing a few opportunities. I will now try to address these.

Above all, I feel Ham scored with his arguments about radiometric dating. It’s no surprise that Bill had trouble with these, because these dating techniques are very complex. Because of this, my section on radiometric dating has ended up being a lot longer than I intended; in fact, it takes up the majority of this post. Feel free to skip ahead. First, for a little background: there are many types of radiometric dating, including uranium-lead, samarium-neodymium, rubidium-strontium, potassium-argon, carbon dating, and at least a dozen others. As the talkorigins link below shows, these totally different methods generally produce very similar results when applied correctly. All of them involve the decay of a parent element into a daughter element with a specific half-life that does NOT change under any conditions thus far produced in a laboratory (and not for lack of trying). After one half-life, half of the parent element has decayed into the daughter element. After two half-lives, three fourths of the parent element has decayed, and so on. In other words, the fraction of parent element remaining is 0.5^N, where N is the number of half-lives that have elapsed.
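The decay law described above is simple enough to write down. A small sketch of the 0.5^N relationship and its inverse, which is how an age is actually read off from a measured parent fraction:

```python
import math

def fraction_left(age, half_life):
    """Fraction of the parent isotope remaining after 'age' years."""
    return 0.5 ** (age / half_life)

def age_from_fraction(fraction, half_life):
    """Invert the decay law: date a sample from its parent fraction."""
    return half_life * math.log2(1.0 / fraction)

print(fraction_left(11_400, 5_700))           # two C-14 half-lives -> 0.25
print(round(age_from_fraction(0.25, 5_700)))  # and back again -> 11400
```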

Ken Ham mentions two examples of radiometric dating apparently giving false dates. The first of these is a piece of fossilized wood encased in minerals that were dated at millions of years old by Potassium-Argon dating. The fossilized wood itself, however, was carbon dated at 45,000 years old. How can this be?

Well, one thing that creationists frequently take advantage of is the fact that each radiometric dating method has an optimum time range: the range in which both parent and daughter elements exist in large enough quantities to measure easily. If only 0.0001% of the parent element is left, for instance, it’s practically impossible to detect. Secondly, there is always a chance of minor “background noise,” a very small amount of a certain element that causes minor contamination. It’s usually a very small number, but when there’s only a small amount of parent or daughter element, these small errors can greatly skew the results. There’s also such a thing as major contamination, where significant amounts of one of the elements are accidentally introduced to the sample or the measuring apparatus.

For example, carbon dating uses C-14, which has a half-life of about 5,700 years; about half of the initial quantity decays in that time. Now, imagine that you have an organic sample containing 100 micrograms of C-14 initially, and imagine that the background noise or natural contamination amounts to about 1 microgram. Such contamination can exist for a number of reasons: residue from previous samples, radiocarbon generated by nearby uranium decay, or chemical processing of the sample.

If we take a sample that is 11,400 years old (two half-lives), it should now have 25 micrograms of C-14, plus 1 microgram of contamination, for a total of 26 micrograms. In this scenario, the contamination will cause the calculated date to come out at about 11,080 years instead of 11,400. Not bad! As we can see, minor contamination doesn’t do much damage when a sample is only a couple of half-lives old, because the contamination is very small compared to the amounts of parent and daughter element being measured. This is significant, because a couple of C-14 half-lives (11,400 years) is still considerably older than young earth creationists claim the Earth is. Carbon dating has no problem working in this time range, even with background noise. This phenomenon is explained in greater detail in the second link below. Carbon dating has even been cross-referenced with dendrochronology (the measurement of time using tree rings); together, the two methods can jointly measure back 11,000 years. Guess what? They agree pretty closely. So even with contamination and background noise, there’s little chance of carbon dating being wrong only 11,000 years back.

Now let’s look at Ken Ham’s cherry-picked carbon dating scenario. The rocks being dated are millions of years old. This is a problem, because virtually all C-14 has decayed out of a sample after about 50,000 years, so if you try to date anything older, you’ll still just get a result of about 50,000 years. In a perfect world, this would mean 0 of the original 100 micrograms of C-14 are left. However, don’t forget the background contamination, which let’s again say is 1 microgram. This will make it appear as though 1% of the C-14 still hasn’t decayed, giving a fictitious age of about 37,900 years for something that is in fact millions of years old. Notice how much more of a difference a 1% contamination makes in this time range. Fortunately, such high levels of contamination are not the norm.
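Both of the scenarios above (the two-half-life sample and the vastly older one) can be run through the same little formula. The fixed 1% contamination figure is the one assumed in the text, and the half-life is rounded to 5,700 years as above:

```python
import math

HALF_LIFE = 5_700.0  # approximate C-14 half-life in years

def apparent_age(true_age, contamination=0.01):
    """Apparent C-14 age when contamination equal to 1% of the
    original C-14 is added to whatever genuinely remains."""
    remaining = 0.5 ** (true_age / HALF_LIFE) + contamination
    return HALF_LIFE * math.log2(1.0 / remaining)

print(round(apparent_age(11_400)))     # young sample: error of a few percent
print(round(apparent_age(5_000_000)))  # ancient sample: stuck near ~38,000
```

No matter how old the sample really is, the contamination floor pins the answer near the method’s limit, which is exactly why a rock that is millions of years old can return a carbon date of mere tens of thousands of years.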

The 45,000-year date given by Ken Ham seems to have been caused by an even smaller amount of contamination, equivalent to about 0.4% of the mass of the original C-14 in the sample. This small amount is easily explainable, even for something that is millions of years old.

While 0.4% modern C-14 is clearly no problem except for very old samples, it is also clearly an anomaly. So where did it come from? One thing Ken Ham mentioned that caught my ear is that the wood was found by men who were digging in hopes of establishing a site for a coal mine. If you go to his website, you’ll see he also mentions veins of coal in the area. This made me groan, because, to quote the second link below:

“Coal is notorious for contamination. Uranium is often found in or near coal, releasing neutrons that generate radiocarbon in the coal from nitrogen. Mobile humic acids are almost always present and can transport more recent carbon to the coal. Microbial growth can incorporate modern carbon from groundwater while in situ and from air after sample collection. Coal can easily adsorb atmospheric CO2 after collection.”

So this is an environment in which we might expect a slightly higher-than-average C-14 contamination, and sure enough, that’s just what we have. Any geologist doing carbon dating in a coal mine would probably adjust the age limit downward slightly from the usual 50,000 years. So 45,000 years is about what you would expect for a sample that has lost all of its original C-14.

To put this all more simply, this is comparable to an hourglass where a few grains of sand occasionally stick to the top. Let’s say it has enough sand for an hour. If you turn the hourglass over and leave the room for 30 minutes, it will very reliably tell you that you have been gone for 30 minutes. However, if you leave for a week, it is possible to misinterpret the result: you may see a few grains sticking to the top and conclude that you have been gone for less than an hour. Someone more familiar with the hourglass’s limitations, however, will know that the last few grains don’t always fall, and therefore that you were likely gone for over an hour.

Ken Ham’s next example, of radiometric dating not working on freshly cooled rock from Mount St. Helens, is just the opposite. To extend the analogy, Ken Ham is showing us an hourglass where a few grains of sand are sometimes already on the bottom before any sand has fallen. This date involves a different method, potassium-argon dating, which can measure much further back. Whereas C-14 has a half-life of just 5,700 years, potassium-40 has a half-life of about 1,250,000,000 years, so each of these grains of sand is worth hundreds of thousands of years. This case also involves contamination (this time of the daughter element, not the parent element), of about 0.055 percent of the original amount of parent element. Not terribly impressive. For a brand-new sample, that may be a significant error, but for a sample a billion years old, it would barely matter. In this case too, the conditions were less than ideal: neither the equipment nor the dating method was intended for samples that young. The author of the paper Ham refers to also mentioned small inclusions of contaminants called xenoliths. While he tried to remove these, they cannot be ruled out as a factor, especially considering the small amounts of contamination involved, which could easily be missed.

The other arguments made by Ken Ham are much less formidable. I *think* I can deal with them without talking so much. Hopefully. 

Another of his arguments involves the evolution of E. coli to internalize and consume citrate in the presence of oxygen. He says that this is not a new trait, just a *switch* that gets turned on to activate preexisting information. Surprisingly, this is fairly accurate. E. coli already had a gene for citrate metabolism, but was unable to import citrate in the presence of oxygen. A duplication mutation placed a new copy of this gene under a new promoter, allowing it to be activated under different environmental conditions. This is not a novel gene, just a gene rearrangement. That’s still evolution, of course: many of the genetic differences between chimps and humans have more to do with rearranged sequences than with sequences that are actually unique to either species. About 95% of the sequence is shared; some of it is just in a different place. So mechanisms such as this are very important to evolution. However, if Ken Ham means to imply that new genes do not arise, he’s wrong, and I think he knows better. Nylonase is an excellent example. Before 1935, nylon did not exist, because it is man-made, so obviously there were no nylon-eating bacteria. The enzyme nylonase, which helps bacteria break down nylon, is absolutely a trait that did not exist until very recently, and is proof that mutation can create a totally new protein product. It may resemble an earlier enzyme, but it has gained a new function while losing the previous one. You can read about it in my third link below.

Ken Ham has also implied that there are large numbers of creationist scientists. In fact, the number is very low. You can find some, sure, just as you can find believers in any fringe science, but about 97% of American scientists accept evolution, as shown by my fourth link. This number is higher outside the U.S., and also higher among geologists and biologists. I imagine it’s also higher among astronomers, considering that there are visible stars we shouldn’t be able to see if their light had only been traveling for 6,000 years… although somehow Ken Ham found one who is a young-earther. On that note, even Bill Nye said one thing I thought was iffy. He seemed to imply that the distances to stars more than 6,000 light years away are measured by parallax, but parallax only works for relatively nearby stars; from what I understand, other rungs of the distance ladder, such as standard candles, take over at greater distances. That said, many of the distance-measuring methods (listed in the fifth link below) overlap, and have been tested against one another to make sure they obtain similar results for the same star. Clearly, there are multiple lines of evidence for stars more than 6,000 light years away. All in all, it was a good point that Bill raised, even if he skipped some details.

I may watch the video again and come up with some other things, or comment on other young earth creationist distortions in the future. Even if I’m just putting my thoughts down, I have to say, this has been fun.

1) http://www.talkorigins.org/faqs/faq-age-of-earth.html

2) http://www.asa3.org/ASA/education/origins/carbon-kb.htm

3) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC345072/

4) http://www.people-press.org/2009/07/09/section-5-evolution-climate-change-and-other-issues/

5) http://www.astronomynotes.com/galaxy/s16.htm