In my last post, I discussed the debate surrounding global warming and climate science. This was a natural segue into a discussion on energy and the various available energy generation methods. I’ve decided to split this discussion into three relevant categories: fossil fuels, nuclear, and renewables. For some time now, I have been researching the costs and benefits of each. As with all of these posts, I must admit that I’m not the most qualified expert to speak on this subject. However, it’s my experience that it’s very difficult to find a trustworthy expert or source in a politicized debate such as this. Even many people who should be trustworthy scientific experts have shown me evidence of their bias, and disagree very strongly among themselves. There is no single, standard “expert opinion” on some subjects, and this is one of them. I have tried to do my homework, which makes me better prepared than most in these debates. I’ve done my best to filter out the half-truths and misconceptions, and hopefully streamline any research you yourself may do in the future.
Now, with all of that said, this post deals with far more uncertainties than previous posts. Reviewing the world’s energy options is a very ambitious task. Debunking junk science like creationism is like shooting fish in a barrel by comparison. The answer to pseudoscience is an easy “NO”, whereas there is no easy answer to the energy question at this time. With that disclaimer, I will get started.
Since fossil fuels are the current main method of power generation in most countries, and the source of carbon emissions which were the subject of the previous post, I’ll start with them. The fossil fuel section may be shorter than others, simply because I had an entire post dedicated to global warming earlier.
If you look at the levelized cost of energy production methods, as predicted for 2018 (1), you’ll notice that the cheapest deployable energy production methods involve fossil fuels of some sort. You may notice that onshore wind power appears to be cheaper than many of the fossil fuel options, including conventional coal. As we will see when I get to renewables, this is a little misleading. Whatever else you may say about these energy sources, it’s hard to get around the fact that fossil fuels are an economic powerhouse in today’s world, and it’s doubtful that any other method will become more economically competitive in the near future.
I’ve spoken at length about climate change, but another often underappreciated cost of fossil fuels involves air pollution. A 2013 MIT study attributes about 200,000 deaths per year in the U.S. alone to air pollution. The leading cause was transportation, with 53,000 deaths, but power generation was a close second with 52,000. One 2010 study from the WHO calculates that fine particles from air pollution cause about 223,000 deaths from lung cancer worldwide. (2) The idea that air pollution increases certain disease risks is nothing new, and has long been an obvious explanation for higher frequencies of lung cancer in urban areas.
Not only do coal plants release chemical carcinogens, they also typically release more radioactive material than nuclear power plants. Coal often contains uranium and thorium, which are released into the environment in the form of coal ash. In 1982 alone, coal plants in the U.S. released an estimated 801 tons of uranium and 1,971 tons of thorium into the environment. NCRP reports have projected that the population-effective dose from radiation is about 100 times greater from coal plants than from nuclear power plants. (3) As with nuclear waste, many of the radionuclides in coal ash remain radioactive for thousands of years. Additionally, the technology required to refine enough uranium from coal ash to construct a bomb is available to most countries.
Coal is currently the predominant fossil fuel in our energy production, which no doubt accounts for a large share of the deaths caused by air pollution. Natural gas, by one comparison, has about a tenth of coal’s projected death toll (4). This is still a significant number of deaths, but it could be a justification for switching from coal to natural gas.
I hope I’ve already established the role of fossil fuels in climate change in my previous post. Some have attempted to offer fossil-fuel alternatives that would mitigate CO2 release, such as natural gas. One figure calculates that natural gas releases about 602 tCO2-eq/GWh, whereas coal releases about 1,045 tCO2-eq/GWh.(4) Regardless, the laws of chemistry demand that there will always be significant CO2 emissions from fossil fuels.
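A quick sanity check of these figures is easy to do. The intensity numbers come from the citation above; the plant size and running hours in this sketch are made-up illustrations, not data from any source:

```python
# Rough comparison of the cited CO2 intensity figures (tCO2-eq/GWh).
gas = 602    # natural gas, tCO2-eq per GWh (cited figure)
coal = 1045  # coal, tCO2-eq per GWh (cited figure)

ratio = gas / coal
print(f"Natural gas emits about {ratio:.0%} as much CO2-eq as coal per GWh")

# Hypothetical example: a 1 GW plant running ~8,000 hours/year
saved = (coal - gas) * 8000 / 1e6  # million tonnes CO2-eq per year
print(f"Switching it from coal to gas saves roughly {saved:.1f} million tCO2-eq/yr")
```

So gas cuts emissions per unit of energy by a bit over 40%, a real but bounded improvement.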
While natural gas produces fewer harmful emissions, it is still a fossil fuel, and is therefore no solution for global warming. It has a number of liabilities that lead to high greenhouse gas emissions, including mining, transport, and methane leaks. Methane leaks are of particular concern, because methane is many times more potent than CO2 as a greenhouse gas. Furthermore, one of the air pollutants that makes coal more harmful, sulfur dioxide, actually counteracts the greenhouse effect. This means that CO2 released by natural gas would have a larger net warming effect than an equal amount of CO2 produced by coal. (5) This casts doubt on any suggested climatological advantage to switching to a more natural-gas-heavy economy.
At the end of the day, it’s unlikely that fossil fuels will vanish any time soon. Coal is possibly the dirtiest energy around, but even the cleanest fossil fuels contribute to thousands of deaths. I’d like to make one final note: the chances that any of us will die because of air pollution are extremely small, and I don’t advise worrying about it on an individual level. However, even very small increases in risk can translate into very real deaths when multiplied across the entire world population.
That brings us to nuclear power, which has historically been the alternative. If you look at the levelized cost (1) cited below, you’ll see that it is generally more expensive than fossil fuels. This is partially due to the large initial investment required to build a new plant, which is also largely why we’re still using many of the older models rather than spending money on new infrastructure. That said, it has been proven on a large scale as a viable way of supporting most of a first-world nation’s energy needs. It also produces negligible greenhouse gas emissions. So let’s take a look.
The process of nuclear fission is pretty well known: you take an atom with a large, unstable nucleus, such as Uranium-235 or Plutonium-239, and split it. Splitting a uranium or plutonium atom results in two smaller atoms, called fission products. These commonly include Cesium-137 and Iodine-131, which are highly radiotoxic and make up one component of nuclear waste. The other component results from a different process: neutron absorption. Instead of splitting, components of the nuclear fuel will sometimes absorb a neutron and become a slightly heavier element. This produces what is referred to as actinide waste, which includes Plutonium-239, itself extremely dangerous. The big difference, however, is in half-lives. The half-life of Cesium-137 is about 30 years, whereas the half-life of Plutonium-239 is about 24,000 years. So the truly long-lived component of nuclear waste generally comes from actinides, not fission products. Most current nuclear reactors are light-water, thermal-neutron reactors. They fission U-235, and not much else. This results in lots of power with no CO2 emissions or air pollution, but also a build-up of actinide waste. During a meltdown, radioactive materials, most of them fission products, can be released into the environment. As I will discuss later, even these old reactors may not be as terrible as they’re made out to be.
These types of reactor might just be the tip of the iceberg, though. Actinide waste like Pu-239 can be burned for additional energy in a fast-neutron reactor, leaving only the shorter-lived fission products. Some models produce more plutonium than they burn, and these are called “breeders.” Even these can be beneficial, because they can burn through the majority of a spent fuel rod, but ultimately they produce a lot of long-lived actinide material. Not all fast-neutron designs are breeders, though; some are burners. These are made to burn as much actinide material as possible, leaving only fission products. The resulting waste is toxic for less than 300 years, rather than the typical tens or hundreds of thousands of years for nuclear waste. (6) Since the decay is exponential, most of that reduction takes place in the first hundred years, so you could see the majority of it decay in a human lifetime. Granted, 300 years is still a very long time, but this waste could be produced by consuming already existing waste that’s even longer-lived. So at the end of a fast-neutron burner reactor’s life, we’d have less waste than we started with, and we would have produced lots of energy in the process. That’s a win-win, no matter how environmentally conscious you are. Most crucially, we have enough waste to power the world for many generations, so you could argue that the waste issue is essentially solved by fast-neutron reactors, at least to the extent that we wouldn’t be producing any more of it in the foreseeable future.
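The difference half-lives make can be shown concretely. Using the commonly cited values (about 30 years for Cs-137, about 24,000 years for Pu-239), the fraction of an isotope remaining after t years is (1/2)^(t / half-life):

```python
def fraction_remaining(t_years, half_life_years):
    """Fraction of a radionuclide remaining after t years of exponential decay."""
    return 0.5 ** (t_years / half_life_years)

# Fission product (Cs-137, ~30 y) vs. actinide (Pu-239, ~24,000 y)
for t in (30, 100, 300):
    cs = fraction_remaining(t, 30)
    pu = fraction_remaining(t, 24_000)
    print(f"after {t:>3} years: Cs-137 {cs:6.1%} left, Pu-239 {pu:6.1%} left")
```

After a single century, roughly 90% of the Cs-137 is gone; after 300 years, less than 0.1% remains, while the Pu-239 has barely decayed at all. That is the whole case for burning actinides rather than storing them.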
Unfortunately, there seems to be some disagreement over how economically feasible it is to dispose of our nuclear waste in this manner. One paper demoralizingly puts the levelized cost of such a plant almost exactly at the break-even point, meaning that you’d basically make the same amount of money as you spent. However, they do say that efficiency would increase if such plants were built on a much larger scale. (7) On the other hand, some have calculated that it is both technically and economically feasible to burn away 98% of our nuclear waste using pyroprocessing and integral fast reactors. (8) Economics aside, this process has been shown to safely remove plutonium and other actinides, and is capable of significantly reducing both lifespan and volume of nuclear waste. (9)
It actually seems to be pretty difficult to get a clear estimate on just how cost-effective an actinide-burning fast reactor would be. This appears to be partially because many of the newer actinide burner designs have not yet had an opportunity to prove or disprove themselves commercially, despite being available for some time. Clearly, someone should give them a shot, but because of the high up-front cost and inherent risk of new technology, no one wants to be the first. That said, part of nuclear’s high cost is due to the fact that it is held to a much higher regulatory standard than fossil fuels. For instance, the fossil fuel industry is not required to store or contain its CO2 or air pollution, which would be the equivalent of what is expected of nuclear power; if it were, it would be far less competitive. Nuclear power also generally does not receive a subsidy rewarding low-CO2 power production, as many renewable sources do.
Some have suggested small modular reactors (SMRs) as a solution to these economic issues. These smaller reactors would lose out slightly on economies of scale, but this could be mitigated by mass production. They would also require a much smaller initial investment and offer a faster payoff. In theory, this would allow more new reactors to be built, since few customers can afford the billions of euros (or dollars), not to mention the construction time, that go into a full-sized nuclear plant. Because of this, SMRs are projected to be competitive in certain markets (14). The first country to try one of these newer third- or fourth-generation plants may be the U.K., which has a huge stockpile of nuclear waste to dispose of. They are currently considering both the PRISM (an SMR) and CANDU models for burning away their actinide waste. (10)
This is the real issue with nuclear power: the long-term question of waste. Even so, the often repeated claim that we are simply stuck with this waste for millennia is probably untrue. That’s not to say I know we’ll have efficient actinide burners up and running within a couple of decades, but I think we can safely say that it won’t take thousands of years.
All this waste raises the question; how harmful is all the radiation that gets released by nuclear power? As we have already seen, coal plants generally release more radioactive material into their surroundings than nuclear power plants. Air pollution from fossil fuels can also be linked to many premature deaths, including cancers. How does nuclear stack up? Amazingly well, is the answer. In fact, the NASA Goddard Institute recently released a paper (4) calculating that 1.8 million lives were saved by nuclear power between 1971 and 2009, simply by reducing the use of fossil fuels. This analysis took infamous meltdowns such as Chernobyl, Three Mile Island, and Fukushima into account, and calculated that only the first had any actual fatalities. They calculated about 47 deaths total from Chernobyl.
They were able to do this in part because they rejected the linear-no-threshold model of radiotoxicity. This model holds that there is a linear relationship between radiation dose and cancer risk. Therefore, even extremely low levels of radiation can have harmful (although proportionally smaller) effects on health. However, there’s little proof that doses of radiation lower than about 100 mSv cause any decrease in life expectancy, or increase in cancer risk.(11) This is significant, considering that the vast majority of Fukushima was exposed to less than 5 mSv of radiation. (12)
The NASA paper points out that even in Chernobyl, very few people were exposed to anything above this threshold. Recent molecular data from a 2012 paper also supports a non-linear relationship between radiation dose and DNA damage. The paper concludes that “…extrapolating risk linearly from high dose as done with the LNT could lead to overestimation of cancer risk at low doses.” (13)
Even if we accept the linear-no-threshold model, then we get something like the WHO estimate of 4,000 deaths from Chernobyl (14) and perhaps some hundreds to come from Fukushima. Even if we add these 4,000 to 5,000 deaths to the total tabulated in the NASA paper, it barely cuts into the 1.8 million that would have died if fossil fuels had been used in place of nuclear. Also, if we accept the linear-no-threshold model, then fatalities from the radiation in coal ash must also be accounted for, so estimates for deaths due to coal must rise as well.
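For the curious, LNT-based estimates like the WHO’s are essentially a single multiplication: a collective dose times a risk coefficient. The sketch below is illustrative only; the ~5% fatal cancers per person-sievert is roughly the ICRP’s coefficient, and the collective dose is an assumed round number chosen to show the shape of the calculation, not a figure from any study:

```python
# Sketch of how a linear-no-threshold (LNT) death estimate is produced.
# Both numbers below are illustrative assumptions, not study data.
risk_per_person_sv = 0.05           # fatal cancers per person-Sv (~ICRP figure)
collective_dose_person_sv = 80_000  # hypothetical population collective dose

excess_deaths = risk_per_person_sv * collective_dose_person_sv
print(f"LNT-projected excess deaths: {excess_deaths:.0f}")  # -> 4000
```

Note the model’s key assumption: risk scales linearly all the way down, so millions of tiny doses sum to the same projected deaths as a few large ones. That is exactly the assumption the threshold critics dispute.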
Overestimation of the danger of radiation is very high in the general public. Fukushima in particular has been blown out of proportion, with some going so far as to claim that it poses a threat to the American west coast. The news has recently been hyping leakages of “tons of radioactive water” into the ocean. This tells us nothing about radioactivity, only the amount of water! Others do slightly better, and mention “trillions” or “quadrillions” of becquerels. According to one source, “Enenews” (15), 80 quadrillion becquerels of Cesium-137 have been released. “The radioactive plume is already here!” they exclaim in the headline.
Well, how much radiation is that exactly? Even if all of it goes into the ocean, it doesn’t sound too impressive. This is far less than the Cesium-137 that was released into the Pacific during nuclear bomb tests, and even accounting for those tests, naturally occurring Uranium-238 and Potassium-40 are the dominant sources of ocean radiation. According to Idaho State University (16), 7,400 EBq of radioactive Potassium-40 already exist in the Pacific ocean naturally (1 EBq = 10^18 becquerels). Like Cesium-137, Potassium-40 is a gamma-ray emitter that is excreted from the body rather than constantly accumulating (17), so becquerels of the two isotopes are roughly comparable in terms of radiotoxicity. If you do the math, this means that the radiotoxic impact of that Cesium-137 is roughly 0.001% that of natural radiation in the Pacific ocean from Potassium-40 alone. Even if you refine the calculation, and adjust for subtle differences between the decay processes of the two isotopes, you won’t change the fact that this is incredibly small compared to natural radiation levels. That’s not even accounting for other natural isotopes in the ocean, like Uranium-238.
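The arithmetic here is simple enough to show outright; the only inputs are the Enenews figure and the Idaho State K-40 inventory cited above:

```python
# Back-of-envelope comparison of the claimed Fukushima Cs-137 release
# with the Pacific's natural K-40 inventory, in raw becquerels.
cs137_release_bq = 80e15   # "80 quadrillion Bq" claimed by Enenews
k40_pacific_bq = 7_400e18  # 7,400 EBq of natural K-40 (1 EBq = 1e18 Bq)

ratio = cs137_release_bq / k40_pacific_bq
print(f"Cs-137 release is {ratio:.6%} of the Pacific's natural K-40 activity")
# -> roughly 0.001%
```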
The Fukushima Daiichi plant was not representative of all nuclear technology by any means. The plant was commissioned 43 years ago, in 1971. Despite this, it was much safer than Chernobyl, which was a Generation I design, moderated by flammable graphite and lacking a containment structure. Just as Generation II was safer than Generation I, Generation III will be safer still. In fact, the Onagawa power plant, built 13 years after the Fukushima Daiichi plant and hit just as hard by the tsunami, weathered the disaster admirably.
Generation IV designs for the even more distant future include something known as the Liquid Fluoride Thorium Reactor, or LFTR. A good primer exists here, although I will summarize (19). This model is powered by thorium, which is three times as abundant as uranium and requires no isotopic enrichment. It utilizes the thorium fuel cycle, meaning that it deliberately allows Th-232 to absorb neutrons and transmute into fissile U-233, which is then used as fuel. In other words, almost all neutron absorption produces fuel instead of actinide waste. This also means no plutonium to be used for weapons. The only fissile material available to terrorists would not be ideal for bomb-making, and would only exist in small amounts at any given time.
After 300 years, the waste of an LFTR is about 10,000 times less toxic than that of a typical LWR after the same amount of time. Also, since 83% of this waste would decay to a stable state in only 10 years, only the remaining 17% would have to be stored away for 300 years, effectively reducing the volume as well as the lifespan. (20) This relatively fast decay means that the demands placed on storage repositories such as Yucca Mountain are much smaller. Like fast breeder reactors, an LFTR could also burn the actinide waste of older models, thus reducing the lifespan of already existing waste. The waste produced would therefore be very small compared to the waste consumed.
The LFTR also promises to be far more resistant to meltdown events than current models. For example, because the fuel exists in solution, a meltdown could be halted more easily. In many proposed designs, a fan beneath the reactor keeps a salt plug frozen; if the plant overheats, or the fan stops due to power loss, the plug melts and all of the fuel drains into a containment vessel below. Moreover, because the fuel salt is solid at room temperature, it would solidify in the event of a leak.
Even the efficiency with which it transfers thermal energy into electricity is theoretically higher than the usual 33% for a nuclear power plant. It is projected to be as high as 45-50%, due to higher temperature conditions.
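The physics behind that projection is the Carnot limit: the maximum possible efficiency of any heat engine rises with the temperature of its hot side. The temperatures below are illustrative assumptions (a conventional light-water reactor runs its steam cycle around 300 °C, while a molten-salt design might reach roughly 700 °C), not figures from any specific design:

```python
# Carnot limit: the theoretical maximum efficiency of a heat engine
# operating between a hot source and a cold sink.
def carnot_limit(t_hot_c, t_cold_c=25):
    t_hot = t_hot_c + 273.15   # convert Celsius to kelvin
    t_cold = t_cold_c + 273.15
    return 1 - t_cold / t_hot

print(f"LWR-like cycle  (~300 C): {carnot_limit(300):.0%} theoretical max")
print(f"LFTR-like cycle (~700 C): {carnot_limit(700):.0%} theoretical max")
```

Real plants achieve only a fraction of the Carnot limit, but the pattern holds: hotter operating temperatures leave room for the jump from ~33% to ~45-50% that the LFTR literature projects.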
While this design is proven in theory, there are still some hurdles to commercialization such as the search for materials that are highly resistant to corrosion and heat. That said, the proof-of-concept model was built decades ago, and there are a number of groups pursuing LFTR research, mostly outside of the United States. China may well beat us to it.(21)
So nuclear is not as bad as you might think, and the future is possibly even brighter. However, even if we could build a fully functioning LFTR tomorrow, why bother with all that if renewables can meet all of our needs? The answer depends on two conditions. The first is that renewables must be superior to other means of production in terms of ecology and human life. The second is that producing most of our energy with renewables must be both possible and economically feasible. As I write this sentence, I am not yet sure what my conclusion will be.
My first question about renewable energy is: how much potential energy is out there? For some renewable sources, there is a clear limit, and even then, only a portion of that energy is actually financially worth tapping. For instance, with hydroelectric, there is only a limited number of water flows we can conceivably dam, and the number actually worth damming is going to be lower still. The levelized cost table (1) shows that hydroelectric is projected to remain one of the most competitive renewable sources of energy. However, the U.S. Department of Energy has projected that the total hydroelectric potential we can feasibly tap in the U.S. is about 170,000 megawatt-years per year (22), roughly 5% of our energy consumption. Yet hydroelectric power currently accounts for over half of the U.S. renewable energy supply, and in other countries, like Sweden and Brazil, it supplies over half of all electricity. It enjoys a lot of support from most environmentalists, which is interesting if you consider that it disrupts ecosystems and kills a lot of people. The Banqiao dam collapse alone, in China during the 1970s, killed about 170,000 people. Even lesser accidents, like the Vajont dam disaster in Italy, killed as many as 2,000. Add it all up, and hydropower is one of the biggest killers after combustibles like coal and biomass. Despite this, it plays an important, albeit limited, role in cheap CO2 reduction.
Wind has far more potential, since it extends throughout the entire atmosphere. The potential wind energy of the Earth is obviously far more than what is required. However, from a practical standpoint, we can’t cover the entire earth in turbines, so we’ll never get all of it. How much power can we get if we take into account the constraints of space and infrastructure?
As stated earlier, if you consult the 2018 projection of levelized cost for various energy production methods (1), some forms of renewable energy appear extremely competitive, and others appear very inefficient. In terms of 2011 $/megawatthour, onshore wind scored 86.6 and hydroelectric 90.3, both lower than conventional coal (100.1). Solar PV, on the other hand, scores 144.3, much worse than coal. However, the link rightly cautions us against comparing so-called “dispatchable” and “non-dispatchable” energy production methods. Dispatchable methods like coal and nuclear produce a fairly reliable, constant stream of energy. Non-dispatchable or intermittent methods produce erratic surges of energy that can be a little unpredictable. Wind, solar, and hydroelectric are all non-dispatchable technologies.
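For readers unfamiliar with how a levelized cost is actually computed, it is essentially discounted lifetime cost divided by discounted lifetime generation. Here is a minimal sketch with entirely made-up plant numbers, not the EIA’s actual inputs:

```python
# Minimal levelized cost of electricity (LCOE) calculation:
# discounted lifetime costs divided by discounted lifetime generation.
def lcoe(capital, annual_cost, annual_mwh, years, discount_rate):
    disc_costs = capital  # up-front capital is paid at year zero
    disc_energy = 0.0
    for t in range(1, years + 1):
        d = (1 + discount_rate) ** t
        disc_costs += annual_cost / d   # O&M + fuel, discounted
        disc_energy += annual_mwh / d   # generation, discounted
    return disc_costs / disc_energy     # $/MWh

# Hypothetical plant: $2B up front, $80M/yr operating cost,
# 7,000,000 MWh/yr output, 30-year life, 7% discount rate.
print(f"${lcoe(2e9, 80e6, 7e6, 30, 0.07):.1f}/MWh")
```

This framing makes the intermittency caveat obvious: LCOE counts every MWh the same, regardless of whether it arrives when anyone wants to buy it.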
To quote a 2010 MIT paper (23):
“Levelized cost comparisons overvalue intermittent generating technologies compared to dispatchable base load generating technologies. They also overvalue wind generating technologies compared to solar generating technologies.”
This is because many non-dispatchable technologies generate lots of energy at periods of low demand, and fail to produce at periods of high demand. From a business perspective, this means that wind and solar businesses must often sell energy when the price is low, and may have little energy to sell when the price is high, resulting in little or no profit. Also, while solar is intermittent, its energy production often coincides better with demand than wind’s (24), causing it to look poorer by comparison than it actually is. So while onshore wind appears much more efficient than coal, and solar appears much less efficient, both of these impressions are probably exaggerated.
At present, Germany is shutting down its nuclear power plants, and attempting to reduce CO2 emissions purely by expanding wind and solar energy. The result is not so pretty. According to the German news source Der Spiegel (25):
“Solar panels and wind turbines at times generate huge amounts of electricity, and sometimes none at all. Depending on the weather and the time of day, the country can face absurd states of energy surplus or deficit.
If there is too much power coming from the grid, wind turbines have to be shut down. Nevertheless, consumers are still paying for the “phantom electricity” the turbines are theoretically generating. Occasionally, Germany has to pay fees to dump already subsidized green energy, creating what experts refer to as “negative electricity prices.”
On the other hand, when the wind suddenly stops blowing, and in particular during the cold season, supply becomes scarce. That’s when heavy oil and coal power plants have to be fired up to close the gap, which is why Germany’s energy producers in 2012 actually released more climate-damaging carbon dioxide into the atmosphere than in 2011.”
It seems that Germany still needs a strong dispatchable energy source to “fill in the gaps” of solar and wind. Since they got rid of nuclear, that means fossil fuels. So the result is even higher CO2 emissions, despite all of the renewables they have running.
So the real question for the future is, can this “surplus energy” be effectively stored, and saved for times when energy is scarce or in high demand? There are certainly storage technologies out there. For instance, the Archimede solar plant in Italy can use molten salts to store concentrated heat energy with a storage capacity of 8 hours. (26) This still raises some questions. Is 8 hours enough storage capacity to meet our needs without any gaps? Also, how hard would it be to implement this sort of technology on a large scale? The calculations that attempt to figure these questions out can get ridiculously complicated.
It seems that battery technology still has a long way to go with wind power. Currently, it is actually more economically sound to “curtail” (shut off) wind turbines when they’re generating more energy than required than it is to store the excess in batteries. One 2013 paper concludes that the cycle-life performance of existing energy storage technology must increase by a factor of 2-20 to overcome this issue. (24) One constraint on the manufacture of these energy storage systems is CO2 emissions, which must be kept low enough to keep wind energy worthwhile. Interestingly, battery storage for solar PV energy fared much better than for wind, yet another indication that the projected levelized cost table (1) gives wind an unfair boost over solar.
There are sources out there that claim wind can be economically competitive and provide more than enough energy to meet our needs in the near future. However, many of these don’t account for everything. The European Environment Agency states up-front that it is not accounting for the cost of “major changes to the grid system” that would have to take place if wind were used to generate more than about 25% of total energy. (26) Clearly, the larger the share of energy that wind provides, the larger the required investment in infrastructure. Neither do they demonstrate that all of this energy could be stored for periods of high demand. However, as they point out, the potential energy is immense. Assuming you could economically store all of it for when it was needed, wind could meet the EU’s projected energy requirements several times over and still be economically competitive.
A lot of renewable energy debates boil down to potential “grid penetration” levels, or what percentage of the grid can be powered by renewable sources. One often-touted figure claims that renewables could power as much as 80% of the world by 2050, but this misses some key details. For one, this was the most optimistic of many predictions made by the IPCC. It also includes biomass in that figure, which is probably the least preferable fossil-fuel alternative. Most crucial of all, it assumes we will be able to decrease our energy demands, when we can be fairly sure demand will increase. So, unsurprisingly, other predictions gave a figure closer to 27%. This is a fair-sized chunk of the power grid, to be sure, but limits clearly exist where it is often claimed they don’t. (27)
These limits may shrink with time. Improvements to renewable energy efficiency have been going on for some time now, with many innovations potentially on the way. One promising development in solar involves layering materials with different light absorption properties to capture a much wider spectrum of sunlight. Currently, we are limited by the inability of any one material to capture the whole spectrum. We use silicon for most photovoltaic cells; its 1.1 eV band gap means that only photons carrying at least that much energy can excite an electron, which limits it to converting around 25% of the solar spectrum. In theory, a multi-junction solar cell using both aluminum gallium arsenide and crystalline silicon could capture as much as 50% of the solar spectrum. (28)
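The band-gap constraint is easy to quantify: a photon can only excite an electron across the gap if its energy exceeds it, which (via E ≈ 1240 eV·nm / λ) translates into a cutoff wavelength for each material. The GaAs comparison below is my own illustrative addition, not from the cited source:

```python
# A material's band gap sets a cutoff wavelength: photons with less
# energy than the gap pass through without exciting an electron.
# Conversion: E (eV) ~= 1240 / wavelength (nm).
def cutoff_wavelength_nm(band_gap_ev):
    return 1240 / band_gap_ev

print(f"Si   (1.10 eV): cutoff ~{cutoff_wavelength_nm(1.1):.0f} nm (near-infrared)")
print(f"GaAs (1.42 eV): cutoff ~{cutoff_wavelength_nm(1.42):.0f} nm")
```

Stacking materials with different gaps lets the wide-gap layer harvest high-energy photons efficiently while the narrow-gap layer catches what passes through, which is the whole idea behind multi-junction cells.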
While efficiency is expected to increase, other factors may hinder the growth of renewables. One of these is an inadequate supply of rare earth metals, which are used for batteries in wind turbines and solar panels. With the current green energy boom, demand is expected to outstrip supply. (29) Additionally, renewables demand a lot of raw materials like iron, aluminum, and copper, in many cases over ten times the amount required by other energy systems. With rising demand will come rising costs, which will encourage the partial use of other, less metal-intensive power sources. One estimate (30) says that boosting world power generation to 40% renewables by the middle of the century would require about 200% of global annual copper production (as of 2011) and about 150% of annual global aluminum production. Copper is most problematic, due to declining ore quality. This 40% mark is probably feasible, and would benefit the environment enormously. However, the paper makes the point that the demands are “manageable, but not negligible compared to the current production rates for these materials.”
Future research into renewable energy is clearly a worthwhile pursuit, and whatever limits solar and wind may have, they can clearly be expanded beyond where they are now.
Fossil fuels will always contribute to climate change, and their emissions have a measurable impact on life expectancy. A switch to natural gas might be good for human health, but does little for the Earth. However, economics make the world go round, and while we may significantly reduce CO2 emissions, we won’t stop producing them in the near future.
In the short term, nuclear is far safer than is generally feared. In the long term, waste build-up could eventually be a problem. However, technologies capable of addressing this issue already exist. For the time being, expansion of nuclear power appears to be a crucial element of CO2 reduction, simply because some low-CO2 dispatchable baseload is necessary to supplement non-dispatchable resources such as wind. Additionally, even if renewable energy becomes far more efficient, some of the more advanced models like PRISM may be worth building simply to eliminate existing nuclear waste. If we fail to fund nuclear research, some even more promising designs like the LFTR might also elude us.
Renewables may also become very promising, and can generate lots of cheap energy at certain times. Any attempt to reduce CO2 emissions must take advantage of this. In the future, it may be possible to economically improve the batteries and grids that renewables pump their energy into. It is also possible that renewables may eventually be able to take over most of our energy production. However, most plans to accomplish this involve very distant deadlines like 2050 (31). Even ignoring the fact that distant deadlines are always tentative, history still shows that transitioning away from fossil fuels via nuclear can be much faster. France accomplished this transition in about 15 years. (32) So even if solar, wind, and other renewable sources will eventually render nuclear obsolete in 2050, we can slash CO2 emissions even earlier by adding some nuclear into the mix. For the time being, that leaves us with a dual approach to cutting CO2, using both renewable and nuclear energy. For the long term, great uncertainty exists- too much to pin all of our hopes on just one type of technology, so it makes sense to research more advanced future energy options for all alternatives to fossil fuels.
9) Shropshire, D. Economic viability of small to medium-sized reactors deployed in future European energy markets. Progress in Nuclear Energy, 2011.