Author Archives: stewart henderson

About stewart henderson

Stewart, aka Luigi Funesti Sordido of the USSR, the Urbane Society for Sceptical Romantics. A dilettante, basically.

Animal-friendly meat

some uncooked ‘Impossible’ patties, from plant-based ingredients, with various side dishes. Photographed by Maggie Curson Jurow

I’m not a vegetarian, and my feelings on the issue of meat-eating range from extreme guilt to resentment to irritation, but perhaps my views are of little account:

Some 41% of all arable land… is used to grow grain for livestock, while one-third of our fresh water consumption goes to meat production. Add in the use of chemicals and fuel, and the meat we consume represents one of the largest contributors to carbon, pesticides and pollutants on the planet.

So writes ethical philosopher Laurie Zoloth in the most recent issue of Cosmos. And of course we must add to that the massive issue of animal exploitation and suffering. But happily, Zoloth’s article is all about promoting a possible solution, which isn’t about convincing 98% of the world’s population, the meat-eaters, to change their ways.

Synthetic meat. It’s been talked about, and produced in small quantities, for a few years now, and I’ve been highly sceptical from the get-go, especially as the first samples were phenomenally expensive and disappointing taste-wise, according to pundits. But Zoloth has introduced me to a new hero in the field, the high-flying biochemist and activist Pat Brown, formerly of Stanford University. Brown is well aware that there are, unfortunately, too many people like me who just can’t wean themselves from meat in spite of the disastrous (but still psychologically remote) consequences of our behaviour. So he and a team of some 80 scientists are committing themselves to creating palatable meat from entirely plant-based sources, thus transforming our agricultural world.

Food is, of course, chemistry and nothing but. Top-class chefs may disagree, but really they, like expert cocktail mixers, are just top-class chemical manipulators. Even so, most producers of synthetic meat (aka cultured meat, clean meat, in vitro meat) have started with cells from the animals whose meat they’re trying to synthesise. A company called Memphis Meats has already produced clean chicken and duck from cultured cells of these birds, which have apparently passed taste tests. However, Pat Brown’s new company, Impossible Foods, is going further with a plant-based burger based essentially on the not-so-secret molecular ingredient, haem. Haem is a molecule found in blood, a constituent of the protein haemoglobin, but it’s also found in soybeans, and that’s where Brown’s team gets it from, at least at the genetic level. With a lot of nifty chemical engineering, they’ve created a burger that sizzles, browns and oozes fat, and they’ve got some billionaire investors such as Bill Gates and Vinod Khosla onside. The so-called Impossible Burger follows up the Beyond Burger, from another company called Beyond Meat, also backed by Gates, but it looks like the Impossible Burger has more potential.

Haem (or heme in American) is what makes our blood red. It contains iron and helps in oxygenating the blood. Abundant in muscle tissue, it’s what gives raw meat its pink colour. It also contributes much to the taste of cooked meat. The ‘Impossible’ team transferred the soybean gene encoding the haem protein into yeast, thus ensuring an abundant supply. The associated massive cost reduction is key to Brown’s biosphere-saving ambitions.

Of course, it’s not just cost that will capture the market. Taste, mouthfeel, aroma, je ne sais quoi, so much goes into the meat-munching experience, and the team has apparently worked hard to get it all in there, and will no doubt be willing to tweak well into the future, considering what’s at steak (sorry). If they succeed, it will be something of a slap in the face, perhaps, to those romantics among us who want to believe that food is more than merely chemical.

Yet I fear that the biggest challenge, as with renewable energy, will be to win over, or overcome, those invested in and running the current ‘technology’. That’s the world of people and systems that raise cows, pigs, chooks, and all the rest, for slaughter. It’s an open and shut case from an environmental and ethical perspective, but that doesn’t mean people won’t fight tooth and nail to preserve their bloody businesses and lifestyles. It’s not as if they’re going to be rehired by biotech companies. And as to the religious among us, with their halal and kosher conceptions, that’ll be another headache, but not for me. It will certainly be another scientific stab at the heart of this pre-scientific way of looking at the world and will add to the ever-widening divide between pre-scientific and scientific cultures, with not very foreseeable consequences, but probably not happy ones.

But all that’s still well in the future. It’s unlikely that these new products will hit the market for a few years yet, and it’s likely the inroads will be small at first, in spite of the admirable ambitions of people like Pat Brown and his supporters. In any case I’ll be watching developments with great interest, and hoping to get a not-too-costly taste myself some time. Such fun it is to be alive in these days, but to be young, that would be like heaven…

 

solar technology keeps moving toward the centre

thin-film solar modules - a more flexible solution

I’ve been hearing that the costs of solar installations are coming down, making the take-up easier and faster, but I haven’t spent the time to research exactly why this is happening, presumably world-wide. So now’s the time to do so. I thought I’d start with something I heard recently on a podcast about revolutionary thin solar cells…

Thin-film solar cells have been around for a while now, and they’re described well here. They’re only one micron thick, compared to traditional 350-micron-thick silicon-wafer cells, and they utilise semiconductor materials, usually silicon-based, which are highly efficient absorbers of solar energy. However, according to Wikipedia, this new technology isn’t doing so well in the market-place, with only about 7% of market share, and not rising, though with crystalline silicon being replaced more and more by other materials (such as cadmium telluride, copper indium gallium selenide and amorphous silicon) there’s still hope for its future.

This technology was first utilised on a small scale in pocket calculators quite some time ago, but it has been difficult to scale up to the level of large solar panels. There are problems with both stability and toxicity – cadmium, for example, is a poison that can accumulate in the food chain like mercury. It doesn’t look as though this or any other new cell technology is what’s currently reducing costs or increasing efficiency, though that may change in the future, with graphene looking like a promising material.

So let’s return to the question of why solar has suddenly become much cheaper and is apparently set to get cheaper still. Large manufacturing investment and economies of scale seem to be a major part of the story. This means that the costs of solar modules now make up less than half of the total cost of what Ramez Naam calls ‘complete solar deployments at the utility scale’, and these other costs are also coming down as the industry ‘scales’. His article in Renew Economy from August last year makes projections based on the idea that ‘doubling of cumulative capacity tends to reduce prices by a predictable rate’, though he’s also prepared to heavily qualify such projections based on a multitude of possibly limiting factors. If all goes well, solar electricity costs will become less than half the cost of new coal or natural gas in a generation – without factoring in the climate costs of continuing fossil fuel usage. The extraordinary rise in solar energy usage in China, set to continue well into the future, bolsters the prediction, and India is also keen to increase usage, despite problems with domestic manufacturing and trade rules. Most panels are being imported from China and the USA, while domestic production struggles.
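Naam’s rule of thumb – that each doubling of cumulative capacity cuts prices by a fairly predictable percentage – is easy to sketch numerically. The 20% learning rate below is a hypothetical placeholder for illustration, not a figure from his article:

```python
import math

def price_after_growth(p0, capacity_ratio, learning_rate=0.2):
    """Projected price once cumulative capacity has grown by capacity_ratio,
    assuming each doubling cuts the price by learning_rate (hypothetical 20%)."""
    doublings = math.log2(capacity_ratio)  # how many doublings that growth represents
    return p0 * (1 - learning_rate) ** doublings

# an 8x growth in cumulative capacity is 3 doublings: $100 falls to about $51
print(price_after_growth(100, 8))
```

The ‘predictable rate’ is the whole game here: small changes in the assumed learning rate compound into very different prices a few doublings out, which is presumably why Naam hedges his own projections so heavily.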

It’s interesting that solar and other renewable technologies are now being spruiked as mainstream by mainstream and even conservative sources, such as Fortune and oilprice.com. Fortune’s article also usefully points out how the cost of different power sources to the consumer is heavily dependent on government policies relating to fossil fuels and their alternatives, as well as to the natural assets of particular regions. Even so, it’s clear that the cost of fossil-fuel based electricity is rising everywhere while wind and solar electricity costs are falling, creating an increasingly clear-cut scenario for governments worldwide to deal with. Some governments are obviously facing it more squarely than others.

US residential solar costs. Beyond 2013, these are estimates, but already out of date it seems

 

buildings that reduce energy consumption

average energy use in an Australian home, 2011

The energy solutions world has obviously been given a big boost by the decisions in Paris recently, so all the more reason to analyse the success of changes to building designs, and how they can lead to lower emissions worldwide in the future. As I wrote last year, Australia has been consuming less electricity of late, a turnaround which is a historical first, and the main cause has been energy-efficient new buildings and appliances, regulated by government here, no doubt in conformity with other western regulatory systems. So what exactly have these changes been, and how far can we go in creating energy-efficient buildings?

In Australia, all new buildings must comply with the Building Code of Australia, which prescribes national energy efficiency requirements, and here in South Australia the government has a comprehensive website outlining those requirements as well as, presumably, state additions. New buildings must achieve a six-star rating, though concessions can be made in some circumstances. In South Australia, energy efficiency standards are tied to three distinct climate zones, but the essential particulars are that there should be measures to reduce heating and cooling loads, good all-round thermal insulation, good glazing, sealing and draught-proofing, good ventilation, effective insulation of piping and ductwork, energy-efficient lighting and water heating, and usage of renewable energy such as solar.

SA has developed a strategic plan to improve the energy efficiency of dwellings by 15% by 2020, targeting such items as air-conditioners and water heaters, and in particular the energy efficiency of new buildings, as retro-fitting is often problematic. However, the state government reports success with the energy efficiency of its owned and leased buildings, which had improved by 23.8% in 2014, compared to 2001. They are on target for a 30% improvement by 2030.

But energy efficiency for new housing doesn’t end with the buildings themselves. The Bowden housing development, which is currently being constructed in my neighbourhood, aims to reduce energy consumption and emissions through integrated community living and facilities, green spaces, effective public transport and bikeways, convenient shopping, dining and entertainment, and parks and gardens for relaxation and exercise. It all sounds a bit like paradise, and I must admit that, as I grow older, the final picture is still a long way from taking full shape, but as we move away from oil, upon which we still rely for transport, this kind of integrated community living could prove a major factor in reducing oil consumption. The national broadband system will of course play a role here, with more effective internet communication making it easier to conference nationally and internationally without consuming so much jet fuel. It’s probably fair to say that this is an area of great waste today, with large amounts of greenhouse gases being emitted for largely unnecessary international junkets.

Recently it was announced that the Tesla Powerwall, the new energy storage technology from Elon Musk’s company, will begin local installation in Australia, with the first installations happening this month (February 2016). There are other battery storage systems on offer too, so this is another burgeoning area in which residential and other buildings can be energy-efficient.

So we’re finally becoming smarter about these things, and it’s making measurable inroads into our overall energy consumption. Other strategies for lightening our environmental footprints include embodied energy and cogeneration. These are described on the Urban Ecology Australia website. Embodied energy is:

The energy expended to create and later remove a building can be minimised by constructing it from locally available, natural materials that are both durable and recyclable, and by designing it to be easy to dismantle, with components easy to recover and reuse.

And cogeneration is defined thus:

Cogeneration involves reusing the waste heat from electricity generation, thus consuming less fuel than would be needed to produce the electricity and heat separately.
Small, natural gas powered electricity generators in industrial or residential areas can supply heat for use by factories, office buildings, and household clusters.
The heat can be used for space heating, hot water, and to run absorption chillers for refrigeration and air-conditioning. It can be used in industry for chemical and biological processes.

Clearly there’s no over-arching technological fix for energy reduction, at least not in the offing, but there are a host of smarter solutions with a combinatorial effect. And governments everywhere can, and should, play a useful, example-setting role.

Australia ranks 10th of these 16 countries for energy efficiency. However, we're 16th for energy-efficient transport, so presumably we're further up the ladder for housing

Australia ranks 10th of these 16 countries for energy efficiency. However, we’re 16th for energy-efficient transport, so presumably we’re further up the ladder for housing

we need to support innovative design in renewables

Merkel tells Obama about the size of the problem (against a 'hey, the climate looks effing good to me' background)

Merkel tells Obama about the size of the problem (against a ‘hey, the climate looks effing good to me’ background)

Unfortunately Australia, or more accurately the Australian government, is rapidly reaching pariah status on the world stage with its inaction on carbon reduction and its clear commitment to the future of the fossil fuel industries, particularly coal. In a recent UN conference in Bonn, Peter Woolcott, a former Liberal Party apparatchik who was appointed our UN ambassador in 2010 and our ‘ambassador for the environment’, a new title, in November 2014, was asked some pointed questions regarding Australia’s commitment to renewable energy and combatting climate change. The government’s cuts to the renewable energy target, its abandonment of a price on carbon, and its weak emission reduction targets all came under fire from a number of more powerful nations. Interestingly, at the same time the coal industry, highly favoured by the Abbott government, is engaged in a battle, both here and on the international front, with its major rival, the oil and gas industry, which clearly regards itself as cleaner and greener. Peter Coleman, the CEO of Woodside Petroleum, has mocked ‘clean coal’ and claimed that natural gas is key to combatting climate change, while in Europe oil companies are calling for the phasing out of coal-powered plants in favour of their own products. In the face of this, the Abbott government has created a $5 billion investment fund for northern Australia, based largely on coal.

So, with minimal interest from the current federal government, the move away from fossil fuels, which will be a good thing for a whole variety of reasons, has to be directed by others. Some state governments, such as South Australia, have subsidised alternative forms of energy, particularly wind, and of course the rooftop solar market was kick-started by feed-in tariffs and rebates, since much reduced – and it should be noted that these subsidies have always been dwarfed by those paid to fossil fuel industries.

The current uptake of rooftop solar has understandably slowed but it’s still happening, together with moves away from the traditional grid to ‘distributed generation’. Two of the country’s major energy suppliers, Origin and AGL, are presenting a future based on renewables to their shareholders. Origin has plans to become the nation’s number one provider of rooftop solar. Currently we have about 1.4 million households on rooftop solar, with potential for about five million more.

Meanwhile, thanks in large part to the persuasive powers of German Chancellor Angela Merkel, who’s been a formidable crusader for alternative energy in recent years, Canada and Japan, both with conservative governments and a reluctance to commit to policies to combat global warming, have been dragged into an agreement on emission reductions. So the top-down pressure continues to build, while bottom-up ingenuity, coming from designers and innovators in far-flung parts of the world and shared with greater immediacy than ever before, is providing plenty of inspiration. Let me look at a couple of examples in the field of wind power, taken initially from Diane Ackerman’s dazzling book The human age: the world shaped by us.

Recent remarks by Australia’s Treasurer, Joe Hockey, and then our Prime Minister, Tony Abbott, about the ‘ugliness’ of wind farms, together with the PM’s speculations about their negative health effects, give the impression of being orchestrated. Abbott, whose scientific imbecility can hardly be overstated, is naturally unaware that the National Health and Medical Research Council (NHMRC), the Australian government’s own body for presenting the best evidence-based information on health matters that might impact on the public, released two public papers on wind farms and human health in February 2015. Their conclusion, based on the best available international studies, is that there is no consistent evidence of adverse health effects, though they suggest, understandably, that considering public concerns, more high-quality research needs to be done.

the Windstalk concept

the Windstalk concept

As to the aesthetic issue, one has to wonder whether Hockey and Abbott really prefer the intoxicating beauty of coal-fired power stations. More importantly, are they opposed for aesthetic or other reasons to the very concept of harvesting energy from the wind? Because the now-traditional three blade wind turbine is far from being the only design available. One very unusual design was created by a New York firm, Atelier DNA, for the planned city of Masdar, near Abu Dhabi. It’s called Windstalk, and it’s based on a small forest of carbon fibre stalks each almost 60 metres high, which generate energy when they sway in the wind. They’re quieter than three-blade turbines and they’re less dangerous to birds and bats. As to the energy efficiency and long-term viability of the Windstalk concept, that’s still a matter for debate. There’s an interesting Reddit discussion about it here, where it’s also pointed out that the current technology is in fact very sophisticated in design and unlikely to be replaced except by something with proven superiority in all facets.

a wind wheel, using Ewicon technology

a wind wheel, using Ewicon technology

Still, there are other concepts. The ‘Ewicon’ wind-converter takes harvesting the wind in a radically new direction, with bladeless turbines that produce energy using charged water droplets. The standard wind turbine captures the kinetic energy of the wind and converts it into the mechanical energy of the moving blades, which drives an electric generator. The Ewicon (which stands for electrostatic wind energy converter) is designed to skip the mechanical step and generate electricity directly from wind, through ‘the displacement of charged [water] particles by the wind in the opposite direction of an electrical field’. The UK’s Wired website has more detail. Still at the conceptual stage, the design needs more development to raise efficiency from the current 7% to the 20%-plus level required to be viable, but if ideas like these can find the government and corporate backing they need, the result will be not only greater and faster improvement of existing concepts, but a wider proliferation of innovative design solutions.

LED lighting

colourful solutions

colourful solutions

The most recent Nobel Prize for physics was awarded to the developers of the blue light-emitting diode (LED), not something I’ve known much about until now, but a recent article or two in Cosmos magazine has more than whetted my appetite for the future of LEDs.

This is an amazing technology that I feel I should be availing myself of, and advertising to others. But first I need to get a handle on how the technology works, which I suspect will be no mean feat. Here goes.

The name of Oleg Losev should be better known. This short-lived Russian (he died of starvation during the Siege of Leningrad in 1942 aged 38) is now recognised among the cognoscenti as the father of LEDs. He did some of the world’s first research into semiconductors. Semiconductors are materials whose electrical properties lie between conductors such as copper and insulators such as glass. While working as a radio technician, Losev noticed that when direct current was passed through a point contact junction containing the semiconductor silicon carbide (carborundum), greenish light was given off at the contact point, thus creating a light-emitting diode. It wasn’t the first observation of electroluminescence, but Losev was the first to thoroughly describe and accurately theorise about the phenomenon.

LED technology continues to develop, but now it seems to have reached the stage where it’s not only commercially viable, but has eclipsed all other forms of lighting. I’m more than a bit interested in promoting this form of lighting for the Housing Association I’m living in, especially as the relatively expensive fluoro bulbs in my own home keep blowing. 

In issue 60 of Cosmos, Australia’s premier popular science mag, Alan Finkel waxed lyrical on the coming of age of LED lighting, which he now has installed in his home:

Our LEDs are brighter than the [halogen] lights they replaced, they use less electricity, they mimic the colour of sunlight, they have not visibly aged since they were installed, they work with dimmers, and they are safer in the ceiling cavity because they do not run nearly as hot as the halogens

It’s only quite recently that LED lighting for homes – and everywhere else that bright sunshine-like light comes in handy – has become available on competitive terms, and to understand why we need to return to the history of LED development.

Oleg Losev’s creation of the first LED in 1927 wasn’t capitalised on for decades, but experiments in the fifties in the USA reported infrared emissions from semiconducting materials such as gallium arsenide, gallium antimonide and indium phosphide. By the early sixties the first practical applications of infrared and visible red LEDs emerged. Ten years later, yellow LEDs were invented, which increased the brightness by a factor of 10. In the mid-seventies, optical fibre telecommunications systems were developed by the creation of semiconductor materials adapted to the fibre transmission wavelengths, further enhancing brightness and efficiency. It was around this period that we started to see patterned LEDs in radio and TV displays, and in calculators and watches. At first these were quite faint, and expensive to manufacture, but many breakthroughs in the field have brought down costs while improving efficiency markedly, and the field of high power LEDs has experienced rapid progress, particularly with the development of high-brightness blue by the Nobel prize winning Japanese researchers in the early nineties. The blue LEDs could be coated with a material which converted some of the blue light to other colours, resulting in the most effective white LED yet created. The blue LED was also the last piece of the puzzle for creating RGB (red, green, blue) LEDS, enabling LEDs to produce every visible form of light.

The future for LEDs is so bright that it’s been called the biggest development in lighting since the electric light bulb. The question for the everyday consumer like me, then, is – should I get on board with it now, or should I wait until the technology becomes even cheaper and more energy-efficient?

As we know, the incandescent bulb is going the way of the trilobite. Hugely successful worldwide for decades, it has been outcompeted in recent times by the cheaper and more efficient CFL (compact fluorescent lamp), and its extinction has been assured by state energy laws. But the CFL is now recognised as a stop-gap for the far more versatile and revolutionary technology of LED. LEDs are already beginning to outstrip CFLs in terms of life-span, but up-front costs are high. As this American C-net article has it,

The minimal energy savings you get from going from CFL to LEDs reflects that LED bulbs are only slightly more efficient, when measured on lumens per watt. And, of course, CFLs have come way down in price over the past few years, while LEDs are still at the top of a projected downward cost curve. If you have incandescent bulbs, saving $4 a year with an LED is more compelling, but that’s still a long pay back.
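The payback arithmetic in that quote is easy to make explicit. The $10 price premium below is a hypothetical figure; only the $4-a-year saving comes from the article:

```python
def payback_years(price_premium, annual_saving):
    """Simple payback period: years for annual energy savings to repay
    the extra up-front cost of the new bulb (no discounting)."""
    return price_premium / annual_saving

# hypothetical $10 premium over an incandescent, saving $4 a year
print(payback_years(10, 4))  # → 2.5
```

Of course the bulb’s rated lifespan matters too: a payback period longer than the bulb’s life means you never break even, which is the whole force of the article’s ‘long pay back’ complaint.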

So for many of us it’s a matter of waiting and watching those costs come down to the proportions of our meagre bank balances. Meanwhile, it will be fascinating to see where LED technology takes us. It’s very likely that it will outgrow the old light-socket technology, from what I’ve been reading, but that’s still a way off, and will require a real change of mindset for the average consumer.

Current trends in solar

Barack Obama talking up the solar power industry

I was reading an article recently called ‘how solar power works’, which was quite informative, but it mentioned that some 41,000 homes in Australia had solar PV on their roofs by the end of 2008, and this was expected to rise substantially in 2009. This sounded like a very small figure, and I wondered if there was more recent data. A quick search turned up a swag of articles charting the rise and rise of rooftop solar installations in recent years. The data in just about every article came from the Australian Clean Energy Regulator (ACER). Australia swept past 1 million domestic solar installations in March 2013, with solar advocates predicting a doubling, at minimum, within the following two years. That hasn’t happened, but still the take-up has been astonishing in the past six or seven years. This article from a month ago claims 1.3 million PVs, with another 170,000 systems going up annually, though it doesn’t quote sources. Others are saying that the industry is now ‘flagging’, due to the retreat of state-based subsidies, though the commercial sector is now getting in on the act, having recently tripled its share of the solar PV market to 15%. The current federal government seems unwilling to make any clear commitment to domestic solar, but the Clean Energy Finance Corp, which was established by the Gillard government, and which the Abbott government wants to axe, is now engaged in a deal with ET Solar, a Chinese company, to help finance the solarisation of shopping centres and other commercial energy users. Shopping centres, which operate all day virtually every day, would seem to be an ideal target for solar PV installation. Presumably these projects will go ahead, as the Abbott government seems unable or unwilling to engage in the Senate negotiations that would allow its policies, including those of axing the entities of previous governments, to progress.

There’s so much solar news around it’s hard to keep track of, but I’ll start locally, with South Australia. By the end of 2014 some 23% of SA homes had solar PV, a slight increase on the previous year. One effect has been to shift the peak power period from late afternoon to early evening (just after 7PM). South Australia leads the way with the highest proportion of panels, with Queensland close behind. Australia’s rapid adoption of rooftop solar is surpassed only by Japan. The Japanese are now voting decisively against nuclear energy with their panels.

This graph (from the Renew Economy website) shows that on Boxing Day last year (2014) rooftop solar in SA (the big yellow peak) reached one third of demand in the middle of the day, and averaged around 30% from 11.30am to 3.30pm. With our heavy reliance on wind power here, this means that these two renewable power sources accounted for some two thirds of demand during that period. Sadly, though, with the proposed reduction of the Renewable Energy Target, wind and solar (small and large scale) are being forced to compete with each other for more limited opportunities.

There are some short-term concerns. Clearly the federal government isn’t being particularly supportive of renewables, but it’s highly likely the conservatives will be out of office after the late 2016 election, after which there may be a little more investment certainty. There’s also clear evidence now that small-scale solar uptake is declining, though it’s still happening. Profit margins for solar companies are suffering in an increasingly competitive marketplace, so large-scale, more inherently profitable projects will likely be the way of the immediate future. Still, the greater affordability of solar PV over the last few years will ensure continued uptake, and a greater proportion of households taking advantage of the technology. According to a recent International Energy Association (IEA) publication:

The cost of PV modules has been divided by five in the last six years; the cost of full PV systems has been divided by almost three. The levelised cost of electricity of decentralised solar PV systems is approaching or falling below the variable portion of retail electricity prices that system owners pay in some markets, across residential and commercial segments.
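Those reduction factors imply a steep compound annual decline, which is easy to back out numerically. A quick sketch, using only the figures from the quote above:

```python
def implied_annual_decline(total_factor, years):
    """Annual fractional price decline implied by an overall cost reduction
    factor over a period (e.g. 'divided by five in six years')."""
    return 1 - (1 / total_factor) ** (1 / years)

# modules divided by five over six years: roughly a 24% fall per year
print(implied_annual_decline(5, 6))
# full systems divided by roughly three over the same period: about 17% per year
print(implied_annual_decline(3, 6))
```

The gap between the two rates is the non-module ‘balance of system’ costs – installation, inverters, permits – falling more slowly than the panels themselves.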

The 2014 publication was a ‘technology roadmap’, updated from 2010. Based on the unexpectedly high recent uptake of solar PV, the IEA has revised upwards its projected share of global electricity production from 11% to 16% by 2050. But on the barriers to expansion, the IEA’s remarks in the foreword to this document read like a warning to the Australian government:

Like most renewable energy sources and energy efficiency improvements, PV is very capital-intensive: almost all expenditures are made up-front. Keeping the cost of capital low is thus of primary importance for achieving this roadmap’s vision. But investment and finance are very responsive to the quality of policy making. Clear and credible signals from policy makers lower risks and inspire confidence. By contrast, where there is a record of policy incoherence, confusing signals or stop-and-go policy cycles, investors end up paying more for their finance, consumers pay more for their energy, and some projects that are needed simply will not go ahead. 

The four-year gap between each IEA roadmap may be too long, considering the substantial changes that can occur in the energy arena. There was greater growth in solar PV capacity in the 2010-2014 period than there was in the four previous decades. The possibilities of solar energy really began to catch on with the energy crisis of the seventies, and the technology has received a boost more recently due to climate change and the lack of effective leadership on the issue. The charge was led by European countries such as Germany and Italy, but since 2013 China has been leading the pack in solar PV adoption.

What, though, of the long-term future? That’s a subject best left for another post, but clearly solar is here to stay, and its energy share will continue to expand, an expansion that is already causing problems for industries that have traditionally (though only over the past couple of centuries, actually) profited from our growing energy needs. Our future is bound up in how well we handle the transitions that will be necessary if we are to meet our energy needs with a minimum of damage to the biosphere.

anti-matter as rocket fuel?

easy peasy

This post is in response to a request, I’m delighted to report.

I remember learning first about anti-matter back in about 1980 or 81, when I first started reading science magazines, particularly Scientific American. I learned that matter and anti-matter were created in the big bang, but more matter was created than anti-matter. If not for that I suppose we wouldn’t be here, unless we could be made from anti-matter. I’m not sure where that would leave anti-theists, but let’s not get too confused. We’re here, and so is anti-matter. Presumably there are plenty of other universes consisting mostly of anti-matter, though whether that excludes life, or anti-life, I’ve no idea. Confusion again. If you’re curious about why there’s this lack of symmetry, check out baryogenesis, which will feed without satisfying your curiosity – just what the doctor ordered.

The next time I found myself thinking about anti-matter was in reading, again in Scientific American, about positron-emission tomography (PET), a technology for scanning the brain. As the name implies, it involves the emission of positrons, which are anti-electrons, to somehow provide a map of the brain. I was quite amazed to find, from this barely comprehensible concept, that anti-matter was far from being theoretical, that it could be manipulated and put into harness. But can it be used as energy, or as a form of fuel? Due to anti-matter’s antagonism to matter, I wondered if this was feasible, to which my 12-year-old patron replied with one word – magnets.

The physicist Hans Georg Dehmelt received a Nobel Prize for his role in the development of ion traps, devices which capture particles of different kinds and charges, including antiparticles, within magnetic and electrical fields, so clearly my patron was onto something and it’s not just science-fiction (as I initially thought). It’s obvious from a glance through the physics of this field – using ion traps to analyse the properties and behaviour of charged subatomic particles – that it’s incredibly arcane and complex, but also of immense importance for our understanding of the basic stuff of the universe. I won’t be able to do more here than scratch the surface, if there is a surface.

The idea is that antimatter might be used some time in the future as rocket fuel for space travel – though considering the energy released by matter-antimatter annihilation, it could also have domestic use as a source of electricity. To make this possible we’d have to find some way of isolating and storing it. And what kind of antimatter would be best for this purpose? The sources I’m reading mostly take antiprotons and also anti-electrons (positrons) as examples. The potential is enormous because the energy density of proton-antiproton annihilation is roughly a thousand times that of nuclear fission, per kilogram of fuel. However, experts say that the enormous cost of creating antimatter for terrestrial purposes is prohibitive at the moment. Better to think of it for rocket propulsion, because only a tiny amount would be required.
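That energy-density comparison can be checked directly from E = mc². Here’s a quick calculation; the fission figure of about 8.2×10¹³ J/kg is a commonly quoted approximation I’ve supplied, not a number from the article:

```python
# Energy released per kilogram of fuel: matter-antimatter annihilation
# versus uranium fission.

c = 2.998e8  # speed of light, m/s

def annihilation_energy(kg_antimatter):
    # an equal mass of ordinary matter is annihilated along with it,
    # so the total converted mass is twice the antimatter mass
    return 2 * kg_antimatter * c ** 2

e_annihilation_per_kg = annihilation_energy(0.5)  # 0.5 kg antimatter + 0.5 kg matter
e_fission_per_kg = 8.2e13                         # approx., complete fission of U-235

print(e_annihilation_per_kg)                      # ~9.0e16 J per kg of mixed fuel
print(e_annihilation_per_kg / e_fission_per_kg)   # ~1,100x fission
```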

Three types of antimatter rocket have already been proposed: one that uses matter-antimatter annihilation directly as a form of propulsion; another that uses the annihilation to heat an intermediate material, such as a fluid; and a third that generates electricity from the annihilation, to feed an electric spacecraft propulsion system. Wikipedia puts it this way:

The propulsion concepts that employ these mechanisms generally fall into four categories: solid core, gaseous core, plasma core, and beamed core configurations. The alternatives to direct antimatter annihilation propulsion offer the possibility of feasible vehicles with, in some cases, vastly smaller amounts of antimatter but require a lot more matter propellant. Then there are hybrid solutions using antimatter to catalyze fission/fusion reactions for propulsion.

A direct or pure anti-matter rocket may use antiproton annihilation or positron annihilation. Antiproton annihilation produces charged and uncharged pions, or pi mesons – unstable particles each consisting of a quark and an antiquark – as well as neutrinos and gamma rays (high energy photons). The ‘pion rocket’ channels this released energy by means of a magnetic nozzle, but because of the complex mix of energy products, not all of which can be harnessed, the technology currently lacks energy efficiency. Positron annihilation, on the other hand, produces only gamma rays. Using gamma rays as a form of propulsive energy has proved problematic, though it’s known that photon energy can be partially transferred to electrons under certain conditions. This is called Compton scattering, and it provided early proof of the particulate nature of light. Recent research has found that intense laser beams can produce positrons when fired at high atomic number elements such as gold. This could produce energy on an ongoing basis, eliminating the need for storage.

The more indirect types are called thermal antimatter rockets. As mentioned, these are divided into solid, gaseous and plasma core systems. It would be beyond my capacity to explain these technologies, but the finding so far is that, though plasma and gas systems may have some operational advantages over a solid system, the solid core concept is much more energy efficient, due to the shorter mean free path between energy-generating impacts.

It’s fairly clear even from my minuscule research on the subject that antimatter rocketry and fuel are in their early, speculative stages, though already involving mind-numbing mathematical formulae. The major difficulties are antimatter creation and, where necessary, storage. Current estimates are that it would take 10 grams of antimatter to get to Mars in a month. So far, storage – involving freezing antihydrogen pellets (cooled and bound antiprotons and positrons) and maintaining them in ion traps – has only been achieved at the level of single atoms. Upscaling such a system is theoretically possible, though at this stage prohibitively expensive, requiring a storage system billions of times larger than what has so far been achieved. There are many other problems with the technology too, including high levels of waste heat and extreme radiation, and there are further complications because the products of annihilation move at relativistic velocities.
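For a sense of scale, here’s what the oft-quoted 10 grams of antimatter amounts to in energy terms. The 10 g figure comes from the estimates mentioned above; the TNT conversion factor is standard:

```python
# Energy from annihilating 10 grams of antimatter (together with
# 10 grams of ordinary matter), expressed in TNT equivalent.

c = 2.998e8                        # speed of light, m/s
mass_annihilated = 0.020           # kg: 10 g antimatter + 10 g matter
energy = mass_annihilated * c ** 2

tnt_kilotons = energy / 4.184e12   # 1 kiloton of TNT = 4.184e12 J
print(f"{energy:.2e} J, about {tnt_kilotons:.0f} kt of TNT")
```

That’s hundreds of kilotons of TNT equivalent from a lump the mass of a few coins, which conveys both the appeal for rocketry and the storage hazard.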

All in all, it’s clear that antimatter rockets are not going to be with us for a long time, if ever, but I suspect that the technical issues involved and the solutions that might be nutted out will fascinate physicists and mathematicians for decades to come.

wind power in South Australia

Starfish Hill wind farm, near Cape Jervis, SA

I was unaware, until I recently listened to a forum panel on renewables broadcast by The Science Show, that wind power has really taken off in SA, where I live. Mea culpa. By August last year 27% of the state’s electricity production was from wind, and it’s now well over 30%, thanks to a new facility outside Snowtown, which came on stream in November. That’s half of Australia’s installed capacity, and it compares favourably with wind production in European countries such as Denmark (20%), Spain and Portugal (16%), Ireland (15%) and Germany (7%). It’s one of the great successes of the Mandatory Renewable Energy Target, introduced in a modest form by the conservative federal government in 2001 and expanded under the Labor government in 2009. The RET, like those in other countries, mandates that electricity retailers source a proportion of energy from renewables. South Australia’s renewable energy developers, under the longest-serving Labor government in the country, have been provided with tax incentives and a supportive regulatory framework to build wind farms throughout the state, to take advantage of the powerful Roaring Forties blowing in from the west.

The first wind turbine in SA was a small affair at Coober Pedy, but from 2004 onwards this form of energy generation has taken off here. The Snowtown wind farm mentioned above is the second in the region, and SA’s largest, with 90 turbines giving it an installed capacity of 270MW. We now have some 16 wind farms strategically located around the state, with an installed capacity of almost 1500MW. As far as I’m aware, we’re in fact the world leader in wind power – always remembering that, in population terms, we would be one of the smallest countries in the world, if we were a country.
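As a rough check on what ~1500MW of installed wind capacity means in energy terms, here’s a back-of-envelope estimate. The 35% capacity factor is my assumption (typical of good onshore wind sites), not a figure from the post:

```python
# Estimated annual generation from South Australia's installed wind capacity.

installed_mw = 1500        # approximate installed capacity from the post
capacity_factor = 0.35     # assumed: fraction of nameplate output actually achieved
hours_per_year = 8760

annual_gwh = installed_mw * capacity_factor * hours_per_year / 1000
print(round(annual_gwh))   # ~4,600 GWh, i.e. ~4.6 TWh per year
```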

The direct beneficiaries of these new farms are, of course, regional South Australians. An example is the 46 MW, 23-turbine Canunda wind farm near Millicent in the state’s south-east, which opened in 2005. The farm provides clean electricity generation to the region and has increased the viability of agricultural production. The facility has generated enough interest from the local community for tours to be undertaken.

Of course, one of the principal purposes of utilising renewable energy – apart from the obvious fact that it’s renewable – is the reduction of greenhouse gas emissions. And South Australia’s emissions have indeed declined in spite of increased electricity demand, due to the high penetration of wind power into the market.

This development has of course had its critics, and these are pretty well summed up on Wikipedia – linked to above:

There has been some controversy with respect to the impact of the rising share of wind power and other renewables such as solar on retail electricity prices in South Australia. A 2012 report by The Energy Users Association of Australia claimed that retail electricity prices in South Australia were then the third highest in the developed world behind Germany and Denmark, with prices likely to rise to become the most expensive in the near future. The then South Australian Opposition Leader, Isobel Redmond, linked the state’s high retail prices for electricity to the Government’s policy of promoting development of renewable energy, noting that Germany and Denmark had followed similar policies. On the other hand, it has been noted that the impact of wind power on the merit order effect, where relatively low cost wind power is purchased by retailers before higher cost sources of power, has been credited for a decline in the wholesale electricity price in South Australia. Data compiled by the Australian Energy Market Operator (AEMO) shows South Australian wholesale electricity prices are the 3rd-highest out of Australia’s five mainland states, with the 2013 South Australian Electricity Report noting that increases in prices were “largely driven by transmission and distribution network price increases”.

The issue of cost to the consumer (of energy in general) is without doubt extremely important (and complex), and I’ll try to wade into it, I hope, in another post, but for now I want to look just at the costs for wind, and whether there are any further developments in the offing.

According to this site, which is informative but perhaps not as regularly updated as it could be in such a changing energy environment, SA’s Premier last year renewed his government’s pledge to have 50% of the state’s annual power supplied by renewable energy by 2025, a very realistic target considering that, according to the same site, wind and solar were already at 38% of annual supply, as of December 2013. However, he pointed out that this would be difficult if the federal government reduced its RET target, then at 41TWh by 2020. In October, federal industry minister Ian Macfarlane and environment minister Greg Hunt proposed a reduction of the RET to 27TWh.

A more recent article on the Renew Economy website argues that, though the government appears to have upped the proposed figure to around 31 or 32TWh, it may be targeting large-scale wind power projects by trying to incorporate rooftop solar, which has been taken up rapidly in recent years, into the large-scale target. The initial target was 45TWh overall, with a projected rooftop solar take-up of 4TWh, leaving 41TWh for large-scale renewable energy projects. We’re currently at 7TWh for rooftop solar, and the Warburton Review expects this to double by 2020. Hints by the government ministers that the take-up of rooftop solar should be reflected in the renewed target are adding to uncertainty in the industry, which is said to be in limbo at present. It may take a change of government to resolve the situation. Meanwhile however, South Australia leads the way with wind, and if the graph on the Renew Economy website is to be believed, we’ve already passed our 50% target for renewables (though the graph appears to fluctuate from moment to moment). The graph shows that we’re currently generating 710MW from wind, 527MW from natural gas and 179MW from brown coal. That makes just on 50% from wind alone. Compare this with Victoria, a much more populous state, which generates almost as much from wind – 592MW. However, that’s only about a tenth of what it currently generates from brown coal, its principal energy source (5670MW).
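The percentages quoted above are easy to verify from the Renew Economy snapshot figures (wind 710MW, gas 527MW, brown coal 179MW):

```python
# Checking the generation-mix arithmetic from the snapshot quoted above.

mix = {"wind": 710, "gas": 527, "brown coal": 179}  # MW, instantaneous
total = sum(mix.values())

for source, mw in mix.items():
    print(f"{source}: {100 * mw / total:.1f}%")
# wind comes out at just over 50% of the instantaneous total
```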

A new wind farm has been approved for Stony Gap, near Burra, but there may be delays in the project due to industry uncertainty about the RET and the federal government’s plans. Energy Australia, the project’s developer, says ominously: ‘We are now re-assessing the project based on current market conditions as well as government policy and legislation.’

And the cost? This is hard to gauge. As with solar, the cost of wind power has come down markedly in recent times. Basically the cost is for initial capital rather than running costs, but some argue that, because wind farms require back-up, presumably from fossil fuels, for those windless days, this should be incorporated into the cost.

nuclear power, part 2 – how it works

a pressurised water reactor (schematic)

There are many tricky questions around nuclear power, and perhaps the most head-scratching one is, why did the most earthquake-prone country in the world embrace this technology so readily? The well-known environmental scientist Amory Lovins was just one to state the bleeding obvious with this remark: “An earthquake-and-tsunami zone crowded with 127 million people is an unwise place for 54 reactors”. Combine this with a secretive governmental and industry approach to energy production in a cash-strapped economy, and disaster was almost inevitable. There were a number of earthquake-related shut-downs and cover-ups before the Fukushima disaster essentially blew the whistle on the whole industry, turning the majority of Japan’s population against nuclear power almost overnight. After Fukushima, the generation of nuclear power worldwide fell dramatically, largely due to the shut-down of Japan’s 48 other reactors, though facilities in other countries were also affected by the publicity.

Yet it’s reasonable to ask whether other countries, such as Australia, should reject nuclear power outright because of Japan’s bad example. Australia rarely suffers serious earthquakes – South Australia almost never. And there may be safer ways to utilise nuclear fission as energy – now or in the near future – than has been employed in Japan or other countries since the sixties. So, just how do we generate nuclear power, how do we get rid of waste material, and are there any developments in the pipeline that will make generation and storage safer in the future?

How’s the energy produced?

Much of the following comes from How Stuff Works, but for my own sake I’m putting it mostly into my own words. We derive energy from nuclear fission in the same way that we derive energy from coal-fired power stations – by turning water into pressurised steam, which drives a turbine generator. The difference, of course, is the source of the heat – uranium rather than carbon-emitting coal. Nuclear reactors sustain a chain reaction in which uranium nuclei are split into lighter, radioactive elements, releasing energy in the process. A thorium fuel cycle rather than a uranium one is also possible, though with limited market potential at this point.

Uranium, in the form of the isotope U-235, can undergo induced fission relatively easily. However, naturally occurring uranium is over 99% U-238, so the required uranium has to be enriched so that the U-235 content, which is naturally at around 0.7%, is increased to around 3% (weapons-grade uranium requires enrichment to over 90% U-235). The enriched uranium is formed into pellets, each about 2.5 cm long and less than 2 cm in diameter. These are arranged into bundles of long rods, which are immersed in a pressure vessel of water to prevent overheating and melting. Neutron-absorbing control rods, raised or lowered within the uranium bundle, control the rate of fission. Completely lowering the control rods into the bundle will shut the reaction down.
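The enrichment figures above imply a simple mass balance: how much natural uranium feed is needed per kilogram of ~3% enriched product? A sketch using the standard feed-to-product relation; the 0.25% tails assay is a typical value I’ve assumed, not one from the article:

```python
# Uranium enrichment mass balance: F/P = (x_p - x_t) / (x_f - x_t),
# where x_f, x_p, x_t are the U-235 fractions of feed, product and tails.

x_f = 0.0071   # natural uranium (~0.7% U-235)
x_p = 0.030    # reactor-grade product (~3% U-235)
x_t = 0.0025   # tails assay (assumed typical value)

feed_per_product = (x_p - x_t) / (x_f - x_t)
print(round(feed_per_product, 1))   # ~6 kg of natural uranium per kg of fuel
```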

The fissioning uranium bundle turns the water into steam, and then it’s just the familiar technology of steam driving the turbine which drives the generator. But then there’s the matter of radioactivity…

Before we get into that, though, I should mention there are different kinds of reactors, which use different systems and different cooling agents. I’ve been rather cursorily describing a light water reactor, the most common type. These use ordinary water, and come in three varieties: pressurised water reactors, as described; boiling water reactors; and supercritical water reactors. There are also heavy water reactors, which use water loaded with more of the heavier hydrogen isotope called deuterium. But whatever is used as a coolant and/or a neutron moderator (a medium that moderates the speed of neutrons, enabling them to sustain a chain reaction), the issue of radioactivity needs to be dealt with.

What are the safeguards against radioactive release?

What I previously termed ‘induced fission’ involves firing neutrons at U-235 nuclei. The nucleus absorbs the neutron, becomes unstable and immediately splits, releasing a great deal of heat and gamma radiation (high-energy photons). Among the products of the split are further free neutrons, which then go on to split more nuclei – a chain reaction which can be controlled by manipulating the control rods as described above. Uranium-235 and plutonium-239 are among the very few fissile nuclei – those that lend themselves readily to nuclear chain reactions – that we know of.
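The chain-reaction arithmetic can be sketched with a toy model (all numbers illustrative). What matters is the effective multiplication factor k, which the control rods adjust: below 1 the neutron population dies away, at 1 it holds steady, above 1 it grows exponentially:

```python
# Toy model of neutron population over successive fission generations
# for different effective multiplication factors k.

def neutron_population(k, generations, n0=1000):
    """Neutrons remaining after a number of generations, starting from n0."""
    n = n0
    for _ in range(generations):
        n *= k
    return n

for k in (0.98, 1.00, 1.02):
    print(k, round(neutron_population(k, 100)))
# k < 1: subcritical (dies away); k = 1: critical (steady power);
# k > 1: supercritical (runaway growth unless rods are lowered)
```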

The trouble with induced fission is that the products of the reaction are vastly more radioactive than the fissioned material, U-235, and their radioactive properties are long-lasting, leading to the obvious problems of safeguard, storage and elimination.

In standard light water reactors, the pressure vessel is housed in concrete, which is in turn housed in a steel containment vessel to protect the reactor core. Refuelling and maintenance equipment is housed within this vessel. Surrounding this we have a concrete building, a secondary containment structure to prevent leakage and to protect against earthquakes or other natural (or man-made) disasters. There was no such secondary structure at Chernobyl. The nuclear industry argues that, when these safeguards are properly maintained and monitored, a nuclear power plant releases less radioactivity into the atmosphere than a coal-fired power plant.

Even if this wins some people over, there are the really big issues of mining and transportation of uranium and nuclear fuel, and storage of radioactive waste. According to the USA’s Nuclear Energy Institute, some 2,000 metric tons of high-level radioactive waste are produced annually by that country’s nuclear reactors alone – waste that is hazardous to all life forms and can’t be easily contained. This radioactive material takes tens of thousands of years to decay. Low-level waste, which contaminates nuclear plants and equipment, can take centuries to reach safe levels.
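Those decay timescales follow from the half-life relation N/N₀ = (1/2)^(t/T½). A quick illustration using caesium-137, a common fission product with a half-life of about 30 years (my choice of example, not one from the article):

```python
# Fraction of a radioactive isotope remaining after a given time.

def fraction_remaining(years, half_life):
    return 0.5 ** (years / half_life)

CS137_HALF_LIFE = 30.1  # years

for t in (30, 100, 300):
    print(t, round(fraction_remaining(t, CS137_HALF_LIFE), 4))
# after roughly ten half-lives (~300 years), only ~0.1% remains
```

For the longer-lived actinides in high-level waste, with half-lives in the tens of thousands of years, the same arithmetic is what produces the daunting storage horizons mentioned above.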

Storage, or possible recycling, of waste is probably the major issue for the nuclear power industry’s future, in spite of all the understandable current attention given to melt-downs. The How Stuff Works website summarises the present situation:

Currently, the nuclear industry lets waste cool for years before mixing it with glass and storing it in massive cooled, concrete structures. This waste has to be maintained, monitored and guarded to prevent the materials from falling into the wrong hands. All of these services and added materials cost money — on top of the high costs required to build a plant.

In my next, and hopefully last, post on this subject (for a while at least), I’ll focus more on this storage issue, and on other developments in nuclear fuel, such as they are. I’ll be relying particularly on the MIT interdisciplinary study ‘The Future of the Nuclear Fuel Cycle’, which came out in 2011 – just when the Fukushima-Daiichi disaster hit the headlines…  

energy solutions: nuclear power, part one – the problematic past

 


Here in South Australia, our Premier (the leader of the state government) has recently announced a major inquiry into the viability of nuclear power for the state, and this is raising a few eyebrows and bringing on a few fevered discussions. The Greens are saying, what need for that old and dangerous technology when we have the perfect solution in renewables? Many scientists are arguing that all options should be on the table, and that our energy future should be flexible, with many different technologies in the mix – solar, wind, geothermal, but also perhaps clean coal (if that’s not an oxymoron), a new-look nuclear technology, and maybe even a technology of the future, such as fusion – not to mention the harnessing of anti-matter, mentioned to me recently by an enthusiastic 12-year-old.

South Australia already has a great rep for adopting new technologies. According to wind energy advocate Simon Holmes a Court, in a talk podcasted by The Science Show recently, SA gets more than 30% of its energy from wind, and some 5% from solar. If SA was a country, it would be at the top of the table for wind power use, a fact which certainly blew me away when I heard it.

Of course, South Australia also has a lot of uranium, a fact which has presumably influenced our young Premier’s thinking on nuclear energy. I recall being part of the movement against nuclear energy in the eighties, and reading at least one book about the potential hazards, the catastrophic effects of meltdowns, the impossibility of safe storage of nuclear waste and so forth, but I’ve also been aware in recent years of new safer types of fuel rods, cooling systems and the like, without having really focused on these developments. So now’s the time to do so.

But first I’m going to focus on the nuclear power industry’s troubled past, which will help to explain the passion of those opposed to it.

No doubt there have been a number of incidents and close calls associated with the industry, but the general public is mostly aware of three disturbing events: Three Mile Island (1979), Chernobyl (1986), and Fukushima (2011). I won’t go into too much detail about these, as you’ll find plenty of information about them here, here and here, and in the links attached to those sites, but here’s a very brief summary.

The Three Mile Island accident was the result of a number of system and human failures, which certainly raised questions about complex systems and the possibility/inevitability of an accident occurring, but the real controversy was about the effects, or after-effects, of the partial melt-down. It’s inevitable that anti-nuclear activists would play up the impact, and nuclear proponents play it down, but the evidence does suggest that, for all the publicity the accident garnered, the effects on the health of workers and residents of the area were minor and, where strongly claimed, largely unsubstantiated. Anti-nuclear activists have claimed widespread death and disease among animals and livestock in the region, while the local (Pennsylvania) Department of Agriculture denied any link. Research is still ongoing, but with so much heat being generated it’s hard to make sense of any light. One thing is certain, though: when an accident does happen, the cost of a clean-up that will satisfy everyone, including many of the nay-sayers, is astronomical.

Two reactors were built at the Three Mile Island site in 1974, and they were state-of-the-art at the time. The second reactor, TMI-2, was destroyed by the accident, but TMI-1 is still functioning, and ‘remains one of the best-performing units in USA’, according to the World Nuclear Association, which, unsurprisingly, claims that ‘there were no injuries or adverse health effects from the accident’.

A much more serious accident occurred at Chernobyl in the Ukraine, then part of the Soviet Union. It received a level 7 classification on the International Nuclear Event Scale, the highest possible classification (Fukushima is the only other accident so classified; Three Mile Island was classified level 5). Thirty-one people died as a direct result, and long-term radiation effects are still under investigation. The figures on cancer-related deaths are enormously varied, not necessarily due to ideological thinking, but due to different methodologies employed by different agencies in different studies. The difficulty of distinguishing the thousands of cancers resulting from the radiation from the millions of cancers suffered by people in the region over the 20 years since the accident can hardly be overestimated. Most analysts agree, however, that the human death toll is well into the thousands.

The Chernobyl disaster is notorious, of course, for the response of the Soviet government. No announcement was made to the general public until two days afterwards. When it came, it was as brief as possible. Workers and emergency services personnel who attempted to extinguish the fire were exposed to very high (that’s to say fatal) levels of radiation. Others involved in the massive clean-up were also heavily exposed. The cost of the clean-up, and of building a new containment structure (the largest civil engineering task in history), amounted to some 18 billion roubles. Half a million workers were involved.

The Fukushima disaster was caused by a tsunami triggered by a magnitude-9 earthquake, and the destruction caused (a meltdown of three of the plant’s six reactors and the consequent release of radioactive material) was complicated by the damage from the tsunami itself. It was a disaster waiting to happen, for a number of reasons, the most obvious of which was the location of the reactors in the Pacific Rim, the most active seismic area on the planet. Some of the older reactors were not designed to withstand more than magnitude 7 or 8 quakes, but the most significant design failure, as it turned out, was a gross under-estimate of the height required for the sea-wall, the fundamental protection against tsunamis. To read about the levels of complacency, the unheeded warnings, the degree of ‘regulatory capture’ (where the regulators are mostly superannuated nuclear industry heavyweights with vested interests in downplaying problems and overlooking failures) and the outright corruption within and between TEPCO (the Tokyo Electric Power Company) and government, is to be alerted to a whole new perspective on human folly. It is also to be convinced that, if the industry is to have any future whatsoever, tight regulation, sensible, scientific and long-term decision-making, and complete openness to scrutiny by the residents of the area, consumers and the general public must be paramount.

Though there’s ongoing debate about the number of fatalities and injuries caused by the nuclear power industry, that number is lower than the numbers (also hotly debated of course) caused by other major energy-generating industries. Commercial nuclear power plants have been operating since the late fifties, and 31 countries have taken up the technology. There are now more than 400 operational reactors worldwide. The Fukushima disaster has naturally dampened enthusiasm for the technology; Germany has decided to close all its reactors by 2022, and Italy has banned nuclear power outright. However, countries such as China, whose government is rather more shielded against public opinion, are continuing apace – building almost half of the 68 reactors under construction worldwide as of 2012-13.

It’s probably fair to say that Fukushima and Chernobyl represent outliers among operating nuclear power plants, in terms of both accident prevention and crisis management, and the upside of these disasters is the many lessons learned. I presume modern reactors are built very differently from those of the seventies, so I’m interested to find out what those differences are and what ongoing innovations, if any, will make nuclear fission a safer and more viable clean energy option for the future. That’ll mean going into some technical detail, for my education’s sake, into how this energy-generating process works. So that’ll be next up, in part 2 of this series.