Modern societies have many concerns about their energy supply. Above all, it should be affordable, reliable and convenient. Affordable in order to drive economic development and improvements in quality of life. Reliable in order to be available on demand in its various forms, most of all as an uninterruptible supply of electricity. Convenient in order to give consumers virtually effortless access to preferred household, industrial and transport energies. During the closing decades of the 20th century two other concerns became prominent: energy supplies should also be environmentally benign and, preferably, renewable. Since the 1970s (that is, once energy matters began, belatedly, to receive an unprecedented amount of public attention) all of these concerns have been addressed and analyzed in hundreds of books and thousands of papers, to say nothing of the instant expertise proffered by mass media.
Any modestly informed person knows that energy prices matter (think of OPEC's crude oil price fixing), as does reliable electricity supply (think of blackouts) and the finiteness of fossil fuel resources (think of the claims about an imminent peak of oil extraction), and that the environmental impacts of energy use can be not only local but worldwide (think of global warming). But does power density matter? Power is simply energy flow per unit of time (in scientific units J/s = W), spatial density is the quotient of a variable and area, and hence power density is W/m2, that is joules per second per square meter. Why should we care about how much power is produced, or used, per unit of the Earth's surface? Some say explicitly that we should not bother at all.
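For readers who prefer a worked number, the measure reduces to a three-line calculation. The installation below is purely hypothetical (the figures are mine, chosen only to illustrate the unit, not taken from this book):

```python
# Power density: energy flow per unit of time, prorated over area (W/m^2).
# Hypothetical example: an installation delivering 50 GWh per year from 1 km^2.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def power_density(energy_joules, seconds, area_m2):
    """Return average power density in W/m^2."""
    return energy_joules / seconds / area_m2

annual_output_j = 50e9 * 3600  # 50 GWh expressed in joules (1 Wh = 3,600 J)
density = power_density(annual_output_j, SECONDS_PER_YEAR, 1e6)
print(round(density, 1))       # ~5.7 W/m^2
```

The same quotient, applied to real fuels and real land claims, is the analytical thread of the chapters that follow.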
Amory Lovins has spent a lifetime making unrealistic (and repeatedly failed) forecasts about the speed with which renewable energy conversions (and other energy-related innovations) will be adopted by modern energy systems (for these claims see Lovins 1977 and 2011; for their critique see Smil 2010). In 2011 he dismissed any need to consider power densities, concluding that "land footprint seems an odd criterion for choosing energy systems: the amounts of land at issue are not large, because global renewable energy flows are so vast that only a tiny fraction of them need be captured" (Lovins 2011a, 40). In contrast, nearly two generations ago Wolf Häfele and Wolfgang Sassin, two leaders of energy research at the International Institute for Applied Systems Analysis, wrote that "the density of energy operations is one of the most crucial parameters that predetermine the structure of the energy systems" (Häfele and Sassin 1977, 18).
Even so, most of the modern energy literature has simply ignored the subject. Energy economists have been worrying about prices, their elasticities, oligopolies, taxation and links between energy use and economic growth. As interest in energy matters began to rise, Malcolm Slesser offered an explanation of this omission:
Land has always had a firm place in classical economics, where attention focused upon the agrarian economy, and land as a factor of production. Ricardo and Malthus founded their ideas around the land factor, in marked contrast to more recent economic thought in which land virtually dropped out of the scene, and production was viewed essentially as a synergy of labour and capital (Slesser 1978, 86).
Similarly, modern engineering analyses of energy systems look at fuel qualities, mass throughputs, rated power of energy converters and annual and peak production. Of course, the design of specific energy extraction or conversion facilities must, inevitably, consider land requirements and land qualities in requisite detail, but space is not a common analytical denominator used to assess their performance. The still relatively uncommon interdisciplinary inquiries into the nature and linkages of modern energy systems look at per capita energy use, energy's role in economic performance and the impacts of energy conversions on environmental quality, but only a very few recent publications have looked at power densities.
I chose the measure as a key analytical variable in General Energetics (Smil 1991) and in its second, expanded edition, renamed Energy in Nature and Society (Smil 2008). David MacKay's Sustainable Energy – Without the Hot Air (MacKay 2008) also uses the rate as an essential indicator that offers revealing insights into unfolding energy transitions. And Cruz and Taylor offer a rare example of economists who have explicitly incorporated energy density and power density into their analysis, concluding that "perhaps the most important attribute of an energy source is its density: its ability to deliver substantial power relative to its weight or physical dimensions" (Cruz and Taylor 2011, 2) and that "the very density of the energy resource we seek, fuels our efforts to obtain more. Therefore differences in power density across energy resources create large differences in energy supply" (Cruz and Taylor 2011, 48).
In this book I will demonstrate that power density is a key determinant of the nature and the dynamics of energy systems. Its careful quantifications, their critical appraisals, and their revealing comparisons (in historical terms and with plausible alternatives) bring a deeper understanding of past, present and future ways of harnessing, converting and using energies. Careful assessment of power densities is particularly revealing when contrasting our dominant fossil fuel-based energy system with renewable energy conversions. But before I start laying the foundation for my systematic power density assessments (by introducing principal variables and reviewing different power density concepts, their complexities and limits), I will tell a story of early modern charcoal-based iron smelting and of its replacement by coke-based production in modern blast furnaces. This historical appraisal offers a highly compelling (yet curiously overlooked) example of why power densities matter, why energy’s rate of flow per unit of surface is a key determinant of the structure of energy systems and how it confines their performance.
HOW POWER DENSITY MATTERS
This chapter starts with a 1548 King’s commission, proceeds to a great English invention of the 18th century (not James Watt’s steam engine!), and ends with modern blast furnaces, Amazonian eucalyptus plantations and the consequences of smelting more than a billion tonnes of iron by using charcoal.
In November 1548 the King's commission, given to 20 men in Sussex, was to examine "the hurts done by iron mills and furnaces made for the same" on the Weald, the region of southeastern England between the South and North Downs that was the center of English iron-making during the 16th century. The commission's most pressing questions to witnesses were these (Straker 1969, 115):
5. If the said iron mills and furnaces be suffered to continue, then whether thereby there shall be great lack and scarcity of timber and wood in parts near the mills . . . .
6. What number of towns are like to decay if the iron mills and furnaces be suffered to continue?
The cause of these hardships was obvious as
the iron mills and furnaces do spend yearly . . . above 500 loads of coals, allowing to every load of coals at the least three loads of wood; that is every iron mill spendth at the least yearly 1,500 loads of great wood made into coals.
The petitioners went to great lengths to enumerate how a greater scarcity of timber would make it impossible to build houses, water mills or windmills, bridges, sluices, ships, boats, wheels, arrows, hogsheads, barrels, buckets, saddletrees, bowls, dishes . . . and especially timber "for the King's Majesty's towns and pieces on the other side of the sea" (Straker 1969, 118). As long as the deforestation caused by rising demand for charcoal was limited to a few counties, local smelting would simply decline or cease and the production would move elsewhere. Data assembled by King (2005) show that the production of pig iron from the Weald's charcoal furnaces (about 4,000 t at the time of the 1548 Sussex petition) peaked by 1590 (at 14,040 t) and that by 1660 the region made less iron than in 1550, while the metal's output kept on increasing in the rest of the country.
English iron smelting

By 1620 England produced more than 26,000 t of pig iron, but then production began to fall and it was only about 18,000 t by the end of the 17th century. We can make a fairly reliable reconstruction of what this meant in terms of energy demand and environmental impact. In the early decades of the 18th century charcoal-fueled blast furnace campaigns usually extended from October to May, and during those eight months a typical furnace produced 300-340 t of pig iron (Hyde 1977). Efficiencies of wood conversion to charcoal and charcoal requirements for pig iron smelting and for the subsequent conversion of the metal to bar iron varied considerably (and about a third of the metal was wasted in the conversion), but the best available data (Hammersley 1973) indicate that at the beginning of the 18th century at least 32 t of wood were needed to produce a tonne of bar iron.
This means that a typical furnace and an associated bar forge (usually located closer to the market) would have required at least 9,600-10,900 t of wood; in the early 17th century, with less efficient conversions, the total would have easily been twice as much. The preferred wood supply came from coppiced plantings of beech or oak cut in a 20-year rotation that would yield an annual increment of about 5 m3/ha; with average wood density at 0.75 g/cm3 that would be 3.75 t/ha, and operating a furnace and a forge would have required harvesting about 2,700 ha/year. In 1700 British furnaces produced about 12,000 t of bar iron and hence consumed on the order of 400,000 t of charcoaling wood. With an average productivity of 4 t/ha this would have required at least 100,000 ha of coppiced growth, a square of nearly 32 x 32 km.
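The wood-demand arithmetic above can be retraced in a few lines (all inputs are the figures just cited; the script merely reproduces the calculation and its rounding):

```python
import math

# Reproducing the charcoal-iron wood arithmetic (figures from the text).
yield_m3_per_ha = 5                       # annual increment of coppiced beech/oak
wood_density = 0.75                       # t/m^3
t_per_ha = yield_m3_per_ha * wood_density # 3.75 t/ha

# ~32 t of wood per tonne of iron; a mid-range furnace campaign of ~320 t
furnace_wood_t = 32 * 320
area_ha = furnace_wood_t / t_per_ha
print(round(area_ha, -2))                 # ~2,700 ha per furnace-and-forge per year

# National demand in 1700: 12,000 t of bar iron
national_wood_t = 12_000 * 32             # ~384,000 t, rounded up to 400,000 in the text
national_area_ha = national_wood_t / 4    # the text rounds the yield to 4 t/ha
side_km = math.sqrt(national_area_ha * 1e4) / 1e3
print(round(national_area_ha), round(side_km, 1))  # ~96,000 ha, a square of ~31 km
```

The text's 100,000 ha and 32 x 32 km figures follow from rounding the wood total up to 400,000 t.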
Obviously, the availability of suitable wood was a limiting factor in any further expansion of the English iron industry. Hammersley (1973) estimated that the maximum countrywide harvest would have been on the order of 1 Mt/year. Not surprisingly, during the 18th century England became highly dependent on iron imports: Swedish exports rose rapidly after 1650, Russian exports began to dominate after 1750, and by the 1770s England covered two-thirds of its iron demand by imports (King 2005). The end of the charcoal ceiling on English pig iron production, the elimination of British dependence on Swedish and Russian imports, and the massive expansion of iron production (from only about 25,000 t in 1750 to 100 times as much, 2.5 Mt, 100 years later) were possible because of the switch from charcoal to coke, a solid, light-weight (apparent density, including the porous volume, of just 0.8-1 g/cm3) but energy-dense fuel (29 MJ/kg) produced from coal by pyrolysis, that is by heating in the absence of oxygen.
In England the first use of coke was for drying malt in the 1640s; Shadrach Fox was the first industrialist to use it on a small scale in a blast furnace during the 1690s; and, starting in 1709, Abraham Darby became the fuel's best-known promoter (Harris 1988). Coke's initially high production costs slowed its adoption, and the two fuels coexisted in England for most of the 18th century, and in the US well into the second half of the 19th century, by which time coke was the only, or overwhelmingly dominant, energizer of iron smelting throughout the Western world. This shift from metallurgical charcoal to coke was part of a much larger energy transition from traditional biomass fuels (woody phytomass, charcoal, and also crop residues, mainly cereal straws) to fossil fuels, first to coals and then to hydrocarbons (Smil 2010a).
This transition introduced fuels that had generally higher energy content (a kilogram of high-quality coal has nearly twice as much energy as a kilogram of air-dry wood, and a kilogram of liquid fuels refined from crude oil has three times as much energy as the straw used for cooking) and hence were cheaper to transport and easier to store. But the replacement of charcoal with coke did not bring in a fuel with superior energy content, as both charcoal and coke are essentially pure carbon with virtually identical energy content of about 30 MJ/kg. The power densities of their production, however, are orders of magnitude apart.
From charcoal to coke

Harvesting coppiced beech or oak would yield, at 5 m3 (3.75 t) per hectare and an energy density of 19 GJ/t, an annual phytomass harvest with a power density of about 0.22 W/m2. In contrast, a typical late-18th century deep mine, based on detailed data for the Toft Moor Colliery of the 1770s (Hausman 1980), would produce about 15,000 t/year, and all of that coal would be hauled through a single narrow shaft. With the mine's surface structures occupying no more than a hectare (10,000 m2) of land, that extraction would have produced fuel with a power density of nearly 1,200 W/m2, more than 5,000 times higher than harvesting coppiced wood.
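These two raw-fuel power densities can be retraced as follows. Note that the coal energy density of 25 GJ/t is my assumption (the text quotes only tonnage); it is the value that reproduces the quoted ~1,200 W/m2:

```python
SECONDS_PER_YEAR = 3.15e7  # one year, rounded

# Coppiced wood: 3.75 t/ha at 19 GJ/t, over 10,000 m^2
wood_pd = 3.75 * 19e9 / 1e4 / SECONDS_PER_YEAR

# Toft Moor-style colliery: 15,000 t/year through one shaft, surface works on 1 ha;
# 25 GJ/t for coal is an assumed value, not given in the text
coal_pd = 15_000 * 25e9 / 1e4 / SECONDS_PER_YEAR

print(round(wood_pd, 2), round(coal_pd))  # ~0.23 vs ~1,190 W/m^2
print(round(coal_pd / wood_pd))           # a ratio above 5,000
```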
But more land was needed near the pithead because the fuel was usually sorted at the site in order to remove associated rocks and to improve the coal's quality. Assuming that the incombustible material (amounting to 10% of the total mass, with a density of 2.5 t/m3) was deposited nearby in a conical heap just 20 m tall, the area claimed after 50 years of mining operation would have been no larger than 4,500 m2, and the overall power density of coal production would still have been no less than 800 W/m2, some 4,000 times higher than the harvesting of wood.
And we have to make the adjustments for conversion to, respectively, charcoal and coke (both calculations disregard the relatively small areas needed for charcoaling or coke ovens). In the early 18th century the typical charcoal:wood ratio was 1:5 by weight and (with 29 GJ/t of charcoal and 19 GJ/t of wood) about 1:3.2 in energy terms, so the power density of charcoal production would have been only about 0.07 W/m2 (0.22/3.2). Early coking in simple beehive ovens was also inefficient, with coke:coal ratios no higher than 1:1.7. The overall power density of mid-18th century English coke production was thus roughly 500 W/m2, approximately 7,000 times higher than that of making charcoal.
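The conversion-adjusted densities follow the same pattern (again, the ~25 GJ/t energy density of coal is my assumption, not a figure stated in the text):

```python
SECONDS_PER_YEAR = 3.15e7

# Wood -> charcoal: 1:5 by mass; 29 GJ/t charcoal vs 19 GJ/t wood (~1:3.2 by energy)
wood_pd = 3.75 * 19e9 / 1e4 / SECONDS_PER_YEAR   # ~0.22 W/m^2, as before
charcoal_pd = wood_pd * (29 / (5 * 19))          # energy retained per unit of land
print(round(charcoal_pd, 3))                     # ~0.069 W/m^2

# Coal -> coke: coke:coal no better than 1:1.7 in beehive ovens; start from the
# ~800 W/m^2 coal figure (mine plus waste heap) and assume ~25 GJ/t coal
coal_pd = 800
coke_pd = coal_pd * (29 / (1.7 * 25))
print(round(coke_pd))                            # ~550, i.e. roughly 500 W/m^2
```

The exact ratio of the two results depends on the assumed coal quality, but it stays in the vicinity of the text's "approximately 7,000 times".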
This shift from charcoal to coke, and the huge difference in the accompanying power densities of the two products, had many economic, social and environmental repercussions. England could rapidly reduce, and soon eliminate, its dependence on iron imports from Sweden and Russia; the removal of one of the greatest claims on the country's forests opened the way to reforestation; and the high power density of coal and coke production made it possible to concentrate their output in a progressively smaller number of facilities (large coal mines and coking plants, often attached to large blast furnaces) from which the fuels could be distributed not only nationwide but also exported abroad.
And the shift made no smaller difference even in countries that were initially exceedingly rich in natural forests, because they too were not immune to the charcoal limit on iron-making. In the early 19th century American iron-makers had no problem harvesting the needed wood from the country's rich Appalachian forests, but 100 years later that had become impossible. Although smelting in blast furnaces became much more efficient during the course of the 19th century (by 1900 it required just 5 kg of wood for every kg of hot metal), by 1906 the US pig iron output surpassed 25 Mt, and maintaining that level of production alone (excluding all charcoal needed for further processing) would have required, even when assuming a high average increment of 7 t/ha in natural forests, an annual wood harvest from about 180,000 km2 of forest (Smil 1994).
That is an area the size of the entire state of Missouri or Oklahoma (or a third of France), equal to a square whose side is the distance between Philadelphia and Boston, or Paris and Frankfurt, and it would have to be harvested annually! Obviously, the power density of producing metallurgical charcoal would have been far too low to enable even forest-rich America to industrialize on that renewable energy basis. And what would charcoal-based iron-making require today? Could the combination of high-yielding clones of fast-growing trees planted in the tropics, higher efficiencies of the best charcoaling techniques and lower specific energy requirements of metal smelting provide a land-sparing renewable solution for the modern iron industry?
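The American thought experiment reduces to a few lines (all figures as cited above):

```python
import math

# The 1906 US counterfactual: smelting 25 Mt of pig iron with charcoal.
pig_iron_t = 25e6
wood_per_unit_metal = 5        # 5 kg of wood per kg of hot metal by 1900
forest_yield_t_per_ha = 7      # generous increment assumed for natural forests

wood_t = pig_iron_t * wood_per_unit_metal
area_km2 = wood_t / forest_yield_t_per_ha / 100    # 100 ha per km^2
side_km = math.sqrt(area_km2)
print(round(area_km2, -3), round(side_km))  # ~179,000 km^2; a square of ~423 km
```

The ~420 km side is indeed close to the straight-line distance between Philadelphia and Boston.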
I will address these questions in the book's closing chapter, but anybody who has a basic familiarity with the advances of modern ferrous metallurgy and with the growth of global pig iron production can anticipate the key answer. The combined effect of the listed innovations has not brought an overall order-of-magnitude reduction in specific charcoal demand (t/t of hot metal), while worldwide iron smelting grew by an order of magnitude during the 20th century and has yet to reach its global peak. Theoretical calculations may show that we could do it; realistic appraisals say otherwise: power densities matter!
Quantitative keys to understanding energy
After noting the poor state of understanding of energy challenges, I will extoll the explanatory utility of rates, sort out the various meanings of power density used by scientists and engineers, provide a clear definition of the key measure used in this book, offer a brief typology of its uses and explain some of its inherent complications and uncertainties.
By 2014 energy (its resources, consumption and future supplies, its economic importance, trade and strategic implications, and the environmental impacts of its use) had been a matter of intense public interest, policy-making attention and expanded scientific inquiry for four decades. That sudden elevation of energy matters into worldwide prominence had its proximate cause in the surprising moves of the Organization of Petroleum Exporting Countries (OPEC): in 1973-1974 OPEC quintupled the price of crude oil sold by its (at that time) 13 member countries and, in addition, temporarily embargoed all oil shipments to the US and the Netherlands (Smil 1987).
As these events were taking place the public, even in affluent countries with relatively well-educated populations, was largely ignorant of basic energy matters, with only a small number of individuals able to explain the background and the importance of the unfolding changes, and to supply the essential framework needed to assess realistic options by referring to numbers other than the constantly repeated new record levels of crude oil prices. By 1978 those prices had steadied, but in 1979 they began to rise again and promptly doubled as the Iranian monarchy fell and the fundamentalist ayatollahs took over Iran, at that time the world's fifth largest oil producer. But that price spike was also relatively short-lived: by 1985 OPEC's oil price had fallen by two-thirds from its 1981 peak, and even Saddam Hussein's invasion of Kuwait in August 1990 resulted in only a brief rise followed promptly by a decade of stable and low oil prices; during the closing years of the 20th century they were (in constant monies) almost as low as in 1975 (BP 2013).
As a result, the anguish about high oil prices and the security of energy supply to the global economy receded. Academic and engineering studies of energy sources that had so suddenly flourished during the 1970s continued and even expanded, but they did little for a deeper understanding of the real challenges. At the same time, concerns about the pace and extent of future global warming, rather than worries about access to energy, assumed the leading place in public discussions of global energy use, a consequence of the growing realization of the role played by carbon dioxide emissions from the combustion of fossil fuels in anthropogenic global warming (IPCC 1995). But understanding the planetary energy balance, the physics of greenhouse gases and the complex atmosphere-hydrosphere-biosphere interactions governing the global biogeochemical carbon cycle is a challenge no easier than appreciating the many intricacies of global fuel and electricity supply and demand.
During the first decade of the 21st century, as oil prices began to soar once again, as some catastrophists forecast resource shortages, and as concerns about global warming reached new intensity, anxieties about our energy futures were on the rise once more, but the quality of discourse did not improve, as the public discussion of all energy-related matters remained at an overwhelmingly qualitative level. Most people paying even the slightest attention to post-2000 news heard about the claims of an imminent peak in global oil production, but had no idea about crude oil's energy density, about the shares of products coming out of a typical US refinery or about the actual dynamics of global hydrocarbon reserves.
Similarly, most people who have heard about global warming that would be unprecedented in its rapidity have no idea of actual CO2 emission factors or of the unfolding relative decarbonization of the global energy supply. Understanding complex energy matters, formulating informed arguments and making sensible choices can be done only on the basis of quantitative understanding that is both relatively broad and sufficiently deep. There is a natural progression in this understanding, from simple quantities to rates that relate those variables to the basic physical attributes of our universe, to time and space.
Power of rates
Most phenomena are best understood when they are quantified as rates, that is as specific ratios relating two variables. In the scientific sense all rates are derived quantities defined in terms of the seven base units of the Système international d'unités (SI): length (the meter, m), mass (kilogram, kg), time (second, s), electric current (ampere, A), temperature (kelvin, K), amount of substance (mole, mol) and luminous intensity (candela, cd). Speed (velocity, m/s) is perhaps the most commonly used rate in everyday affairs, while the rates frequently encountered in scientific and technical inquiries include mass density (kg/m3), amount-of-substance concentration (mol/m3) and luminance (cd/m2), as well as energy and power.
Those energy-related derivations start with force (the newton, N, is m·kg/s2); the energy unit, the joule (J), is a newton-meter (m2·kg/s2); and the unit of power (the watt, W) is simply the rate of energy flow (J/s or m2·kg/s3). In turn, these units can be used in specific rates relating them to the base variables of length, mass, time, substance and current, or to individuals or groups of people, in order to give fundamental insights into the nature and dynamics of energy systems: only when absolute values are seen in relative terms can we truly appreciate their import and make revealing historical and international comparisons.
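The chain of derivations can be made explicit with a toy piece of dimensional bookkeeping (a minimal sketch, not a real units library; the exponents track the SI base units just listed):

```python
# Represent a derived unit as a dict of exponents over SI base units.
def combine(*dims):
    """Multiply dimensions by summing exponents; drop units that cancel out."""
    out = {}
    for d in dims:
        for unit, power in d.items():
            out[unit] = out.get(unit, 0) + power
    return {u: p for u, p in out.items() if p != 0}

newton = {"kg": 1, "m": 1, "s": -2}        # force = mass x acceleration
joule = combine(newton, {"m": 1})          # energy = force x length -> kg.m^2/s^2
watt = combine(joule, {"s": -1})           # power = energy / time -> kg.m^2/s^3
power_density = combine(watt, {"m": -2})   # W/m^2 -> kg/s^3
print(watt, power_density)
```

Note how the square meters cancel: power density reduces to kg/s3 in base units, which is why it is usually written as W/m2, the form that keeps its physical meaning visible.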
Certainly the most common class of these higher-order derivatives are quantities prorated per individual in a given data set, and average national per capita rates are the most frequent use of this measure. When they are used to quantify natural endowment (water resources, cropland, standing forest phytomass, fossil fuel reserves) they refer simply to a particular year, but when used to express average supply or consumption they become double rates, prorated not only per capita but also over a specific time period. In the case of food supply (in kcal/capita or MJ/capita) that period is a day, and, as illustrated by contrasting the US with Japan, those rates alone tell us a great deal about a nation's food supply, dietary habits, overeating and excessive food waste.
US per capita food availability (total at the retail level) now averages about 3,700 kcal/day, and as that mean includes babies and octogenarians (population categories whose normal daily food intake should be either below or barely above 1,000 kcal) it implies a supply of more than 4,000 kcal a day for adults (FAO 2014; USDA 2013). Obviously, if that were the actual average consumption Americans would be even more obese than they already are. America's food consumption surveys show actual daily intakes averaging only about 2,000 kcal/capita; these surveys are based on individuals' recall of food eaten in a day and hence are not highly accurate, but even after adding 10% to their mean there is still a gap of about 1,500 kcal/day, which means that the US wastes some 40% of all the food it produces and imports.
An excellent confirmation of this loss comes from the modelling of the metabolic and activity requirements of the US population by Hall et al. (2009): they found that between 1974 and 2003 that rate was between 2,100 and 2,300 kcal/day, while the average food supply rose from about 3,000 to 3,700 kcal/day, with the result that food waste rose from 28% of the retail supply in 1974 to about 40% by 2004. In contrast, no other affluent economy wastes as little food as Japan: the country's recent average per capita food supply has been only 2,500-2,600 kcal/day, while annual studies of dietary intake show consumption of just over 1,800 kcal/capita, resulting in food waste of less than 30% (Smil and Kobayashi 2012).
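The waste shares for both countries follow directly from the supply and intake figures (the 2,200 kcal US intake used here is the reported ~2,000 kcal plus the 10% under-reporting correction mentioned above; the Japanese supply is the mid-point of the quoted range):

```python
# US vs Japan food-waste arithmetic (kcal/capita/day figures from the text).
def waste_share(supply_kcal, intake_kcal):
    """Share of retail-level food supply that is never eaten."""
    return (supply_kcal - intake_kcal) / supply_kcal

us = waste_share(3700, 2200)     # reported ~2,000 kcal intake + 10% correction
japan = waste_share(2550, 1800)  # mid-point of the 2,500-2,600 kcal supply range
print(round(us * 100), round(japan * 100))  # ~41% vs ~29%
```

The double rate (per capita and per day) is what makes this comparison possible at all; annual totals alone would hide the contrast.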
In the case of numerous raw materials and finished products, and for such key financial indicators as GDP or disposable income, per capita rates are usually given for a calendar year, while the availability of such essential quality-of-life indicators as the number of doctors or hospital beds is expressed, obviously, per 1,000 people rather than per capita. But all of these indicators also illustrate a common problem with average per capita rates: their simplistic international comparisons, ignoring differences in the quality of statistics and, even more importantly, qualitative differences and the wider socio-economic setting of specific variables, may mislead and confuse rather than reveal and explain. Similar caveats apply, to a greater or lesser extent, even to seemingly straightforward energy-related variables; I will do my best to point out such problems whenever the need arises.