Science and knowledge advance by the systematic opposition of ideas, of models, of paradigms, and by confronting theories with real-world data: science is replete with controversies. While most of these are only of specialist interest, occasionally they threaten dominant views of the world and of our place in it. Perhaps the most famous example is heliocentrism, which led to Copernicus’ works being banned and, in 1633, to Galileo being shown the instruments of torture. While heliocentrism powerfully conflicted with Church doctrine, there was more than just religious ideology at stake. Writers and poets of the time spoke of heliocentrism as being “ridiculous” (Jean Bodin, 1592) or of conveying a deep sense of despair (John Donne, 1611, “the sun is lost…”). The Church simply reinforced and officialized the “common sense” earth-centred view and condemned heliocentrism as heretical ([Sherwood, 2011]).
Several centuries later, we are witnessing a similar clash: climate as an imposed externality to which we can only passively adapt versus the reality of its modification by human action. Rather than being a benevolent – even nurturing – part of mother earth, the climate is revealed to be malleable and potentially malevolent. Perhaps more to the point, inasmuch as solutions will require massive collective effort and planning, since the 1980s it has also powerfully clashed with triumphant neoliberal ideology [Klein, 2014] and with enormous vested interests. Unfortunately, the parallel with heliocentrism ends here: the consequences of false consciousness about anthropogenic warming are far from just ideological: failure to act to reduce climate change and to mitigate its consequences may have dire implications for the future of humanity. In the following, I summarize some recent advances in understanding the climate and anthropogenic climate change, and argue that from a purely intellectual, scientific standpoint we have (finally!) arrived at a moment of closure: there is no longer a rational debate, and the remaining climate skeptics are no more than climate deniers. I then discuss some consequences and implications for action, in particular for humanists.
What is climate?
The dictum says “the climate is what you expect, the weather is what you get”: the climate is a kind of average weather. It turns out that this is a purely theoretical view, detached from the real atmospheric environment [Lovejoy, 2013]. Empirical analysis shows that up to time periods of about 10 days, fluctuations in temperature, wind and other quantities tend to get larger and larger as the time interval grows (e.g. the temperature difference between today and next week is typically – on average – larger than between today and tomorrow). However, this relationship is inverted for periods from about 10 days to 30 years: rather than growing with the time interval, successive fluctuations tend to cancel each other out. Averages over longer and longer periods thus tend to converge to stable “expected” results. This is the regime that satisfies the idea “the climate is what you expect”. Yet after about 30 years (in the industrial epoch) or about 100 years (pre-industrial), this average itself begins to drift again. For example, the climate “normal” (defined as a thirty-year average) becomes more and more variable right up to time scales of tens of thousands of years, the ice ages. There are therefore three different regimes, not two. I have proposed that the intermediate regime – which is really a kind of “slow” (long time scale) weather – be called “macroweather”, with the term “climate” being reserved for the longer-period (unstable) variations. An equivalent way of understanding these two basic behaviours is that in the stable macroweather regime negative feedbacks are dominant, whereas at the unstable longer scales positive feedbacks dominate. If we apply the same approach to the Phanerozoic eon (the last 550 Myrs, the epoch of animals) we find that for scales of about a million years and longer – the “mega-climate” regime – temperature fluctuations again tend to get larger and larger at longer time scales ([Lovejoy, 2014e]).
This implies that over this period the dominant temperature feedbacks are again positive rather than negative, as would be required for homeostasis: it contradicts Lovelock’s Gaia hypothesis (see below).
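The growth-versus-cancellation behaviour behind the regime distinction can be illustrated with a toy calculation. This is only a numerical sketch of the general statistical idea, not Lovejoy's actual fluctuation analysis of climate data: a white-noise series stands in for the cancelling (macroweather-like) regime and a random walk for the wandering (weather- or climate-like) regime.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy stand-ins (not real climate data): a white-noise series, whose
# successive fluctuations cancel, and a random walk, whose fluctuations
# keep growing with the time interval.
stable = rng.normal(size=n)
wandering = np.cumsum(rng.normal(size=n))

def rms_fluctuation(x, lag):
    """Root-mean-square difference between values `lag` steps apart."""
    d = x[lag:] - x[:-lag]
    return np.sqrt(np.mean(d ** 2))

for lag in (10, 100, 1000):
    print(lag, rms_fluctuation(stable, lag), rms_fluctuation(wandering, lag))
```

For the white-noise series the typical fluctuation is flat in the lag (averages converge, "what you expect"), while for the random walk it grows roughly like the square root of the lag (the average itself drifts).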
The weather – macroweather – climate distinction is helpful for understanding anthropogenic warming: whereas in pre-industrial times, slowly acting natural processes eventually – at scales of a century or more – begin to dominate the (cancelling, diminishing) macroweather processes, in the industrial era (here, taken as post 1880), the anthropogenic effects are stronger than the natural climate processes and they begin to dominate macroweather after only about 30 years. This is important because it means that the (roughly century long) variation since 1880 is mostly due to anthropogenic rather than natural variability: it allows us to fairly accurately separate out the anthropogenic from the natural variations.
A brief history of anthropogenic warming
Why is the earth warming? In 1896, in an attempt to understand the causes of the ice ages, the Swedish chemist Svante Arrhenius estimated that if the concentration of carbon dioxide (CO2) in the atmosphere were doubled, global temperatures would rise by 5–6 °C; in 1938, Guy Stewart Callendar revised this estimate to 2 °C. These are both surprisingly modern estimates; from a scientific point of view, the basic result is straightforward: CO2 is a “Greenhouse Gas”: it lets visible light from the sun through to the surface while absorbing part of the earth’s outgoing heat radiation, thereby raising the overall temperature. The main difficulty is that there are complicated feedbacks between CO2, water vapour and clouds; these are the main effects responsible for the uncertainty that we discuss below. The next milestone for anthropogenic warming was the development by Charles Keeling of a new technique for estimating CO2 concentrations, which allowed – starting in 1957 – the commencement of regular measurements from two observatories remote from direct agricultural or industrial perturbations: Mauna Loa and Antarctica. These measurements showed unequivocally that the annually averaged level of CO2 was constantly increasing, and indicated that roughly half of the human CO2 emissions ended up in the atmosphere, the other half being taken up by the oceans. Finally, starting in the 1970s with the development of huge supercomputer models (General Circulation Models or GCMs), estimates of climate sensitivity of 1.5–4.5 °C for a CO2 doubling were made (the US National Academy of Sciences, 1979); this is the temperature rise expected if the CO2 concentration were doubled overnight and we waited for a new equilibrium to be reached.
Indeed – underscoring the limits of this approach – in spite of huge advances in computer power and algorithms, this range was reiterated in the most recent report of the Intergovernmental Panel on Climate Change (IPCC, the Fifth Assessment Report, AR5, 2013).
Legitimate and illegitimate climate skepticism
When confronting a new set of facts or a new theory, the correct scientific attitude is one of disinterested skepticism, so that in its historical development anthropogenic warming had to overcome several criticisms (see [Weart, 2008] for a history). If one ignores water vapour and cloud feedbacks, then anthropogenic warming is fairly straightforward to establish: the basic mechanism itself (warming due to the Greenhouse effect) was never seriously doubted. The main difficulty – which necessitated the development of numerical models – was to estimate the feedbacks. As mentioned, even today they are poorly quantified, leading to warming estimates in the rather broad range of 1.5–4.5 °C per CO2 doubling.
Even in the 1960s, after Keeling’s measurements were well known, the main resistance to the theory of CO2 warming was – as discussed in the introduction – largely ideological: the warming was claimed to be very small, the possibility of a dominant human contribution to climate change was repugnant, and – finally – the official meteorological societies of the time failed to endorse it. Nor was Keeling’s timing helpful: from fig. 1 we see that the 1960s were part of a long period of “post-war” natural cooling.
After the first GCM calculations by [Manabe and Wetherald, 1975], criticism shifted to the unrealism of the models and the inadequacy of the data: indeed, it wasn’t until the 1980s that the evidence for the warming became convincing. As for the models, criticism is particularly easy: no model is perfect, and today’s GCMs are complex team-built constructs; it is probably fair to say that no single individual fully understands even “their own” GCM. However, the modellers are aware of many of the limitations: legitimate GCM criticism is therefore an integral part of routine GCM development and involves batteries of tests against real-world data. From my own perspective as a GCM outsider (coming from the field of nonlinear geophysics), after years of analysing the model outputs over wide ranges of space and time scales, I found that the model variability was indeed reasonably realistic.
Even if one discounts the models, the temperature really was rising, so some explanation was required. The skeptics therefore began a long battle questioning the reliability of the data, with extremists doubting the reality of any warming. For example, starting in the 1990s the “heat island effect” – in which temperatures are spuriously augmented by the development of urban areas surrounding originally pristine measurement sites – was used to dismiss global temperature estimates as biased. Also, starting in 1990, satellite measurements of atmospheric (not surface) temperatures initially failed to support the warming trends displayed by the surface stations. The main protagonists of this saga were the evangelical Christian scientists J. Christy and R. Spencer, who collaborated in the analysis. Both were upfront about their religion, with the latter publicly justifying his scripture-based skepticism: “Earth and its ecosystems – created by God’s intelligent design and infinite power and sustained by His faithful providence – are robust, resilient, self-regulating, and self-correcting”. Over the course of the next fifteen years, no fewer than four subtle errors were discovered in their interpretations of the satellite measurements (reviewed in [Mann, 2012]). By 2005, it was clear that, contrary to the initial claims, the satellites did indeed support the observed surface warming trends. A year later, signalling the end of legitimate skepticism, Michael Shermer, publisher of Skeptic magazine, changed his position, saying that due to the advances in climate science, continuing to oppose anthropogenic warming on the grounds of skepticism would be antiscientific.
The problem is that when there are vested interests at stake, the normal rules of intellectual discourse or of scientific skepticism no longer apply. This is particularly true in the US, where right-wing think tanks such as the Marshall Institute and the Science and Environment Policy Project were generously funded by the fossil fuel industry. For example, in 1998 Exxon-Mobil gave them a grant of $20 million in order to spread doubt about anthropogenic warming, to launch a publicity campaign claiming that the warming was neither real nor harmful, and to lobby Congress. In 2007, the American Enterprise Institute offered $10,000 plus expenses to scientists who would go on lecture tours criticizing the fourth IPCC report that had just been published. In 2012, the Heartland Institute paid individuals to intervene on the internet to denigrate anthropogenic warming, and paid scientists to write reports expressing skeptical views. It also began a campaign to have climate skepticism introduced into school curricula. To put all this in perspective, according to a recent study [Brulle, 2014], in the US there are currently 91 different organizations with combined funding of over $900 million (think tanks, advocacy groups, and trade associations) that collectively comprise a “climate change counter-movement.”
The climate skeptics’ tactics were hardly compatible with disinterested debate. Indeed, often the tactics were the same as those used by the tobacco industry to deny the deleterious effects of cigarette smoking. In some cases, exactly the same individuals who had made “skeptic” careers with the tobacco industry simply added climate skepticism to their professional portfolios. A typical skeptic tactic is to “mine” reputable publications so as to juxtapose quotes from several different sources, with each quote taken out of context. With this tactic, they can easily amplify disagreements between scientists and exaggerate the significance of any errors – no matter how small – thereby denigrating the entire scientific enterprise.
Finally, the sociology and politics of climate scepticism are significant: at least in the US under the aegis of religious fundamentalism, there has been a convergence of creationist and climate-skeptic organisations and both exploit the same media outlets such as Fox News and the same mouthpieces such as Glenn Beck or Rush Limbaugh. The political right wing clearly sees climate change as a threat to its core mission of furthering unbridled capitalistic economic growth ([Klein, 2014]; see also [Mann, 2012] for an excellent overview by an active participant).
A new, simple, GCM-free demonstration of anthropogenic warming
No scientific theory can ever be proven in a mathematically rigorous way; there is always room for “reasonable doubt”. In the case of anthropogenic warming, we have seen that the demonstrations have almost all involved massive computer models whose outputs are often difficult even for specialists to understand. Since no models are perfect, this has comforted the skeptics, whose fundamental argument is that the models are wrong and that the warming is natural. In this section, I would therefore like to outline some new work that essentially demolishes this smug position. The starting point was a challenge by the Quebec Skeptic Society in 2012: “If the warming is so strong then why does it take a supercomputer to demonstrate it?” (the resulting presentation answering this question can be found on my web site
http://www.physics.mcgill.ca/~gang/Lovejoy.htm and is summarized in [Dubé, 2013]). As we shall see, the resulting GCM-free demonstration allows for a more powerful result: the rejection of the skeptics’ hypothesis that the warming is no more than a giant century long natural fluctuation. Note that due to a fundamental asymmetry in scientific methodology, it is much easier to disprove a theory than to prove one, so that this result effectively closes the scientific, intellectual debate, [Lovejoy, 2014d], [Lovejoy, 2014a], [Lovejoy, 2014b].
We have discussed anthropogenic emissions of the Greenhouse Gas CO2 as responsible for a positive climate forcing (a “forcing” is something that effectively changes the earth’s heat uptake). There are, however, other significant anthropogenic forcings, the main ones being the other Greenhouse Gases (GHGs, notably methane), but also aerosols (particulate pollution) that tend to “brown” the atmosphere, cooling it by reflecting solar radiation back into space. A third (smaller) forcing arises from changes in land use, notably the conversion of natural habitat to agricultural tracts, including the destruction of the tropical rain forests. Many of these effects are very difficult to quantify (especially the aerosol cooling); this, in addition to the water vapour and cloud feedbacks, is responsible for the current large uncertainties (1.5–4.5 °C per CO2 doubling). However, even without understanding each of these forcings in detail, they are all strongly tied together by economics: to a good approximation, double the world economy and you double the CO2, double the methane and aerosol outputs, double the land use changes and double their effects. This is the justification for using the relatively well measured global CO2 forcing since 1880 as a linear surrogate for all the anthropogenic forcings. For example, historic data on all the GHG concentrations show that the total of all their forcings is almost exactly 79% larger than the CO2 forcing taken alone.
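For readers who want to see what a "CO2 forcing" is numerically, the standard simplified expression (not given in the text, but widely used in climate science) is F = 5.35 ln(C/C0) W/m², where C0 is a reference concentration; the 277 ppm baseline below is an assumption for illustration.

```python
import math

def co2_forcing(c_ppm, c0_ppm=277.0):
    """Simplified CO2 radiative forcing in W/m^2: F = 5.35 * ln(C/C0).
    The 277 ppm pre-industrial baseline is an assumed value for illustration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# The forcing from a CO2 doubling is the same whatever the baseline:
print(co2_forcing(2 * 277))   # ~3.71 W/m^2 per doubling
# Forcing at the ~400 ppm level reached in recent years:
print(co2_forcing(400))       # ~1.97 W/m^2
```

Because the forcing depends only on the ratio C/C0, "per CO2 doubling" is a natural unit, which is why climate sensitivities throughout the text are quoted in °C per doubling.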
Figure 1 shows the result when the global annual temperature is plotted not in the usual way – as a function of the date – but rather as a function of the CO2 forcing. Without fancy statistics or special knowledge it is easy to see that the temperature is very nearly increasing as a straight line (a linear relation) with superposed fluctuations; these represent the natural variability. The natural variability arises not only as a consequence of complex nonlinear interactions that are internal to the climate system (especially within the atmosphere – ocean system), but also as a response to natural external forcings, especially due to solar variability and volcanic eruptions.
From fig. 1 we can estimate the total anthropogenic warming since 1880 from the total vertical range of the straight line: we find that there has been about 1 °C of warming, and the slope of the line – the “effective climate sensitivity” – is 2.33 °C per CO2 doubling. This represents the actual historical increase in temperature due to the observed increase in CO2, taking into account (implicitly) the other anthropogenic effects. This is comfortably within the IPCC 1.5–4.5 °C range (as mentioned, the latter is for the slightly different quantity, the “equilibrium climate sensitivity”). The weakness of this estimate is that it assumes that it is this year’s CO2 concentration that determines this year’s anthropogenic temperature increase, whereas in actual fact much of the excess heat goes into warming the ocean and there is a lag before this heats the atmosphere (although due to feedbacks, things are even more complicated!). We can partially take this into account by redoing the calculation relating the temperature to the CO2 forcing 20 years earlier. When this is done, the basic conclusions are barely modified, although the slope of the line is a bit steeper, indicating a sensitivity of 3.73 °C per CO2 doubling; detailed statistical analysis of the two estimates leads to an overall estimate of the “effective climate sensitivity” of 1.9–4.2 °C per CO2 doubling (with 95% confidence), which lies in the middle of the IPCC range (1.5–4.5 °C) yet is somewhat more precise.
Fig. 1: Global temperature anomalies (NASA, 1880-2013) as functions of radiative forcing, using the CO2 forcing as a linear surrogate. The line has a slope of 2.33 °C per CO2 doubling. Some of the dates and corresponding annually, globally averaged CO2 concentrations are indicated for reference; the dashed vertical lines indicate the beginning and end of the events discussed in the text (1944, 1976, 1992, 1998). Adapted from Fig. 1a, [Lovejoy, 2014c].
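The slope-fitting step behind this estimate is just a linear regression of temperature against the logarithm of CO2 concentration. The sketch below uses synthetic data (an assumed "true" sensitivity plus noise standing in for natural variability), not the actual NASA record, simply to show that the slope of temperature versus log2(CO2) directly reads off the sensitivity in °C per doubling.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the 1880-2013 record: CO2 rising from ~290 to
# ~395 ppm, with temperature following log2(CO2) at an assumed "true"
# sensitivity of 2.33 degC per doubling, plus noise mimicking natural
# variability. (Illustrative values, not the actual data.)
co2 = np.linspace(290.0, 395.0, 134)
x = np.log2(co2 / co2[0])           # number of CO2 doublings since the start
true_sensitivity = 2.33             # degC per CO2 doubling (assumed)
temp = true_sensitivity * x + rng.normal(0.0, 0.1, size=x.size)

# The fitted slope of temperature vs. log2(CO2) is the effective sensitivity.
slope, intercept = np.polyfit(x, temp, 1)
print(f"recovered sensitivity: {slope:.2f} degC per CO2 doubling")
```

The same regression redone against the lagged CO2 series (as described in the text) would yield the steeper, ocean-delay-corrected slope.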
The difference between the black straight line in fig. 1, which represents the anthropogenic contribution to the temperature, and the actual temperature (the variable green line) is an estimate of the natural variability: it is shown directly in fig. 2. In this plot we can directly see the large natural variations of the global temperature. For example, consider the period 1998-2013: the “pause”. In recent years, climate skeptics have trumpeted this hiatus in the warming as evidence that the warming has stopped, so that the warming cannot be anthropogenic. In figure 1 the period since 1998 appears as a relatively stationary (flat) fluctuating line; fig. 2 shows that it is actually a natural cooling event that is sufficiently large (≈ 0.3 °C) that it has masked the more or less equal anthropogenic warming over the period (the black line in fig. 1). While this cooling is somewhat unusual, it is not rare: analysis shows that similar 15-year coolings recur naturally every 20–50 years [Lovejoy, 2014c]. And when one takes the context into account – i.e. that it immediately follows the even larger pre-pause warming event (1992-1998) – the pause can be understood simply as a “return to the mean” type of behaviour. From fig. 1 we see that it is only since 2012 that the global temperature has crossed below the long-term anthropogenic trend line. Indeed, the theory of anthropogenic warming predicts that there would be a pause starting at about 1998: without the pause, the warming would have been so strong as to invalidate the theory! Further analysis shows that if the emissions continue to increase at their current rates, the pause would have to persist until 2019-2020 before the anthropogenic warming hypothesis could be rejected with 95% confidence.
Fig. 2: The residuals from the straight line in the above figure; these are the estimates of the natural variability. The vertical dashed lines are the same as in the previous figure. The arrows indicate the events discussed in the paper. Adapted from Fig. 1c, [Lovejoy, 2014c].
Finally, further statistical analysis with the help of estimates of pre-industrial temperatures (from 1500-1900, using proxy data such as tree rings, lake sediments, ice cores, etc.) allows us to estimate the probability that the warming since 1880 was simply a giant fluctuation; it was shown that this natural warming hypothesis may be rejected with 99 to 99.9% confidence ([Lovejoy, 2014d]).
The skeptics’ reaction
The conclusions – that the pause is natural but that the industrial warming is not – provoked a strong reaction from the skeptics. Within hours of the McGill press release announcing the [Lovejoy, 2014d] paper, the skeptic majordomo Viscount Lord Christopher Monckton of Brenchley set the tone by qualifying the paper as a “mephitically ectoplasmic emanation of the forces of darkness”. Over the next few weeks I was fed a steady stream of abuse via emails and twitter feed. There was even an attempt by the Calgary-based group with the Orwellian name the “Friends of Science” to bully McGill University into removing the press release from its site. Yet scientific criticism of the work has been feeble. A common reaction has been to use historical information – especially on the relatively warm temperatures in Medieval Europe – to argue that 800 years ago global temperatures might have been warmer than today, and that if so, this would contradict the analysis. But this misses the key point: it is large temperature changes over 125-year periods that have a low probability of occurrence – there is nothing to prevent the same changes occurring much more slowly (i.e. over much longer periods). A related error is to draw conclusions from paleo or instrumental sources displaying large centennial-scale temperature changes that are only representative of a small region. For example, on their site, the “Friends” trumpet a 0.90 °C temperature change in central England from 1663-1762, claiming that this debunks [Lovejoy, 2014d]. But even England in its entirety covers only 0.04% of the earth’s surface – it is hardly representative of the globe. In actual fact, for the same period, the global-scale temperature change was only 0.21±0.12 °C, which is indeed far smaller than the global industrial warming (≈ 0.9 °C).
What is to be done?
I have attempted to make the anthropogenic warming hypothesis as widely accessible and as convincing as possible. This research may still be a “work in progress”, yet it is robust enough to deprive climate skeptics of their remaining arguments that the models are wrong and the variability is natural. While honest scientific scepticism is fundamental for the advancement of science, for anthropogenic warming, the science has now reached a point of closure where remaining areas of doubt have become sufficiently insignificant that it is time to move on. Those who persist in affirming that the warming is natural should no longer be qualified by the respectable term “skeptic”, they are no more than deniers.
But what are the implications? To start with, there are some general consequences of which only the amplitudes are uncertain. For example, increasing temperatures will cause the oceans to expand and the ice caps to melt, contributing to the general rise of sea levels and the flooding of low-lying regions. Similarly, as CO2 levels increase, the oceans will continue to take up part of the excess. The dissolved CO2 forms carbonic acid, which will increasingly attack corals and shellfish (by dissolving their hard carbonate parts). In addition, the warming is rapid on biological time scales, so that not all organisms will have time to adapt: there will be extinctions of many species. General health impacts include a higher mortality rate directly due to higher extreme temperatures, and also a greater overall burden of disease permitted by higher temperatures, particularly malaria. There will also be some benefits: increasing CO2 effectively fertilizes plant growth, leading to higher agricultural productivity (although see below). Also, some northern areas (such as Quebec) could benefit from more clement conditions; indeed the “paradox of northern biodiversity” is that – even as it leads polar species to extinction – the warming encourages southern-dwelling species to move further north, locally increasing biodiversity.
These general consequences will occur to varying degrees depending on the extent of the warming. But a big problem is their uncertain magnitudes: exactly how much warming should we expect? In the words of Tim Palmer, president of the Royal Meteorological Society (2012): “…due to profound uncertainties, primarily with the hydrological cycle, we are still unable to rule out the possibility that anthropogenic climate change will be catastrophic for humanity over the coming century, or something to which we can adapt relatively easily…” [Palmer, 2012].
The problem is that many of the consequences of warming are not simply incremental and quantitative: many are qualitative, involve “tipping points” and are potentially catastrophic. For example, in addition to the general effects mentioned above – starting at the low end (≈ 1.5 °C of warming) and somewhat paradoxically – there will be increased water stress (droughts) simultaneous with increased damage from floods. At levels between 2 and 3 °C of warming, the consequences in the previous general list become stronger, but in addition there begin to be risks of “tipping points” such as the complete melting of the ice caps, huge sea level rises and irreversible climate change. Indeed, an overall assessment of the situation – taking into account that the current CO2 concentration is ≈ 400 parts per million (ppm), higher than at any time in the last 650,000 years – led to the famous conclusion that levels in excess of only 350 ppm are “not compatible with the planet on which civilization developed or to which life on earth is adapted” [Hansen et al., 2008]. The risk of dire consequences is why, twenty years ago, the international community agreed that temperatures should not rise more than 2 °C above the pre-industrial level (the basis, for example, of the Kyoto accords to limit emissions to 1990 levels; we are already half way there). Ironically, based on new science, the [Hansen et al., 2008] update concludes that this 1990s goal is already too high for safety.
What is to be done?
In the context of environmental change we often hear the innocent-sounding slogan “save the planet”. While this may be well intentioned, no matter what we do, the planet will survive. The humanist position is rather: “save the humans”. This difference is more than a nuance: “save the planet” corresponds more closely to the explicitly anti-humanist “Gaian” ideology of James Lovelock, for whom the earth is (literally) a “superorganism” and humanity no more than a single microscopic component. His Gaian goal is to save Gaia, i.e. to maintain this hypothetical “living” planet with or without humans.
If emissions stopped tomorrow, the CO2 levels would stay high and temperatures would continue to rise while the heat already stored in the oceans continues to heat the atmosphere; then – assuming there are no catastrophic and irreversible changes – they would slowly diminish over centuries. Therefore – no matter what else is done – we must mitigate the consequences of the warming. But how can we keep temperatures below the – admittedly somewhat arbitrary – level of 2 °C? Barring an unlikely technological breakthrough that allows us to economically sequester (remove) CO2 from the atmosphere and harmlessly bury it, emissions must decrease, and this primarily means reducing CO2 from fossil fuel burning. The trouble is that we’re addicted to fossil fuels: over the last 100 years, economic growth and fossil fuel consumption have grown hand in hand, and today fossil fuels account for 80% of global energy usage (see [Evans, 2007]). The IPCC working group on mitigation and adaptation – largely the work of economists – has developed several “emission scenarios” for global economic development up until the year 2100. These involve either 2%/yr or 3%/yr of overall economic growth while simultaneously cutting back on emissions sufficiently to keep the temperature under the 2 °C limit. In spite of the upbeat tone of their reports, the fact is that even the 2%/yr scenarios imply that in the year 2100 roughly 90% of our energy consumption will come from technologies that don’t yet exist (see especially [Pielke et al., 2008] for a critique). Indeed, the only currently existing technology that could conceivably underwrite these scenarios is nuclear power, and even conventional reactors are expected to run out of fuel before the end of the century. This option would require the transition to breeder reactor technology, which has barely been proven and is very expensive (see [Hoffert et al., 2002]).
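The compound-growth arithmetic behind these scenarios is worth making concrete. This is my own rough illustration, not the IPCC's actual calculation: at 2%/yr or 3%/yr growth, the 2100 economy is several times today's, so if total emissions are even to stand still (let alone fall), the carbon intensity of the economy must shrink by the same factor.

```python
# Rough compound-growth arithmetic (an illustration, not the IPCC's
# actual scenario calculations). Horizon 2015-2100 is an assumed choice.
years = 2100 - 2015
for rate in (0.02, 0.03):
    growth = (1 + rate) ** years       # how many times bigger the economy is
    # To merely halve total emissions while output grows by `growth`,
    # emissions per unit of output must fall by a factor of 2 * growth:
    intensity_cut = 1 - 1 / (2 * growth)
    print(f"{rate:.0%}/yr growth: economy x{growth:.1f}, "
          f"carbon intensity must fall by {intensity_cut:.0%}")
```

Even the milder 2%/yr case multiplies the economy more than fivefold by 2100, which is why the scenarios end up relying so heavily on energy technologies that do not yet exist.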
Finally, as attractive as they might first appear, existing solar and wind energy sources can’t step in at the required level – at least not anytime soon. They are highly intermittent so that without storage, they can only supply roughly 10% of the electricity on national grids [Evans, 2007]. Adding in storage greatly increases the cost and doesn’t obviate the fact that due to their low energy densities, huge tracts of land (in some scenarios, millions of square kilometers) will be needed if they are used to fully replace existing fossil fuels.
But neoliberal economics teaches that – if the price is right – forget the laws of nature, the magic of the market has no limits! Just send the right “price signals” by creating a carbon market – or if you’re uncouth enough – by imposing a carbon tax and the market will respond by spontaneously conjuring forth carbon free, green energy technologies. Worried that the new technologies will be too expensive? In the year 2100, the economy will be 5 or more times more productive so don’t stress: borrow now, easy payments.
The above is barely a caricature. Consider the position of leading Yale economist William Nordhaus, who in Science (1993) stated: “Agriculture, the part of the economy the most sensitive to climate change, accounts for just 3% of national output. That means that there is no way to get a very large effect on the US economy”. This was echoed by Oxford economist Wilfred Beckerman in Small is Stupid (1995): “even if the net output of US agriculture fell by 50% by the end of the next century, this is only a 1.5% cut in GNP” (cited in [Foster et al., 2010]), a position parroted by economics Nobel prize winner Thomas Schelling in Foreign Affairs (1997). One might be forgiven for concluding that if climate change made all agriculture impossible, the economy would contract by a mere 3%…
Since then, Nordhaus’ position has evolved: in a recent book [Nordhaus, 2008], his estimate of the reduction of global economic output in the year 2100 due to climate change was increased from 1% to 3% of GDP. Counterpose this to the “fear mongering” (Wall Street Journal) report by former chief economist at the World Bank, Nicholas Stern (“The Stern Review”, 2007). The key difference between the Nordhaus and Stern analyses is the discount rate. The discount rate quantifies how much future benefits are worth today; it is an attempt to take into account the fact that if the economy continues to grow, the costs will take up a smaller and smaller fraction of it. For example, with a discount rate of 10%, a catastrophe affecting humanity fifty years from now would have a present value of less than one percent of the future cost. Using the Nordhaus and Stern discount rates of 6% and 1.4% respectively, in order to avert a trillion dollars of climate damage (reduced output) in the year 2100, it is only worth spending today respectively $2.5 billion or $247 billion (see the excellent discussion in [Foster et al., 2010]). Clearly, when it comes to climate change, the value of the discount rate is largely an ethical issue about how much future generations should pay in order to remedy the damage caused by today’s emissions, while taking into account projections of future growth rates. The main practical difference between Nordhaus and Stern is thus the cost and – for policy – how much to tax carbon. To summarize: the mainstream economists believe that continued exponential economic growth is needed – if only so that the consequences of mitigating and adapting to climate change will be easily affordable.
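The present-value arithmetic can be checked directly: a cost F incurred t years from now is worth F/(1+r)^t today at discount rate r. Assuming a roughly 100-year horizon to 2100 (an assumption; the exact horizon is not stated in the text), the two rates give figures of the same order as the $2.5 billion and $247 billion quoted.

```python
def present_value(future_cost, rate, years):
    """Value today of a cost incurred `years` from now at discount `rate`."""
    return future_cost / (1 + rate) ** years

damage = 1e12    # $1 trillion of climate damage in 2100
horizon = 100    # assumed ~100-year horizon (not stated exactly in the text)

# Nordhaus-style 6% vs. Stern-style 1.4% discounting:
print(f"at 6%:   ${present_value(damage, 0.06, horizon) / 1e9:.1f} billion")
print(f"at 1.4%: ${present_value(damage, 0.014, horizon) / 1e9:.1f} billion")
# A few billion versus a few hundred billion: the chosen discount rate,
# not the climate science, drives the two-orders-of-magnitude policy gap.
```

This is why the text calls the discount rate largely an ethical choice: the same trillion-dollar damage justifies either trivial or very substantial spending today, depending only on r.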
Consider some of the standard assumptions a little more closely. To start with, they do not take into account the consequences of possible tipping point catastrophes, but neither do they seriously take into account the more predictable threats. This is highlighted by Stern’s recommendation that CO2 levels be stabilized at 550 ppm – even though this would almost certainly lead to the 2°C (and likely 3°C) thresholds being crossed. Why? Because the higher level avoids potential economic disruption: economics trumps humanity. But even more fundamental than ignoring possible catastrophic change is the assumption that continued exponential growth is possible. On the basis of the debt crisis combined with the high prices of fossil fuels and other raw materials, several authors (e.g. Richard Heinberg ([Heinberg, 2012]), Jeff Rubin ([Rubin, 2012])) have argued that real economic growth – at least for the developed countries – is already over.
But even if there are no tipping points, even if the consequences of 3°C or even 4°C temperature increases could be mitigated, even if the necessary low carbon growth is indeed possible without destroying the environment and the climate, humanists must ask: is continued quantitative growth desirable? It is ironic that humanists must pose this question today, since for generations – certainly since 19th century socialist humanism – they have taken for granted the desirability of and need for quantitative economic growth in order to better the human condition. Classical economics – whether liberal or Marxian – was formulated in an epoch when climate change and resource depletion were either undreamt of or so remote as to be of only academic relevance.
Certainly the world’s poor countries justifiably want to grow in order to bring themselves out of their relative misery, but what of the developed world? The case of Canada is typical. Since 1980, per capita Gross Domestic Product (GDP) has roughly doubled, yet median family income has stagnated; virtually all the increase in economic activity has gone into the pockets of the top 20%, mostly to the top 1% (Statistics Canada). This trend has been largely repeated throughout the “globalized” world economy, clearly showing that since roughly 1980 it isn’t population growth that is responsible for devouring resources and degrading the environment ([Monbiot, 2011]). In his seminal book “Capital in the Twenty-First Century”, Thomas Piketty [Piketty, 2014] puts the recent explosion of inequality into a broad historical context using an original analysis of historical economic data. His main point is simply that in its historically “normal” development, the rate of return on capital is larger than the growth rate of the economy as a whole, so that those owning capital systematically increase their share of the overall economy; the period 1930 – 1970 turns out to be quite exceptional in its more meritocratic character. The twenty-first century is thus set to resemble the nineteenth, in which inherited access to capital is (more than ever) the determinant of social class, wealth, income and power.
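The r > g mechanism can be made concrete with deliberately invented numbers – a toy sketch assuming that returns on capital are fully reinvested, not a reproduction of Piketty’s data or model:

```python
# Toy illustration of the r > g dynamic (all numbers invented):
# if capital earns a return r that is fully reinvested while total output
# grows at a slower rate g, capital's weight in the economy keeps rising.

def capital_output_ratio(r, g, years, initial_ratio=3.0):
    """Evolve a capital/output ratio under reinvested return r and growth g."""
    capital, output = initial_ratio, 1.0
    for _ in range(years):
        capital *= 1 + r  # returns fully reinvested (a strong assumption)
        output *= 1 + g   # the rest of the economy grows more slowly
    return capital / output

# Assumed rates: 5% return on capital versus 1.5% overall growth
print(f"after a century: {capital_output_ratio(0.05, 0.015, 100):.0f}x output")
```

Even this crude compounding shows the point: a modest, persistent gap between r and g multiplies capital’s weight relative to output many times over within a century, whereas with r = g the ratio stays fixed.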
But GDP is a very poor indicator of economic well-being. Dig a hole, the GDP goes up; fill the hole in, the GDP goes up. Destroy a country (Iraq), the GDP goes up; rebuild the country, the GDP goes up. Build a factory, the GDP goes up; clean up its pollution, the GDP goes up. And so it goes. This inadequacy is why some economists have developed other indicators of economic well-being, especially the Genuine Progress Indicator (GPI). Instead of always adding (as in the examples above), the GPI attempts to subtract where there should be a subtraction. It is therefore quite possible that building a factory could end up decreasing rather than increasing the GPI. This would occur if its impact on the environment and the social costs imposed on present and future generations – what mainstream economists call “externalities” – were appropriately factored in. When this is done, some very interesting trends emerge. For example, estimates of the evolution of the GPI since the 1960s for over twenty diverse economies have shown that up to about $7000/yr/capita, the GPI and GDP are closely related; beyond this level there is an inverse relation, so that increasing GDP implies a lower GPI. On this reckoning, since about 1975 the developed countries have become increasingly poor, not increasingly prosperous (see the review of international studies in [Kubiszewski et al., 2013], and details for Quebec in [Meade, 2011]). Part of the poverty is hidden in the sense that it reflects the costs of environmental destruction and resource depletion that will be borne by future generations. A somewhat different way to understand this is through the related idea of carrying capacity, quantified by the human “ecological footprint” (see: http://www.footprintnetwork.org/en/index.php/GFN/page/footprint_basics_overview/).
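The factory example can be made concrete with invented figures – a hypothetical accounting sketch of the GDP/GPI difference, not the actual GPI methodology, which weighs many more components:

```python
# All figures invented for illustration. GDP counts every market transaction
# as positive; a GPI-style account subtracts defensive spending and
# uncompensated harms ("externalities").
factory_output       = 100.0  # market value of goods produced
cleanup_spending     =  30.0  # pollution remediation (also market activity)
environmental_damage =  45.0  # harm borne by present and future generations

gdp_contribution = factory_output + cleanup_spending
gpi_contribution = factory_output - cleanup_spending - environmental_damage

print(gdp_contribution)  # 130.0 -- the cleanup *adds* to GDP
print(gpi_contribution)  #  25.0 -- the costs are subtracted
```

Push the environmental damage past 70 and the factory’s GPI contribution turns negative while its GDP contribution is unchanged – precisely the sense in which GDP growth can mask impoverishment.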
For example, it has been estimated that since 1978, the global per capita ecological footprint has exceeded the per capita biocapacity, and today it exceeds this by more than 50%.
From a humanistic point of view, stopping economic growth – at least for the developed countries – would thus seem to be a sine qua non both for stopping emissions growth and for preventing our further collective impoverishment by free-market driven GDP growth. Similar arguments against continued growth have been made by Richard Heinberg ([Heinberg, 2012]), Hervé Kempf ([Kempf, 2011]) and Jeff Rubin ([Rubin, 2012]). Yet ending growth – which in and of itself will not solve the climate problem – will be incredibly difficult to achieve, since the end of exponential economic growth over an extended period spells the end of the system. It is enough to recall that just recently – in the 2008 – 2009 crash – we witnessed the immediate consequences of stopping growth for only one to two years. Longer term halts will require alternative economic models and fundamental political and social change. We have already hinted at the difficulty of such a transition with reference to the 1% who have swallowed up much of the growth over the last 30 – 40 years. A more accurate measure of the difficulty can be gathered by considering not the concentration of wealth but rather the concentration of economic control and hence power. A pioneering study by [Vitali et al., 2011], using tools of network analysis borrowed from statistical physics, analyzed a database of 37 million companies and discovered that a core of just 147 of them controlled 40% of the value of the world’s transnational corporations. This is closer to 0.0004% than to 1%. Such concentrated power – vested in organizations that only exist by extracting a nonzero return on investment – will likely be refractory to rational discourse.
Brulle, R. J. (2014), Institutionalizing delay: foundation funding and the creation of U.S. climate change counter-movement organizations, Climatic Change 122, 681-694.
Dubé, L. (2013), Qu’est ce que c’est le climat: un regard sceptique sur les climato-sceptiques, Québec Sceptique, 81, 57-64.
Evans, R. L. (2007), Fueling our Future: an introduction to sustainable energy, Cambridge Univ. Press, Cambridge, U.K.
Foster, J. B., B. Clark, and R. York (2010), The Ecological Rift, Monthly Review Press.
Hansen, J., M. Sato, P. Kharecha, D. Beerling, R. Berner, V. Masson-Delmotte, M. Pagani, M. Raymo, D. L. Royer, and J. C. Zachos (2008), Target Atmospheric CO2: Where Should Humanity Aim?, The Open Atmospheric Science Journal, 2, 217-231, doi: 10.2174/1874282300802010217.
Heinberg, R. (2012), The End of Growth, The Post Carbon Institute.
Hoffert, M. I., et al. (2002), Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet, Science, 298, 981- 987.
Kempf, H. (2011), Pour Sauver la planète, sortez du capitalisme, Editions Seuil.
Klein, N. (2014), This changes everything: Capitalism versus the Climate, Random House.
Kubiszewski, I., R. Costanza, C. Franco, P. Lawn, J. Talberth, T. Jackson, and C. Aylmer (2013), Beyond GDP: Measuring and achieving global genuine progress, Ecological Economics, 93, 57-68.
Lovejoy, S. (2013), What is climate?, EOS, 94, (1), 1 January, p1-2.
Lovejoy, S. (2014a), Climate Closure: Game over for climate skeptics, EOS (submitted).
Lovejoy, S. (2014b), Opinion: Research shows that global warming isn’t natural, Op-Ed, in The Gazette, edited, Montreal.
Lovejoy, S. (2014c), Return periods of global climate fluctuations and the pause, Geophys. Res. Lett., 41, 4704-4710, doi: 10.1002/2014GL060478.
Lovejoy, S. (2014d), Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming, Climate Dynamics, 42, 2339-2351 doi: 10.1007/s00382-014-2128-2.
Lovejoy, S. (2014e), A voyage through scales, a missing quadrillion and why the climate is not what you expect, Climate Dyn., in press.
Manabe, S., and R. T. Wetherald (1975), The effects of doubling the CO2 concentration on the climate of a general circulation model, J. Atmos. Sci., 32, 3-15.
Mann, M. E. (2012), The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, Columbia University Press.
Meade, H. (2011), L’indice du progrès véritable du Québec: Quand l’économie dépasse l’écologie, Editions MultiMonde.
Monbiot, G. (2011), The Population Myth, in The Global Warming Reader, edited by B. McKibben, pp. 269-273, Penguin Books, N.Y., N.Y., USA.
Nordhaus, W. D. (2008), A Question of Balance: Weighing the Options on Global Warming Policies, Yale University Press.
Palmer, T. N. (2012), Towards the probabilistic Earth-system simulator: a vision for the future of climate and weather prediction, Q.J.R. Meteorol. Soc., in press.
Pielke, R. J., T. Wigley, and C. Green (2008), Dangerous assumptions, Nature, 452, 531-532.
Piketty, T. (2014), Capital in the 21st Century, Harvard University Press.
Rubin, J. (2012), The End of Growth, Random House Canada.
Sherwood, S. (2011), Science controversies past and present, Physics Today, Oct., 39-44.
Vitali, S., J. B. Glattfelder, and S. Battiston (2011), The Network of Global Corporate Control, PLoS ONE, doi: 10.1371/journal.pone.0025995.
Weart, S. R. (2008), The discovery of Global Warming, Harvard Univ. Press.