Sunday, May 26, 2019

Climate Sensitivity and the Carbon Budget

Sensitivity and the Carbon Budget 
The Ultimate Challenge of Climate Science 
David Wasdell (Director of the Apollo-Gaia Project)
June 2014.
The Presentation was introduced on 4th March at the Climate Challenge 2014 Conference, Convened by Climate Change Solutions, and held in the I-Max Theatre of the Millennium Point complex in Birmingham during International Climate Week. It has subsequently been completely revised and up-dated for wider circulation. For ease of access, the visual basis of the analysis has been retained, and the commentary kept in the style of spoken English. Essential references are provided as active links within the text. 
I am most grateful to Tony McNally, Director of Climate Change Solutions for his invitation to introduce this presentation during the session on the Science Challenge of Climate Change. 
If Climate Change Solutions are to be fit for purpose, they must deal with the reality of Climate Change Problems. The role of Climate Science is to delineate those problems. My subject of “Sensitivity and the Carbon Budget” raises two of the most fundamental issues that determine our strategy, against which we must judge the effectiveness of our solutions, and in the light of which we can assess the appropriateness of our policy. 

Brief Introduction to the Apollo-Gaia Project 
Before I turn to my main subject, a few words of introduction about the Apollo-Gaia Project: 


The project was conceived in 2005 to focus on the energy interface between the sun and the earth in the light of human modification of the composition of the terrestrial atmosphere. Of particular concern was the impact of the complex system of feedback processes that were being brought into play by the increase in concentration of atmospheric carbon dioxide and by the consequent rise in average surface temperature of the planet. We sought definitive answers to two critical questions: 
1. By how much does the natural planetary climate system amplify the greenhouse effect of the CO2 emissions? In other words, how sensitive is the climate to human disturbance? 
2. Is there a critical threshold beyond which the world system moves into self-amplification, or runaway behaviour, and if so, then what boundary conditions are involved? 
Our initial approach was to build an inclusive conceptual model of the global climate system involving all the drivers and feedback processes. The next step was to develop a topology, or three-dimensional map, that combined the dynamics of the natural system with the capacity of the human response. 


Increase in average surface temperature (and to some extent increase in CO2 concentration) activated clusters of feedback mechanisms that enhanced the rate of global heating, slowly driving up the temperature and accelerating the feedback system, and so on. 
The ridge of the “wave” topology illustrated the tipping-point between equilibrating and runaway dynamics, initiated when the strength of the amplifying (positive) feedback system exceeded that of the damping (negative) feedback. Human resources required to stabilise the system escalate to a critical threshold beyond which no recovery is possible, exacerbating the sixth major extinction event in the history of planet earth. The blue pathway mapped the progress of environmental change since before the start of the industrial revolution. It reached a “trifurcation” where the survival pathway diverged from business as usual and its modification by the Kyoto Protocol. Intuitively the survival pathway just avoided the onset of runaway behaviour and re-stabilised the climate system after recovery from a period of “overshoot” in the drivers of global heating. 
That initial analysis led to an urgent invitation from the then President of the Club of Rome to attend their next annual conference at which he introduced our seminal paper “The Feedback Crisis in Climate Change” as “being worthy of note”. I had the privilege to make a brief presentation (hailed as “a call to arms to the Club of Rome”) which concluded with the following proposal: 
One year later the Apollo-Gaia Project was launched in Washington, Brussels and London, and co-ordinated around a radical mission statement: 
The rest is history, the details of which can be explored at: http://www.apollo-gaia.org/A-GProjectDevelopment.pdf. In September 2013 I was again invited to the annual assembly of the Club of Rome, this time to give the conference keynote presentation based on our last eight years of work. It was entitled “Sensitivity, Non-Linearity and Self-Amplification in the Global Climate System” and forms the basis for today’s subject to which we must now return. 

Two Forms of Sensitivity 
Let us distinguish between two distinct forms of system sensitivity. If you look very carefully in the seat you occupy you will discover an excellent example. Your own body demonstrates dual sensitivity. Firstly, your whole system is very sensitive to changes in its core temperature. If your temperature changes by more than about +/-3°C, vital functions begin to degrade. Ultimately your life is put at risk. Secondly, and in addition, your core temperature is itself more or less sensitive to the effects of certain pathogens or infections. As with your body, so with the planet. 
The first form of sensitivity concerns the dynamic response of the planetary climate to small changes in average surface temperature. 
For instance a change of about 5°C makes all the difference between current conditions and having a mile of ice stacked over the Birmingham Bull-ring. Today we experience an increase of only 0.85°C above the pre-industrial bench-mark, but already we see profound changes in our planetary climate: changes in rainfall patterns; flooding; droughts; wild-fires; heat-waves; unpredictability; disruption of food-production; sea-level rise; storm surges; intensity of hurricanes (or cyclones); coastal erosion and threat to low-lying coastal areas; sharp rise in the rate of species extinction; migration of habitat of fauna and flora in both latitude and altitude; glaciers are in retreat all around the world; frost intensity is lower and less frequent; snow-fall and duration are diminished; spring comes several weeks earlier and autumn sets in later; permafrost is melting and ice-caps are losing mass. 
As a sub-system of the global dynamic, the Arctic is showing the most extreme form of sensitivity to small changes in average surface temperature. 
While the global average has increased by 0.85°C, the Arctic temperature has risen by about 3°C. Area and volume of end-of-summer floating sea-ice are collapsing. Resulting change in reflectivity has already contributed to global heating at a level of some 25% of the impact of rising CO2 concentrations. Local effects are much stronger. Raised Arctic temperatures drive increased water-vapour concentration, a strong greenhouse gas feedback. Warmer run-off from northward flowing rivers enhances rate of ice-melt. As floating sea-ice gives way to open water in the shallow coastal seas, heat is transmitted down to the ocean floor, melting sub-marine permafrost. Previously trapped methane is released to the atmosphere, adding to that already emanating from thawing land-based permafrost and accelerating the local greenhouse effect. As air and ocean surface temperature increases, the surface melt and mass-loss from the Greenland ice-cap accelerate their contribution to rising sea-level. Arctic pressure systems and weather patterns are disrupted. 
The Arctic acts as the canary in the coal-mine of the Global Climate system. The canary just died. It is time to get out of the coal-mine! (And you can take that comment in several different ways!) 
Another unexpected consequence of Arctic warming is the disruption of the circum-polar jet stream. As the difference between polar and mid-latitude temperature shrinks, so the jet-stream relaxes, its meanders become more and more pronounced and they progress more slowly round the globe. At times the pattern blocks, giving rise to long periods of intensely cold Arctic weather carried on the south-bound air-stream, alternating with patterns of warmer weather and intense rainfall as mid-latitude air-masses are driven northward. The energy-exchange enhances Arctic warming and accelerates the system disturbance. The southern loops of the meanders disrupt the mid-latitude jet-stream with knock-on effects for monsoon-patterns across Asia. (For extended treatment see: http://www.apollo-gaia.org/ArcticDynamics.html)
So we see that the Global Climate System is extremely sensitive to small changes in average surface temperature, much more so than had been understood when the policy target of not exceeding a rise of 2°C was proposed and accepted by the international community. Dangerous climate change is already with us as a consequence of a change of only 0.85°C. Treating the 2°C target as the boundary of “safe climate change” is an illusion which must now be abandoned and replaced with a more realistic ceiling of not more than 1°C above the pre- industrial level. 
Now we can move on to explore the second form of sensitivity, namely the sensitivity of temperature itself to changes in the concentration of atmospheric carbon-dioxide. 
If we are to answer the question: “By how much does the natural system amplify the greenhouse effects of changes in atmospheric concentration of carbon dioxide?” then we must firstly determine what those effects are. 
This next slide illustrates the radiation characteristics of a set of planets whose surface temperature ranges between 200 K and 300 K. The vertical axis marks the energy output, while the horizontal axis shows the wavelength of the infra-red radiation. The smooth white curves are characteristic of planets with either no atmosphere at all, or whose atmosphere has no greenhouse gas content and is therefore completely transparent to radiation at all wavelengths. 
The yellow trace maps the radiative output of our own planet. Without greenhouse gasses the surface temperature of the earth would average some 255 K (-18°C), and would be inhospitable to life as we know it. Our atmosphere is made up almost entirely of nitrogen and oxygen, neither of which blocks radiation at any wavelength. However the presence of trace greenhouse gasses in the form of ozone, carbon-dioxide, water vapour, and methane (effects not labelled) blocks significant amounts of radiation at specific wavelengths. In order to maintain the dynamic equilibrium between energy received from the sun and energy radiated back into space, the surface temperature has to increase by about 33°C (to 288 K or +15°C). That drives radiation outward through the un-blocked wavelength windows at which no greenhouse gasses are operating. 
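For readers who like to check the numbers, the following short Python sketch (an illustrative addition, not part of the original presentation) reproduces the 255 K figure from the standard energy-balance formula. The solar constant and planetary albedo used here are conventional textbook values rather than figures quoted in the text.

```python
# Sketch: effective radiating temperature of a greenhouse-free Earth,
# using the standard energy-balance formula T = [S(1 - a) / (4*sigma)]^(1/4).
# Solar constant and albedo are assumed textbook values, not taken from the paper.

SOLAR_CONSTANT = 1361.0   # W m-2, assumed mean total solar irradiance
ALBEDO = 0.30             # assumed planetary (Bond) albedo
SIGMA = 5.670e-8          # W m-2 K-4, Stefan-Boltzmann constant

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4      # W m-2, averaged over the sphere
t_effective = (absorbed / SIGMA) ** 0.25          # K

print(f"Effective radiating temperature: {t_effective:.0f} K")   # ~255 K (-18 C)
print("Observed mean surface temperature: 288 K (+15 C)")
print(f"Greenhouse warming: ~{288 - t_effective:.0f} C")         # ~33 C, as in the text
```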
The Forcing Effects of Change in CO2 Concentration
Now consider the effect of increasing the atmospheric concentration of carbon dioxide. The yellow trace is pushed further down the scale at the wavelength specific to CO2 absorption. Less energy is radiated to space, so the surface temperature has to increase to compensate and re-balance the equilibrium. As temperature increases, more water-vapour is held in the atmosphere and the methane concentration rises, so blocking even more radiative output. Surface temperature has to increase even further to compensate. That illustrates the basic mechanism by which the natural system develops feedback processes that amplify the effect of increased CO2 concentration. 
A second consequence of adding to the CO2 concentration concerns the increasing saturation of the wave-band at which CO2 blocks infra-red radiation. The higher the concentration, the less effective is its function as a greenhouse gas. The behaviour is well understood and conforms to the simple law that each doubling of the CO2 concentration drives a constant change in the greenhouse effect. It can be treated as constant across the three doublings between 140 ppm and 1120 ppm, and is demonstrated here: 
So for the first doubling of CO2 concentration from 140 ppm to 280 ppm, about 3.7 Wm-2 of additional outgoing infra-red radiation is blocked. That requires an increase in average surface temperature of some 0.97°C to restore the radiative equilibrium. The second doubling from 280 ppm to 560 ppm also blocks an additional 3.7 Wm-2 and leads to a further temperature rise of 0.97°C. The same holds for the third doubling from 560 ppm to 1120 ppm. For the mathematicians among us, the relationship between concentration and efficiency is logarithmic. (See Table 1, p35) 
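As a worked illustration of that logarithmic law, here is a minimal Python sketch (an illustrative addition, using only the 3.7 Wm-2 and 0.97°C values quoted above) that tabulates the forcing and the no-feedback warming across the three doublings.

```python
import math

# Each doubling of CO2 adds a constant ~3.7 W m-2 of blocked outgoing radiation,
# and (ignoring all feedbacks) requires ~0.97 C of surface warming to rebalance.

F_2X = 3.7        # W m-2 per doubling of CO2 (value used in the text)
DT_2X = 0.97      # C per doubling, CO2 acting on its own

def forcing(c_ppm, c_ref=280.0):
    """Radiative forcing (W m-2) relative to a reference concentration."""
    return F_2X * math.log2(c_ppm / c_ref)

def no_feedback_warming(c_ppm, c_ref=280.0):
    """Temperature change (C) from CO2 alone, with no feedback amplification."""
    return DT_2X * math.log2(c_ppm / c_ref)

for c in (140, 280, 560, 1120):
    print(f"{c:5d} ppm: forcing {forcing(c):+5.2f} W m-2, "
          f"no-feedback warming {no_feedback_warming(c):+5.2f} C")
```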
One of the neat consequences is that if we compress the horizontal axis on a log(base 2) scale the same information is displayed as a straight line like this: 
Each doubling in concentration is represented by the same distance along the horizontal axis. The temperature change illustrates the impact of change in CO2 concentration without any of the amplifying effects of the feedback system. The 280 ppm point marks the pre-industrial benchmark. Temperature variation is displayed on the vertical axis. (See Table 2, p36) 
Two concentration markers have been inserted. The first, at 180 ppm, represents the CO2 concentration at the coldest point of the most recent ice-ages. The second, at 440 ppm, marks the concentration beyond which (or so it is asserted) global warming would exceed the 2°C target ceiling and set off dangerous climate change. 
It is clear from the symmetry that the forcing effect of increasing CO2 concentration from 180 ppm to 280 ppm is virtually identical to the forcing effect of increasing it from 280 ppm to 440 ppm. Historically, temperature change and CO2 forcing have been closely correlated, so you would expect the 5°C change between the coldest point of the last ice-age and the pre-industrial benchmark to be mirrored by a 5°C increase in equilibrium temperature when the CO2 concentration reaches 440 ppm. However, the set of coupled computer models on which the IPCC Assessment Reports are based consistently predicts a temperature rise of only 2°C at this concentration. 
The difference is fundamental and raises the critical question of climate sensitivity, namely: 
“By how much do the feedbacks of the planetary climate system amplify the forcing effects of changes in CO2 concentration?” 
Our initial approach was to build a systems dynamics model of the complex feedback system, inclusive of all the feedback mechanisms, their drivers, interactions, time-frames and relationship to global heating, thermal inertia and eventual temperature trajectory. It proved to be a dead end. Specific quantification of each mechanism was subject to such uncertainty that the modelled outcome would itself have been subject to so much compound uncertainty as to make it unusable as a strategic basis for policymaking. We had to go back to basics and develop a new, inter-disciplinary methodology that went beyond the field of climate modelling. 
Feedback Dynamics and the Amplification of CO2 Forcing 
The first attempted answer was given under the chairmanship of Prof. Jule Charney back in 1979. James Hansen and the UK Met. Office were among the contributors. 

They based their assessment on the effects of the known fast feedbacks like increase in water vapour with rising temperature, reduction in reflection from shrinking areas of sea-ice, and changes in cloud effects. They estimated that the global feedback system amplified the effects of CO2 on its own by some 3.1 times. “Climate Sensitivity” is defined as the change in equilibrium temperature correlated with a doubling in concentration of atmospheric CO2, so their calculations yielded a sensitivity value of around 3°C. The current ensemble of computer models on which the IPCC reports are based still uses this “fast-feedback sensitivity” to predict future change in temperature. It is immediately clear that this sensitivity value is consistent with the prediction of a 2°C rise at a CO2 concentration of 440 ppm. 

As other feedback processes were identified and their effects measured or estimated, so they began to be incorporated into the climate models. One of the best examples is the inclusion of some of the feedbacks of the global carbon cycle in the work of the UK Met. Office Hadley Centre. 

That increased the amplification factor to around 4.64 and raised the “Charney” sensitivity by some 50% to 4.5°C for an equilibrium response to a doubling of the concentration of atmospheric CO2. Note that with this sensitivity value, the temperature increase correlated with the “safeguard” value of 440 ppm has risen to about 2.9°C. 
Although many other feedback mechanisms have now been identified, their quantification is extremely difficult, rendering their inclusion in coupled climate models virtually impossible. Some of the mechanisms are more local, others more global in effect. Some are weaker while others are stronger. Some are slow and long term in action while yet others are comparatively fast. Most of them are driven by change in surface temperature. They interrelate with and reinforce each other’s behaviour in complex ways. Most are amplifying (i.e. “positive”) in their effect. Just a few are damping (i.e. “negative”) feedbacks. The net effect of this complex feedback system is to amplify still further the impact of change in the CO2 concentration. 
In an attempt to move beyond the limitation of climate modelling, James Hansen and his team at NASA used paleo data to calculate the feedback effects of slow change in the reflectivity of the land-based ice-sheets as they shrank or expanded in response to shifts in global temperature. Historically, the pace of this feedback was dictated by the so-called “Milankovitch” cycles of change in the shape of the earth’s orbit, and the tilt and wobble around its axis. Today, the rate of change is driven by human carbon emissions and is some 300 times faster than at any time in the last 1,000,000 years. No-one really knows how fast the ice-sheets will respond under these conditions, but the pace is likely to be much greater than in the paleo-records. The implications are not limited to climate sensitivity in terms of temperature, but also control the amount and rate of change in sea-level. 
Taking the ice-sheet dynamics into account in addition to the carbon-cycle and vegetative feedbacks and the basic fast feedbacks, Hansen and his team concluded that the effect of CO2 on its own would be amplified by about 6.2 times, leading to an equilibrium sensitivity of 6°C in response to a doubling of the atmospheric concentration of CO2. 
It is important to note that with the Hansen Sensitivity, the expected temperature response to a CO2 concentration of 440 ppm is increased to 4°C. That is double that predicted in the current IPCC Fifth Assessment Report. 
Evaluation of Modelled Values of Climate Sensitivity 
We are now in a position to evaluate these various approaches to climate sensitivity against the actual behaviour of the planetary climate system since the coldest point of the last ice-age. 
When change is slow, the energy balance of the earth maintains a dynamic equilibrium. Any change in radiation caused by a shift in the greenhouse effect is balanced by a compensating adjustment in average surface temperature. Numerically, the blocking of some 3.8 Wm-2 requires an increase of 1°C in surface temperature. The figure is known as the “Radiative Damping Coefficient” of the planet. 
The temperature difference between the coldest point of the last ice-age and the conditions just prior to the start of the industrial revolution is around 5°C. At 3.8 Wm-2 per degree, that adjustment represents a shift in the radiative budget of around 19 Wm-2. Ignoring all feedback dynamics, the forcing due to the change in CO2 concentration on its own (from 180 ppm in the depth of the ice-age to 280 ppm at the pre-industrial benchmark) is some 63.8% of that provided by a doubling of the CO2 concentration, namely 2.36 Wm-2. 
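The arithmetic can be checked with a few lines of Python. This sketch (an illustrative addition) simply re-derives the 19 Wm-2 and 2.36 Wm-2 figures from the quoted damping coefficient and forcing per doubling; small rounding differences are to be expected.

```python
import math

# Worked check of the glacial-to-pre-industrial numbers quoted above.

LAMBDA_0 = 3.8              # W m-2 per C, Radiative Damping Coefficient
F_2X = 3.7                  # W m-2 per doubling of CO2

dT = 5.0                                        # C, last glacial maximum to pre-industrial
total_shift = LAMBDA_0 * dT                     # ~19 W m-2
fraction_of_doubling = math.log2(280 / 180)     # ~0.64 of a doubling (63.8% in the text)
co2_forcing = F_2X * fraction_of_doubling       # ~2.36 W m-2

print(f"Total radiative shift:        {total_shift:.1f} W m-2")
print(f"180->280 ppm as a doubling:   {fraction_of_doubling:.1%}")
print(f"CO2 forcing on its own:       {co2_forcing:.2f} W m-2")
```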
Since we know the amplification factor associated with each of the estimates of climate sensitivity, it is a simple matter to show how close they come to representing the actual behaviour of the planetary climate. 
For example, on its own, the forcing from additional CO2 accounts for only 12.4% of the observed change in average surface temperature of the planet, a shortfall of 16.6 Wm-2. 

Mathematical Assessment of Sensitivity Values 

Taking into account some of the fast feedbacks, the “Charney Sensitivity” raises that to 38.6%, but still leaves 11.7 Wm-2 unaccounted for. Including the effects of some of the carbon-cycle feedbacks, the “Hadley Sensitivity” drives the modelled projection to 57.7% of the earth system behaviour. Even adding in the contribution from the long-term dynamics of land-based ice-caps (“Hansen Sensitivity”) still leaves the solution at 77.2%, with a short-fall of 4.3 Wm-2. 
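The same comparison can be tabulated directly. The sketch below (an illustrative addition) applies each quoted amplification factor to the bare CO2 forcing and reports the fraction of the 19 Wm-2 shift accounted for, closely matching the percentages above.

```python
# Each amplification factor is applied to the bare CO2 forcing (2.36 W m-2) and
# compared with the 19 W m-2 needed to account for the 5 C glacial-to-pre-industrial shift.

CO2_FORCING = 2.36      # W m-2, 180 -> 280 ppm, CO2 on its own
REQUIRED = 19.0         # W m-2, implied by the observed 5 C change

amplification = {
    "CO2 alone": 1.0,
    "Charney":   3.1,
    "Hadley":    4.64,
    "Hansen":    6.2,
}

for name, factor in amplification.items():
    accounted = factor * CO2_FORCING
    print(f"{name:10s}: {accounted / REQUIRED:5.1%} of observed behaviour, "
          f"shortfall {REQUIRED - accounted:4.1f} W m-2")
```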
It is absolutely clear that the IPCC AR5 proposal that policy decisions should be based on the projections of the CMIP5 ensemble of coupled climate models that only include the fast feedbacks, is totally inadequate. Policy must take as its basis the full Earth System Sensitivity that provides a 100% match to the behaviour of the planetary climate system. That conclusion was endorsed by the august group of twelve leading climate scientists who co-authored a Review Article entitled “Climate Sensitivity in the Anthropocene”. It was published in the July 2013 edition of the Quarterly Journal of the Royal Meteorological Society, from which the following quotation is taken: 
“Based on evidence from Earth’s history, we suggest here that the relevant form of climate sensitivity in the Anthropocene (e.g. from which to base future greenhouse (GHG) stabilization targets) is the Earth System sensitivity, including fast feedbacks from changes in water vapour, natural aerosols, clouds and sea ice, slower surface albedo feedbacks from changes in continental ice sheets and vegetation, and climate-GHG feedbacks from changes in natural (land and ocean) carbon sinks. Traditionally, only fast feedbacks have been considered (with the other feedbacks either ignored or treated as forcings), which has led to estimates of the climate sensitivity for doubled CO2 concentrations of about 3°C. The 2xCO2 Earth System sensitivity is higher than this, being ~4-6°C if the ice sheet/vegetation albedo feedback is included in addition to the fast feedbacks, and higher still if climate-GHG feedbacks are also included. The inclusion of climate-GHG feedbacks due to changes in the natural carbon sinks has the advantage of directly linking anthropogenic GHG emissions with the ensuing global temperature increase, thus providing a truer indication of the climate sensitivity to human perturbations.” [Q.J.R. Meteorol. Soc. 139: 1121-1131, July 2013] 
Towards a Robust value for Earth System Sensitivity 
The fundamental question is therefore “Can we ascertain a robust value for the Earth System Sensitivity, and if so what would that value be?” 
The answer to the first part of that question is a clear “Yes we can!”, but only if we allow ourselves to move beyond the domain of climate modelling. We take as basic data the values of average surface temperature and CO2 concentration during the last 20,000 years covering the transition from the coldest point of the last glacial maximum to the conditions just prior to the start of the industrial revolution. Because the calculations are based on the observational data concerning the change in planetary climate, the Earth System Sensitivity includes, by its very definition, the effects of all feedbacks, known and unknown, as well as all their complex interrelationships. It therefore avoids the methodological inadequacy of a model-based approach as well as dramatically reducing the uncertainty surrounding its value. Two complementary approaches to the question are now introduced. The first is purely mathematical in nature. The second employs the graphical simulator as a user interface. 
Earth System Sensitivity 1: the Mathematical Derivation 
Based on the Radiative Damping Coefficient of the planet, a 1°C change in the average surface temperature represents compensation for a shift of 3.8 Wm-2 in the radiative budget. The 5°C change from the last glacial maximum to the pre-industrial benchmark therefore represents compensation for a change of 19 Wm-2. 
The contribution to this figure from CO2 on its own comes from the change in forcing generated by the increase in concentration from 180 ppm to 280 ppm. Using the log(base 2) correction for decrease in greenhouse efficiency with rising concentration, this represents 63.8% of the effect of a doubling of the CO2 concentration, namely 2.36 Wm-2. 
To find by how much the earth system amplifies the effect of CO2 on its own we take the total change in radiative budget (19 Wm-2) and divide it by the CO2 forcing (2.36 Wm-2). 
The Earth System Amplification Factor is therefore 8.0 


The value of climate sensitivity (the change at equilibrium in average surface temperature of the planet in response to a doubling in the concentration of atmospheric CO2) is derived from the amplification factor by multiplying the latter by the temperature increase required to compensate for a doubling of CO2 concentration on its own, namely 0.97°C. 
The Earth System Sensitivity is therefore 7.8°C. (See summary box below)
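A compact numerical restatement of that derivation, using the values quoted in the text (rounding aside), runs as follows; it is an illustrative sketch rather than part of the original analysis.

```python
import math

# Mathematical derivation of the Earth System Sensitivity from the quoted values.

LAMBDA_0 = 3.8               # W m-2 per C, Radiative Damping Coefficient
F_2X = 3.7                   # W m-2 per doubling of CO2
DT_2X_NO_FEEDBACK = 0.97     # C per doubling, CO2 acting on its own

total_shift = LAMBDA_0 * 5.0                 # 19 W m-2 for the 5 C LGM-to-pre-industrial change
co2_forcing = F_2X * math.log2(280 / 180)    # ~2.36 W m-2, CO2 contribution on its own

amplification = total_shift / co2_forcing            # ~8, quoted as 8.0 in the text
ess = amplification * DT_2X_NO_FEEDBACK              # ~7.8 C per doubling of CO2

print(f"Earth System Amplification Factor: {amplification:.0f}")
print(f"Earth System Sensitivity:          {ess:.1f} C per doubling of CO2")
```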
Earth System Sensitivity 2: the Graphical Simulator 
Another, more “right-brained” approach is to map the information onto the graphic simulator we have already assembled. 



By definition the slope of the line representing the earth system sensitivity must pass through the pre-industrial benchmark (280 ppm CO2 and 0°C for the temperature anomaly). It must also pass through the point representing conditions at the last glacial maximum, namely 180 ppm CO2 and -5°C. When we draw the line through those two points and project it forward over the next doubling of CO2 concentration it shows a temperature increase of 7.8°C. That is the value of the Earth System Sensitivity. If we divide that number by the temperature compensation for a doubling of CO2 on its own, we reach a value for the Amplification Factor of 8.0. 
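The same result can be reproduced as a two-point slope calculation. This illustrative sketch is not the simulator itself, just its arithmetic.

```python
import math

# On a log-base-2 concentration axis the sensitivity line must pass through the
# last glacial maximum (180 ppm, -5 C) and the pre-industrial benchmark (280 ppm, 0 C);
# its slope is the Earth System Sensitivity.

def doubling_axis(c_ppm, c_ref=280.0):
    """Position on the doubling (log base 2) axis, relative to 280 ppm."""
    return math.log2(c_ppm / c_ref)

x_lgm, t_lgm = doubling_axis(180), -5.0
x_pi, t_pi = doubling_axis(280), 0.0

ess = (t_pi - t_lgm) / (x_pi - x_lgm)          # C per doubling of CO2
print(f"Slope through the two points: {ess:.1f} C per doubling")    # ~7.8 C
print(f"Implied Amplification Factor: {ess / 0.97:.0f}")            # ~8
```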
Three further check points help to consolidate the result. 
1. Analysis of the CO2 correlation with temperature based on ocean sediment cores reaching back over some 65 million years, was conducted by Mark Pagani and his team. It yielded a value of climate sensitivity of around 8°C, which is entered on the visual at 8°C above the last glacial maximum and at a CO2 concentration double that at the time, namely the point [360, 3.0]. 
2. A regression analysis of correlation between temperature and CO2 concentration over the period covering the last four ice-ages, was conducted by Ferdinand Engelbeen. Taking the most accurate point from his work (i.e. least spread in the scatter) and applying it to a doubled value for CO2 concentration provided the point [534, 7.1]. 
3. The proxy correlation between average surface temperature and CO2 concentration during the Eocene gave a temperature anomaly some 15°C above the pre-industrial benchmark with a CO2 concentration of around 1000 ppm. Although there is some uncertainty concerning the temperature value, this point is mapped onto the simulator, which is expanded to include a further doubling of CO2 concentration to the level of 1120 ppm. 
Four technical working notes: 

1. Variation in the value of the Earth System Sensitivity. Both the mathematical derivation and the graphic simulator generalise results to give a constant value for the ESS. However, it is clear that sensitivity varies depending on the physical state of the planetary climate and the nature of the feedback dynamics in play. Recognising that the radiative budget must remain balanced, the sensitivity must average 7.8°C. If for any reason the value dips below this figure for some time, then it must also rise above it at other times to compensate. For instance ice-albedo feedbacks are active during the glacial/inter-glacial cycle but appear to be replaced by cloud feedbacks during the warmer, more humid and ice-free conditions of the Eocene. There is no evidence to support a significant reduction in sensitivity in the current conditions of the climate system. 
2. Residual uncertainty in the value of the Earth System Sensitivity. The main source of uncertainty in the gradient of the red line comes from the interpretation of data from Vostok and other ice-core analyses. Here uncertainty focusses on the change in average surface temperature between the last glacial maximum and the pre-industrial benchmark. While the best estimate remains at 5°C, there is a spread of +/- 1°C. That gives a range of value for the ESS from 6.24°C to 9.36°C with the highest probability around 7.8°C. The uncertainty in the gradient is further constrained by the probability spreads around the four further check points. The red line must pass through the pre-industrial origin and seek the best path through the probability distribution hoops at concentration values of 180 ppm, 360 ppm, 564 ppm and 1000 ppm. That set of conditions reduces still further the uncertainty in the value of the ESS. (The range is way below the uncertainty spread generated by the ensemble of coupled climate models.) For all practical and policy-making purposes the value of 7.8°C for the full Earth System Sensitivity may be treated as robust. 
3. Up-date on some key variables. In the light of the most recent literature, values of some of the key variables used in this analysis have been up-dated from those used in previous editions. (The values are based on Previdi et al. 2013, Appendix B, where further references and citations are given in full.) In particular, the value of the Radiative Damping Coefficient has been corrected from 3.3 Wm-2°C-1 to 3.8 Wm-2°C-1. In addition the forcing due to doubling the CO2 concentration on its own has been adjusted from 4 Wm-2 to 3.7 Wm-2, requiring a compensatory change in average surface temperature of 0.97°C rather than 1.2°C. These variables are inter-dependent and when reapplied to the Mathematical Derivation or to the Graphic Simulator, still yield an unchanged value of 7.8°C for the Earth System Sensitivity, although the Amplification Factor has to increase from 6.5 to 8.0. 
4. Confusion around the meaning of “fast feedbacks”. The feedbacks concerned (water vapour, albedo from the area of floating sea-ice, certain cloud effects) all respond quickly to changes in the average surface temperature of the planet. In that sense they may be designated as “fast feedbacks”, though they are not alone in deserving that description. They are fast in contributing to the forcing or energy imbalance of the planet. However, their effect on temperature change is not fast. Temperature change depends on the immense thermal inertia of the earth system. It can take centuries or even millennia to reach eventual equilibrium. It is therefore not appropriate to limit attention to the so-called “fast” feedbacks as if their effect on temperature produces change on short time-scales that fit human political policymaking. Short time-scale or “transient” temperature responses fall far short of the eventual equilibrium anomaly consistent with the anthropogenic disturbance of the climate system. Policymaking must resist the pressure to focus on short time-scale temperature responses, economically tempting though that may be. It is the eventual equilibrium response that determines climate sensitivity, demonstrates the full implications of our actions, and must serve as the ground for our strategic response. 

Correlations and Consequences of the ESS 
We are now in a position to examine some of the implications, correlations and consequences of replacing the “Charney” or fast-feedback sensitivity with the full Earth System Sensitivity. 
1. Equilibrium increase in average surface temperature correlated with a 440 ppm concentration of atmospheric CO2 now stands at 5°C not 2°C. The so-called “safe guide-line” of 440 ppm and 2°C can no longer be taken as the basis for policy-making. (A worked check of these figures is sketched after this list.) 
2. Temperature increase as a consequence of sustaining levels of CO2 concentration at the present (2014) value of 398 ppm is 4°C. With the current observed increase of 0.85°C that leaves us with some 3.15°C “in the pipeline”. That replaces the conservative prediction of only another 0.65°C based on the fast-feedback sensitivity. The increase in CO2 forcing driven by the change in concentration from the pre-industrial value of 280 ppm to today’s 398 ppm is equivalent to 80% of the climate shift between the last glacial maximum and the pre-industrial benchmark. 
3. When we include the anthropogenic forcing from other, non-CO2 greenhouse gasses, the CO2e concentration now stands at around 450 ppm, giving an implied equilibrium temperature increase of 5°C as a result of atmospheric changes already made. That is equivalent to 100% of the climate change between the last glacial maximum and the pre-industrial benchmark. It leaves us facing an expected rise in average surface temperature of 4.15°C above the present. 
4. The internationally agreed goal of limiting equilibrium temperature increase to not more than 2°C above the pre-industrial level was already broken when the atmospheric concentration of greenhouse gasses passed the 330 ppm mark (CO2e). That happened somewhere around 1965. 
5. A set of promises, commitments or “contributions” to the global task of emissions reduction has been made by some 80 countries following the “Copenhagen Accord” and its subsequent development. Based on the “Charney” fast-feedback sensitivity, those promises are being predicted (if implemented) to give rise to an increase in temperature of around 4°C by the end of the 21st century. That would be equivalent to an equilibrium increase of some 5.7°C. As soon as we replace the “Charney” value with that of the full Earth System Sensitivity it is immediately clear that the implications are of an end-of- century increase of 10°C rising, at eventual equilibrium, to more like 15°C. It should be noted that currently unrestrained emissions of atmospheric greenhouse gasses are running significantly in excess of the promised levels. 
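The figures in that list can be reproduced, to within rounding, from the simple relation dT = ESS × log2(C/280). The following sketch is an illustrative addition, not part of the original analysis.

```python
import math

# Equilibrium warming implied by the full Earth System Sensitivity for a given
# CO2 (or CO2e) concentration. Small differences from the rounded figures
# in the text are to be expected.

ESS = 7.8            # C per doubling of CO2 (Holocene-derived value from the text)
OBSERVED = 0.85      # C, warming already observed by 2014

def equilibrium_warming(c_ppm, c_ref=280.0):
    return ESS * math.log2(c_ppm / c_ref)

for label, c in [("440 ppm CO2  ('safe guide-line')", 440),
                 ("398 ppm CO2  (2014 value)", 398),
                 ("450 ppm CO2e (2014 value)", 450)]:
    dt = equilibrium_warming(c)
    print(f"{label}: {dt:.1f} C at equilibrium, {dt - OBSERVED:.1f} C still 'in the pipeline'")

# Forcing added since pre-industrial times (280 -> 398 ppm), as a fraction of the
# forcing change between the last glacial maximum and the pre-industrial benchmark:
print(f"Fraction of LGM-to-pre-industrial shift: "
      f"{math.log2(398 / 280) / math.log2(280 / 180):.0%}")     # ~80%
```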
Collapse of the Carbon Budget 
Undoubtedly the most strategically important consequence of replacing the fast-feedback sensitivity with the full Earth System Sensitivity is the collapse of the “Carbon Budget”. 
The concept of “available carbon budget” was introduced prior to the Copenhagen climate summit of 2009 (COP 15). It was used to quantify the amount of future anthropogenic emissions (measured either in gigatons of carbon or of CO2) that could be allowed before a given probability of passing the 2°C safe threshold would be exceeded. That in turn informed the terms of strategic policymaking and set the scene for the international negotiations (wrangling would be a better term!) to determine which nations had the right to a given slice of the emissions budget. 
In 2009 the atmospheric concentration of CO2 stood at around 385 ppm. Using the fast-feedback sensitivity, calculations indicated that the 2°C target would be breached as concentrations passed the 440 ppm mark. The “available carbon budget” was therefore determined by the emissions equivalent to an increase of around 55 ppm in the atmospheric loading. That translates to around 266 GtC or 974 GtCO2. The figure was rounded up slightly to one trillion tonnes of CO2 and the participating countries were invited to submit pledges of emissions reduction to ensure that the budget was not over-spent. 
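The conversion between ppm and emitted carbon implied by these figures can be made explicit. In the sketch below the factor of roughly 4.8 GtC per ppm is inferred from the text's own numbers (55 ppm of headroom equating to 266 GtC of emissions); it implicitly includes the share of emissions taken up by natural sinks, so it should be read as an illustrative assumption rather than a stated value. The same factor reproduces, to within a rounding margin, the overshoot figures quoted further on.

```python
# ppm-to-emissions conversion implied by the budget figures in the text.

GTC_PER_PPM = 266.0 / 55.0      # ~4.84 GtC of emissions per ppm of atmospheric rise (inferred)
GTCO2_PER_GTC = 44.0 / 12.0     # ~3.67, mass ratio of CO2 to carbon

def emissions_from_ppm(ppm):
    """Convert a ppm headroom (or overshoot) into GtC and GtCO2 of emissions."""
    gtc = ppm * GTC_PER_PPM
    return gtc, gtc * GTCO2_PER_GTC

for label, ppm in [("2009 headroom up to 440 ppm", 55),
                   ("2014 overshoot above ~330 ppm (2 C ceiling)", 68),
                   ("2014 overshoot above ~310 ppm (1 C ceiling)", 88)]:
    gtc, gtco2 = emissions_from_ppm(ppm)
    print(f"{label}: {ppm} ppm = {gtc:.0f} GtC = {gtco2:.0f} GtCO2")
```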


Sadly the set of promises came nowhere near the required reduction in emissions and there remains an almost unbridgeable gap between pledged action and target budget. 

By 2014 the atmospheric concentration of CO2 has risen to 398 ppm. The “safe” ceiling has been eased back to 450 ppm (essentially because the temperature response has been changed from eventual equilibrium to a “transient” or shorter term prediction). Based on the output of the ensemble of coupled climate models using the fast-feedback sensitivity, the resulting space in the “sky-fill” site of some 52 ppm can accommodate around 250 GtC (or 920 GtCO2) of future emissions while retaining a 50% chance of remaining below the 2°C target. Budget estimates vary around these figures depending on the different methodologies employed. 

However, if we move from the conservative convenience of computer space to the harsh reality of the whole earth system response, a very different picture emerges. 

Zooming in on the relevant section of the graphical simulator it is immediately obvious, from the red line of the Earth System Sensitivity, that the 2°C ceiling was passed as the CO2 concentration exceeded around 330 ppm. With today’s concentration standing at 398 ppm there is no available carbon budget. On the contrary, the global climate system is massively overdrawn. The overshoot now stands at 68 ppm. In other words we have already emitted around 328 GtC (or 1204 GtCO2) beyond the level at which the 2°C ceiling is overwhelmed. All further emissions simply add to the carbon debt rather than taking up a share of some hypothetical carbon budget. 
With the recognition that the 2°C target is set far too high to avoid dangerous climate change comes the realisation that increase in average surface temperature should be kept to no more than 1°C above the pre-industrial benchmark. The equivalent CO2 concentration stands at around 310 ppm, so the overdraft would stand at 88 ppm, i.e. 425 GtC or 1,560 GtCO2. (The required concentration increases to 320 ppm if the temperature constraint is relaxed to 1.5°C. In that case, the overdraft would be reduced to 78 ppm, which is equivalent to 377 GtC or 1,380 GtCO2.) 
The collapse of the carbon budget and the recognition of a massive carbon debt have fundamental implications for the strategic global response to the climate crisis. They completely re-draw the parameters of policy negotiations as we approach the Paris climate summit of 2015. 

In order to prepare the ground for that agenda, several further issues must be addressed. They concern: 
1. The implications of moving beyond the stable state of the Holocene to the conditions of rapid change that characterise the Anthropocene. 
2. The non-linear relationship between sensitivity and the feedback system which sets the boundary conditions of self-amplification (“runaway” behaviour) for the global climate. 
3. The inadequate treatment of climate sensitivity in the Summary for Policymakers of Workgroup 1 (the scientific basis) of the 5th Assessment Report of the IPCC, with its fundamental consequences for the subsequent presentation in Workgroups 2 and 3. 
Beyond the Stable State of the Holocene 
As we turn to address the first of those issues, we note that all the calculations, simulations and projection of climate sensitivity have been based on the comparatively stable conditions of dynamic equilibrium during the Holocene. 
Those conditions no longer apply in the Anthropocene, the period when major changes in the earth system are increasingly driven by the cumulative and collective activity of the human species. In this context the value of the Earth System Sensitivity, derived from the stable conditions of the Holocene, is driven even higher by factors which are brought into play in the conditions of rapid change and far-from-equilibrium behaviour of the Anthropocene. 
The discontinuity is evident when we examine the fourfold interdependent sequence of population growth, energy use, pollution output and surface temperature. 


· Population Growth 


Natural constraints on population-growth broke down with the discovery and use of virtually unlimited energy resources in the form of fossilised hydrocarbons. That allowed the species to move into “swarm” mode. 
· Energy Use
The energy used per head of population has also been increasing significantly during this period, so the graph of energy use is growing even more steeply than that of the population. 
· Pollution Output 
One of the initially ignored by-products of the oxidation of fossil hydrocarbons is the release of carbon-dioxide to the atmosphere from which it was biologically sequestered many millions of years ago. As energy required to retrieve unit energy for consumption has also been increasing, so the growth in output and accumulation of greenhouse gas has outstripped the energy curve which has in turn outstripped the population curve. 
· Temperature Change 
The rapid and accelerating increase in the greenhouse effect blocks outgoing infra-red radiation and causes a shift away from the historical dynamic equilibrium of the planet. The slow adjustment in average surface temperature required to re-balance the system is damped by the massive thermal inertia of the earth. As a result there is an increasing time-delay or lag in the system response between cause and effect. During this extending time-delay, the forcing from anthropogenic emissions and other activity continues to widen the gap between incoming and outgoing radiation. Some 90% of the consequent planetary heating is taken up by the oceans and only a small amount of the energy is invested in change in the average surface temperature. Temperature change is the primary driver of the amplifying feedback system, so the effects of anthropogenic forcing are compounded with the feedback dynamics to enhance the energy imbalance of the system. 

[Slide: Atmospheric Carbon-dioxide] 

In spite of the inertia, the currently observed increase of 0.85°C already shows as a dramatic discontinuity against the backdrop of temperature change during the dynamic equilibrium of the Holocene. Even if all anthropogenic disturbance of the planetary climate were to cease immediately (and the atmospheric composition were to remain in its current condition), the recovery of dynamic equilibrium would drive a further five-fold increase in average surface temperature. 
· Anthropocene Rapidation and the increase in ESS 
The present rate of change is around 100 times greater than the fastest transition detectable in the ice-core and sediment records. It is only surpassed in planetary history by the response to massive asteroidal impact. The pace of change overrides the natural adaptation response of many of the earth’s interconnected systems and sets off a set of responses which themselves drive the value of the earth system sensitivity even higher than that derived from the slowly changing dynamic equilibrium conditions that characterised the Holocene period. 
The cumulative impact of these five factors underpins the conclusion that the value of the ESS derived from the stable conditions of the Holocene should be treated as a conservative minimal value during the current rapidation of the Anthropocene. 
Now we can move on to explore the second of our four issues, namely the non-linear relationship between sensitivity and the feedback system which sets the boundary conditions of self-amplification (“runaway” behaviour) for the global climate. 
Runaway behaviour in system performance occurs when the net effect of the damping feedback mechanisms (the net negative feedback) is overwhelmed by the net effect of the amplifying feedback mechanisms (net positive feedback). That critical threshold or “tipping point” marks the onset of a period of exponential change in system performance. 

Non-linearity & the Boundary Conditions of Self-Amplification 


The graphical presentation is complex, so let me introduce it one element at a time: 
· Up the vertical axis we plot the value of the Amplification Factor (AF). 
· Along the horizontal axis are the values of the Feedback Factor (FF), i.e. the number of watts per square metre added to the greenhouse effect by the feedback system for an increase of 1°C. 
· The Radiative Damping Coefficient (λo) is entered as a vertical purple line at its value of 3.8 Wm-2°C-1. This is the amount by which the infra-red radiation to space increases for each degree rise in average surface temperature. Alternatively it represents the decrease in radiative imbalance of the system for each 1°C rise in average surface temperature. It acts as a negative (damping) feedback that constrains climate forcing and reduces it back to an equilibrium over time. Significantly for our purposes it marks the critical threshold in the value of the Feedback Factor beyond which the planetary system moves into a temporary phase of self-amplification (or runaway behaviour). 
· Now we can enter five key points on the graph: (See Table 3, p35) 
1. For CO2 on its own, the Feedback Factor is zero, but the Amplification Factor is just 1. 
2. Taking account of the fast feedbacks (“Charney” value), FF = 2.59 and AF = 3.1 
3. Introducing the Carbon cycle feedbacks (“Hadley” value), FF = 2.99 and AF = 4.64 
4. Adding the land-based ice-sheet feedbacks (“Hansen” value), FF = 3.21 and AF = 6.2 
5. For the Earth System Sensitivity (Holocene), FF = 3.32 and AF = 8.0 
It is immediately obvious that these special points lie on the more general curve relating the Amplification Factor to the Feedback Factor. The value of the Amplification Factor is given by the ratio between the value of the Radiative Damping Coefficient (λo) and the difference between the Radiative Damping Coefficient and the Feedback Factor: AF = λo / (λo - FF). This latter difference is known as the Net Damping Coefficient and is designated by λ. (See Table 4, p36) 
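Expressed in code, the curve and the five special points look like this. It is an illustrative sketch: it approximately recovers the quoted amplification factors, with exact agreement depending on the precision of the rounded feedback-factor values.

```python
# Non-linear relationship: AF = lambda_0 / (lambda_0 - FF), where lambda_0 is the
# Radiative Damping Coefficient and FF is the Feedback Factor.

LAMBDA_0 = 3.8   # W m-2 per C

def amplification(ff):
    """Amplification factor for a given feedback factor (W m-2 per C)."""
    net_damping = LAMBDA_0 - ff          # lambda = lambda_0 - FF
    if net_damping <= 0:
        return float("inf")              # at or beyond the runaway threshold
    return LAMBDA_0 / net_damping

points = [("CO2 alone", 0.0), ("Charney", 2.59), ("Hadley", 2.99),
          ("Hansen", 3.21), ("Earth System (Holocene)", 3.32)]

for name, ff in points:
    print(f"{name:24s}: FF = {ff:4.2f}  ->  AF = {amplification(ff):.1f}")
```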


The relationship is strongly non-linear. The closer the Feedback Factor gets to the Radiative Damping Coefficient, the greater is the change in the corresponding value of the Amplification Factor. 
Provided the Feedback Factor is less than the Radiative Damping Coefficient (where λ > 0, i.e. to the left of the critical threshold) there remains a positive net damping effect on the system and the temperature tends towards an eventual equilibrium. 
In the special case when the Feedback Factor just equals the Radiative Damping Coefficient (i.e. λ = 0) we hit the critical threshold and temperature goes on increasing indefinitely at a constant rate governed by the initial forcing. This marks the “tipping point” in the global system between equilibrating and self-amplifying behaviour. 
To the right of the Critical Threshold, where the Feedback Factor is greater than the Radiative Damping Coefficient (i.e. λ < 0), the system moves into self-amplification or runaway behaviour. The larger the Feedback Factor, the greater the rate of acceleration of the runaway condition. 
In actual physical systems, no runaway episode can continue to infinity, as new factors come into play that eventually bring the self-amplification to a halt. The earth’s climate system is no exception, and we can identify several naturally occurring processes that damp the behaviour in the long term: 
· Snow and ice-field albedo and phase-change feedbacks degrade with rising temperature as we move towards an ice-free world. 
· There is a finite limit to the amount of carbon stored in bio-mass and available for release to the atmosphere. 
· There is also a finite limit to the amount of methane (stored in frozen tundra or as sea-bed clathrate) available for release to the atmosphere. 
· Efficiency of the greenhouse effect of atmospheric CO2 continues to degrade with rising concentration. 

We are now in a position to explore the set of temperature trajectories that result from our analysis of the feedback dynamics of the planetary climate system. 
In the left-hand diagram the time-scale is marked out in decades. The rising set of equilibrium curves shows the temperature projections consistent with differing values of sensitivity and following stabilisation of atmospheric concentration of CO2 at twice the pre-industrial value. The tipping-point line and the set of self-amplification curves reflect the possible outcomes as the feedback factor is driven beyond the Holocene value in the current conditions of the Anthropocene. 
In the right-hand diagram the time-scale is compressed to represent centuries shading into millennia. It illustrates the damping and containment of the temporary runaway behaviours and the long slow return towards cooler surface temperatures as the greenhouse gasses are gradually removed from the atmosphere by long-term geological processes. 
We conclude this section dealing with the non-linear relationship between the feedback and amplification factors by exploring some of its implications: 
1. First we note that the current computer ensemble of coupled climate models (on which the IPCC reports are based) deals only with the consequences of the fast feedbacks with a value for the feedback factor of around 2.6. This sits in an area of the curve where quite large changes in feedback behaviour have comparatively small effect on the outcome temperature. 
2. Next we observe that the strength of the feedback factor of the Earth System Sensitivity (based on the stable conditions of the Holocene) puts the amplification factor at a point on the curve where small changes in feedback dynamics drive major shifts in the outcome temperature. 
3. Thirdly, the Anthropocene conditions of rapid change and far-from-equilibrium behaviour place the feedback factor in an area of the curve where consequential temperature change is even higher, and could be driven into a temporary period of self-amplification. 
4. Finally we conclude that the non-linear relationship between feedback and amplification factors adds the imperative of urgency to our strategic response to global heating. 
Now we are in a position to make a critical examination of the inadequate treatment of climate sensitivity in the Summary for Policymakers of Workgroup 1 (the scientific basis) of the 5th Assessment Report of the IPCC, with its fundamental consequences for the subsequent presentation in Workgroups 2 and 3. 
The Summary for Policymakers of the IPCC AR5 WG1 
The Summary for Policymakers of the IPCC AR5 WG1, was published on 27th September 2013 in Stockholm after four days of intense scrutiny by agents representing the governments of all participating countries. Every word and line of the text previously submitted by the scientific community was examined and amended until it could be endorsed unanimously by the political representatives. The most intense debate appears to have focused around Figure 10. This diagram provides the basis from which to determine the available budget of carbon emissions still permitted to the international community before exceeding a given risk of temperature increase passing the policy target of 2°C. 
This central issue is of the most fundamental importance as the international community seeks to formulate a legally binding agreement on the mitigation of climate change. 
[Note that a fuller treatment of this issue has been published here: http://www.apollo-gaia.org/AR5SPM.html] 
The graphic reproduced below is taken from the final published version of the SPM. It was based on the submitted scientific draft which had been subject to some minor editorial amendments during the political scrutiny. The graphic display is complex and is explained in the supporting text. On the grounds that “Cumulative emissions of CO2 largely determine global mean surface warming by the late 21st Century and beyond”, the cumulative total human emissions in gigatons of carbon is plotted along the horizontal axis, beginning at the start of the industrial revolution. With the same starting point, the modelled change in average global surface temperature, driven by the carbon accumulation, is plotted up the vertical axis. The equivalent value of the total emitted mass of CO2 is indicated along the top of the image. 


The four coloured lines with date markers represent the modelled change in future temperature corresponding to the accumulation of carbon emissions for the four “Representative Concentration Pathways” (RCPs) associated with differing rates of emissions to the year 2100. The thick black line portrays the modelled temperature change corresponding to total emissions over the historical period up to 2010. The thin black line demonstrates the temperature change driven by carbon accumulation at the rate of 1% per year (a “compound interest” or exponential rate of change that compensates for the exponential decay in efficiency of the greenhouse gas effect of CO2 as the wavelength at which it absorbs infra-red radiation becomes more and more saturated). The grey shading represents the uncertainty spread around the thin black line, generated by the array of climate models involved. The coloured plume does the same for the coloured lines. 
· Exposition and Analysis 
For clarity of analysis we now reproduce the submitted scientific version of the graphic, to which we have added a set of modifications. Along the top, the total emitted mass of CO2 has been replaced by the atmospheric concentration of CO2 in ppm. The estimate of the total carbon emitted by 2011 has been retained in the lower left corner. 
The gradient of the “Charney” (or fast feedback) sensitivity has been added as the thick blue line. It has also been corrected for the decay in the greenhouse effect with rising concentration (the thick purple curve). 
It is immediately clear that the near-linear relationship between total cumulative carbon emissions and projected “transient” temperature response coincides with the uncorrected outcome of the Charney sensitivity. This is only to be expected since the “Coupled Model Intercomparison Project Phase 5” (CMIP5), on which the IPCC AR5 WG1 is based, restricts its modelling of the feedback system to the fast feedbacks. 
The “transient” value of temperature projection has replaced the previous “final equilibrium” figure, though this change is not noted in the text of the SPM. Higher values of sensitivity are deemed to be non-significant on the (incorrect) assumption that they have no effect on the short term temperature response. 


The adoption of the “transient temperature response” originated as an attempt to overcome difficulties in sensitivity modelling, and to avoid the high degree of uncertainty in sensitivity value, stemming historically from the model ensemble. The understandable simplification restricts modelled behaviour to certain fast feedback dynamics, but grossly misrepresents the response of the climate system to anthropogenic disturbance. 
Based on the presentation in the Summary for Policymakers, we can now derive a value for the “Available Carbon Budget” (amount of cumulative carbon emissions still available if the policy ceiling of a 2°C temperature anomaly is not to be exceeded). A horizontal line is drawn from the 2°C point until it intersects the lines representing the linear function and its corrected curve. Dropping vertical lines from the intersection points we note that the 2°C anomaly would be exceeded when the cumulative carbon emissions passed c.903 GtC on the linear function. Taking the total cumulative emissions at 2011 (as in the text of the SPM) as 531 GtC, and adding a further 31 GtC to represent emissions over the following three years, we derive a value of total cumulative emissions in 2014 as 562 GtC. Subtracting this figure from the value at which the 2°C ceiling is breached gives a value for the available budget of carbon emissions of 341 GtC using the linear function. It is on this basis that the international community is attempting to negotiate the equitable sharing out of the available carbon budget, while recognising that the budget varies if the temperature target is changed, if the risk of passing the 2°C marker is altered, or if the forcing from other non-CO2 greenhouse gasses is included. 
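The budget arithmetic itself is straightforward; the following sketch simply restates it with the figures quoted above (the 31 GtC for 2011-2014 is the assumption made in the text).

```python
# Worked check of the 'available carbon budget' arithmetic derived from SPM 10.

CEILING_GTC = 903          # GtC, cumulative emissions at which the 2 C line is crossed (linear function)
EMITTED_2011 = 531         # GtC, cumulative emissions to 2011 (SPM text)
EMITTED_2011_TO_2014 = 31  # GtC, assumed emissions over the following three years

emitted_2014 = EMITTED_2011 + EMITTED_2011_TO_2014      # 562 GtC
available = CEILING_GTC - emitted_2014                  # 341 GtC

print(f"Cumulative emissions by 2014:   {emitted_2014} GtC")
print(f"Available budget (2 C, linear): {available} GtC")
```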
Beginning with the “Copenhagen Accord” of 2009 and continuing through the subsequent “Conferences of the Parties” to the UNFCCC, some 80 participating countries have made promises, pledges, commitments, or, since Warsaw, “contributions” towards reducing their emissions of CO2. As was pointed out in the 2013 UNEP Report, even the full outcome of action on such pledges would still be an emission of carbon that would take the cumulative total to some 2000 GtC by the year 2100. Plotting that figure onto the new metric shows it to be equivalent to a concentration of some 694 ppm of atmospheric CO2, which yields a transient increase in temperature of around 4°C. There is a massive gap between the pledged reduction in emissions and that required to keep within the available budget if the agreed policy target of a 2°C ceiling is not to be broken. 
· Critical Dependence on Climate Sensitivity 
Because it is based solely on fast feedback amplification, the transient temperature response is essentially independent of the value of climate sensitivity. In contrast, as affirmed in the main body of the Report, the function of equilibrium temperature response to cumulative carbon emissions depends critically on the value of climate sensitivity. The time-scale may be longer than typical political horizons, but it is nevertheless this response that must now be taken into account in strategic executive decision-making at all levels of our world community. It is the basis on which to calculate values of greenhouse gas concentration that lead to climate stabilisation at a temperature consistent with the commitment to avoid dangerous climate change. The omission from the Summary for Policymakers of all recognition that temperature response depends critically on climate sensitivity is an unacceptable methodology that deprives policymakers of vital information. It strikes at the very heart of our global capacity to take effective action in the face of dangerous climate change. 
The implications of including the equilibrium projections using other values for climate sensitivity are well illustrated in our next graphic. 

In the SPM, the gradient of transient temperature response to cumulative total emissions of carbon approximates to the “Charney” or fast feedback value of climate sensitivity. The gradient steepens dramatically as more comprehensive treatments of the feedback system are included. 
The steep gradients of the lines representing the Hansen and Earth System sensitivities mean that the temperature anomalies associated with a doubling of the concentration of atmospheric CO2 (let alone those representing the response to a total accumulation of emitted carbon of some 2000 GtC) are right off the top of the scale of SPM 10. 
In this next figure, the temperature axis has been compressed by a factor of 2.5 to accommodate the full range of temperature anomaly associated with the Earth System Sensitivity (ESS). 



· Replacing the Fast-feedback Sensitivity with the ESS 
The SPM 10 representation of the new metric has been compressed to fit the new scale. Both the linear and corrected non-linear versions of the ESS have been included, as has the vertical line showing the doubling of the atmospheric concentration of CO2 beyond the pre-industrial benchmark. The temperature anomaly of 3°C, predicted by the transient climate response to a doubling of the concentration of atmospheric CO2, has been complemented by the inclusion of the equivalent predicted temperature anomaly of 7.8°C based on the application of the full value of the Earth System Sensitivity. 
[Please note that while the Holocene value of the ESS is used throughout the rest of this paper, it should be treated as a conservative baseline. The actual response of the earth’s climate system is expected to be even higher in the conditions of rapid change and far-from-equilibrium behaviour of the Anthropocene.] 
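The 3°C and 7.8°C anomalies at doubling both follow from the same logarithmic temperature relation, with only the sensitivity value changed. A minimal sketch, assuming the standard form ΔT = S × log2(C/C0):

```python
import math

# Minimal sketch of the logarithmic temperature relation assumed here:
# warming = sensitivity_per_doubling * log2(C / C0).
# The sensitivity values are those quoted in the text.

def warming_at(concentration_ppm, sensitivity_per_doubling, preindustrial_ppm=280):
    return sensitivity_per_doubling * math.log2(concentration_ppm / preindustrial_ppm)

DOUBLED_PPM = 2 * 280  # twice the pre-industrial benchmark

print(warming_at(DOUBLED_PPM, 3.0))   # fast-feedback ("Charney") value -> 3.0 °C
print(warming_at(DOUBLED_PPM, 7.8))   # Earth System Sensitivity value  -> 7.8 °C
```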
· Re-evaluation of the “available carbon budget” 
As before, the 2°C marker line is extended horizontally. It crosses the red line (the non-linear corrected curve of the ESS) as the cumulative total of anthropogenic CO2 emissions passes 174 GtC. Moving further to the intersection with the green line (the uncorrected linear version of the ESS), the same threshold is crossed as the cumulative emissions reach 234 GtC. As was the case with the SPM, adopting the linear approximation allows some 60 GtC of extra carbon emissions (or about six years’ worth at current rates). 
Extending the 2°C line even further, till it crosses the linear uncorrected line of the transient temperature response (the “Charney” sensitivity) embedded in the Summary for Policymakers, we recall that the corresponding cumulative carbon emissions stood at 903 GtC, a discrepancy of 729 GtC beyond the threshold derived from the Earth System Sensitivity. 
Given the “policy target” of restraining increase in global surface temperature to below 2°C, the Summary for Policymakers supports the impression that there is still slack in the system. With current (2014) cumulative total anthropogenic carbon emissions standing at c. 562 GtC, and a ceiling target of 903 GtC, there is an apparent available carbon budget of some 341 GtC. 
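Putting the thresholds of the last three paragraphs side by side makes the contrast explicit. The figures are those quoted above; the roughly 10 GtC per year emission rate used to express the 60 GtC gap in years is an assumption of this sketch.

```python
# Side-by-side comparison of the 2 °C thresholds discussed above.
# All thresholds are figures quoted in the text; the annual emission rate is an
# assumption used only to express differences in years.

CUMULATIVE_2014_GTC = 562        # total anthropogenic carbon emissions by 2014
THRESHOLD_ESS_CORRECTED = 174    # non-linear corrected ESS curve
THRESHOLD_ESS_LINEAR = 234       # uncorrected linear ESS
THRESHOLD_SPM_LINEAR = 903       # linear transient ("Charney") function of the SPM
ANNUAL_EMISSIONS_GTC = 10        # assumed current emission rate

extra_from_linearising = THRESHOLD_ESS_LINEAR - THRESHOLD_ESS_CORRECTED          # 60 GtC
print(f"Linearising the ESS allows an extra {extra_from_linearising} GtC "
      f"(~{extra_from_linearising // ANNUAL_EMISSIONS_GTC} years of emissions)")
print(f"SPM threshold exceeds ESS threshold by {THRESHOLD_SPM_LINEAR - THRESHOLD_ESS_CORRECTED} GtC")
print(f"Apparent budget implied by the SPM: {THRESHOLD_SPM_LINEAR - CUMULATIVE_2014_GTC} GtC")
```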



However, when we apply the full Earth System Sensitivity, the equilibrium planetary response to anthropogenic emissions can be seen to have exceeded the policy target (of a maximum increase of 2°C) as the cumulative emissions passed 174 GtC. 
It is therefore clear that there is no available carbon budget. In fact, the account is already massively overdrawn, by a total of 388 GtC. In other words, there is no surplus in the account to be shared out (equitably or otherwise) across the international community. Civilisation is deeply in debt to the planetary environment, and every extra tonne of emitted carbon simply adds to that debt. Sadly, there are no bankruptcy arrangements in place between human civilisation and its planetary environment. 
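The overdraft is simply the 2014 cumulative total measured against the ESS-derived threshold, as quoted above:

```python
# Carbon "overdraft" relative to the ESS-derived 2 °C threshold, using the figures
# quoted in the text.

CUMULATIVE_2014_GTC = 562     # cumulative anthropogenic emissions by 2014
ESS_THRESHOLD_2C_GTC = 174    # ESS-corrected threshold at which 2 °C is exceeded

print(f"Overdraft: {CUMULATIVE_2014_GTC - ESS_THRESHOLD_2C_GTC} GtC")   # 388 GtC
```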
Some Concluding Comments 
One of the most profound implications of replacing the transient temperature response of the SPM with the full value of the Earth System Sensitivity is the dramatic change in predicted temperature. Where the “Charney” sensitivity indicated that a 903 GtC level of total cumulative anthropogenic emissions would lead to a 2°C rise in temperature, that same total can now be seen to give rise to an equilibrium temperature response of 5.4°C. It is starting to become clear why the “New Metric” of the SPM is so politically and economically attractive, and why the pressure not to base GHG stabilization targets on the Earth System Sensitivity is so intense. 
Another outcome of replacing the fast feedback sensitivity with the whole Earth System Sensitivity concerns the projected end-of-century temperature response to the current set of international commitments to reduction in CO2 emissions. With an expected total cumulative carbon emission of around 2000 GtC, the IPCC SPM indicates a transient temperature response of around 4°C. The ESS corrects this to around 10°C, with the extension to the full equilibrium response reaching more like 15°C. An ice-free world and a sea-level rise of around 120 metres are in prospect. 
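The ~10°C figure is simply the SPM's ~4°C end-of-century projection scaled by the roughly 2.5-fold ratio between the Earth System Sensitivity and the fast-feedback sensitivity quoted in the closing paragraph of this paper; the further extension to around 15°C at full equilibrium is quoted above rather than derived in this sketch.

```python
# Scaling the SPM's end-of-century projection by the ~2.5x amplification quoted
# in the concluding paragraph of this paper.

SPM_TRANSIENT_2100_C = 4.0   # SPM temperature response at ~2000 GtC cumulative emissions
ESS_AMPLIFICATION = 2.5      # ratio of Earth System to fast-feedback sensitivity, as quoted

print(f"ESS-corrected projection: ~{SPM_TRANSIENT_2100_C * ESS_AMPLIFICATION:.0f} °C")  # ~10 °C
```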
In the light of the above, and taking into consideration the following facts: 
1.           That the Earth System Sensitivity in current conditions of the Anthropocene will be higher than the value used in this exposition 
2.           That the contributions from other non-CO2 greenhouse gases have not been taken into account 
3.           That the 2°C target is now known to be set too high to avoid dangerous climate change 
4.           That the equilibrium temperature increase predicted as a result of the current concentration of atmospheric greenhouse gases is already over 5°C 
we note that climate stabilization at a level close to the required policy target of not more than 2°C above the pre-industrial benchmark (let alone the essential reduction in that target to no more than 1°C above the same benchmark) cannot be achieved simply by a programme of emissions reduction on its own. That is a necessary but not sufficient intervention. 
The gap between current and target concentration requires urgent and aggressive reduction in the airborne concentration of CO2, in concert with a termination of emissions from fossil hydrocarbon sources and a rejection of all other activity that increases the net radiative imbalance of the planet or that profits therefrom. 
The inadequacies embedded in the Summary for Policymakers of the IPCC AR5 WG1 clearly render it unfit for the purpose of policymaking. 
The subsequent reports of Working Groups 2 and 3 of the IPCC AR5 depend on the output from Working Group 1 for their scientific basis. As a consequence, their analyses of likely impacts, intensity, time-frame and proposed mitigation requirements are all subject to the limitations exposed above. 

Substitution of the value of the Earth System Sensitivity in place of the limited fast-feedback sensitivity of the CMIP5 model ensemble amplifies all temperature predictions by a factor of 2.5. Major revision of the Reports from Working Groups 2 and 3 will therefore be essential if strategic policymaking by the international community is to achieve Climate Change Solutions that deal with the reality of Climate Change Problems.