
Monday, November 11, 2019

Understated Threat

THOUGHTS ON WHY THE EARLY IPCC ASSESSMENTS UNDERSTATED THE CLIMATE THREAT. Eugene Linden. Nov. 11, 2019.

An op-ed involves extreme compression, so I thought I’d expand on why I think the initial IPCC reports so underestimated the threat. Make no mistake, the consensus in the summaries for policy makers in the first two assessments did underestimate the threat. The consensus was that permafrost would remain stable for the next 100 years, as would the ice sheets (there was even strong sentiment at the time that the East Antarctic sheet would gain mass). Moreover, in 1990, the concept of rapid climate change was at the periphery of mainstream scientific opinion.


All these things turned out to be wrong.
Of course, there were scientists at that time who raised alarms about the possibility of rapid climate change, collapse of the ice sheets, and nightmare scenarios of melting permafrost, but, fairly or not, the IPCC summary for policy makers was and is taken to represent the consensus of scientific thinking.

In my opinion, such documents will always take a more conservative (less dramatic) position than what scientists feel is justified. For one thing, the IPCC included policy makers, most of whom were incentivized to downplay the threats. For another, many of the national governments that were the customers for these assessments barely tolerated the exercise and gave strong signals that they didn’t want to see anything that called for dramatic action; and, this being the UN, there was a strong push to present a document that as many governments as possible would accept.

And then there is the nature of science and the state of climate science at that point. A structural lag is built into the scientific process. For instance, the 1980s were marked by the rapid development of proxies for reading past climate changes with ever more precision. By the mid-to-late ’80s, the proxies and site selection had been refined sufficiently that the GISP2 and GRIP projects could confidently extract ice cores from Greenland that they felt represented a true climate record, with resolution fine enough to reveal the rapid changes that had taken place in the past. Given the nature of data collection, interpretation, peer review and publishing, it wasn’t until 1993 that these results were published.

It took nearly another decade for this new, alarming paradigm of how rapidly global climate can change to percolate through the scientific community, and, even today, much of the public is unaware that climate can change on a dime.

As for the ice sheets, when I was on the West Antarctic Ice Sheet in 1996, there was talk about the acceleration of ice streams feeding the Thwaites and Pine Island glaciers, but the notion that there might be a significant increase in runoff from the ice sheet over the next hundred years was still very much a fringe idea.

With permafrost, the problem was a scarcity of data in the ’80s and early ’90s, and it is understandable that scientists didn’t want to venture beyond the data.

The problem for society as a whole was that the muted consensus on the scale of the threat diminished any sense of urgency about dealing with the problem.

Perhaps the best example of this was the early work of William Nordhaus. Working from the IPCC best estimates in the early 1990s, Nordhaus published one paper in which he predicted that the hit to U.S. GDP from climate change in 2100 would be about half of 1 percent. Nobody is going to jump out of their chair and demand action if the hit to the economy will be only 0.5 percent of GDP a hundred years later. Libertarians such as William Niskanen seized on this and testified before Congress that there was plenty of time to deal with global warming, if it was a threat at all.

And then there was the disinformation campaign of industry, particularly fossil fuel lobbyists, as well as pressure from unions (the UAW in particular) and the financial community. These highly motivated, deep-pocketed interests seized on scientific caution to suggest deep divisions among scientists and that the threat was overplayed. Little wonder then that the public failed to appreciate that this was a looming crisis that demanded immediate, concerted action.

Saturday, November 9, 2019

Linden: Understated Threat

How Scientists Got Climate Change So Wrong. Eugene Linden, NYT Op-Ed. Nov. 8, 2019.

Few thought it would arrive so quickly. Now we’re facing consequences once viewed as fringe scenarios.

For decades, most scientists saw climate change as a distant prospect. We now know that thinking was wrong. This summer, for instance, a heat wave in Europe penetrated the Arctic, pushing temperatures into the 80s across much of the Far North and, according to the Belgian climate scientist Xavier Fettweis, melting some 40 billion tons of Greenland’s ice sheet.

Had a scientist in the early 1990s suggested that within 25 years a single heat wave would measurably raise sea levels, at an estimated two one-hundredths of an inch, bake the Arctic and produce Sahara-like temperatures in Paris and Berlin, the prediction would have been dismissed as alarmist. But many worst-case scenarios from that time are now realities.

Science is a process of discovery. It can move slowly as the pieces of a puzzle fall together and scientists refine their investigative tools. But in the case of climate, this deliberation has been accompanied by inertia born of bureaucratic caution and politics. A recent essay in Scientific American argued that scientists “tend to underestimate the severity of threats and the rapidity with which they might unfold” and said one of the reasons was “the perceived need for consensus.” This has had severe consequences, diluting what should have been a sense of urgency and vastly understating the looming costs of adaptation and dislocation as the planet continues to warm.

In 1990, the Intergovernmental Panel on Climate Change, the United Nations group of thousands of scientists representing 195 countries, said in its first report that climate change would arrive at a stately pace, that the methane-laden Arctic permafrost was not in danger of thawing, and that the Antarctic ice sheets were stable.

Relying on the climate change panel’s assessment, economists estimated that the economic hit would be small, providing further ammunition against an aggressive approach to reducing emissions and to building resilience to climate change.

As we now know, all of those predictions turned out to be completely wrong. Which makes you wonder whether the projected risks of further warming, dire as they are, might still be understated. How bad will things get?

So far, the costs of underestimation have been enormous. New York City’s subway system did not flood in its first 108 years, but Hurricane Sandy’s 2012 storm surge caused nearly $5 billion in water damage, much of which is still not repaired. In 2017, Hurricane Harvey gave Houston and the surrounding region a $125 billion lesson about the costs of misjudging the potential for floods.

The climate change panel seems finally to have caught up with the gravity of the climate crisis. Last year, the organization detailed the extraordinary difficulty of limiting warming to 2.7 degrees Fahrenheit (1.5 degrees Celsius) over the next 80 years, and the grim consequences that will result even if that goal is met.

More likely, a separate United Nations report concluded, we are headed for warming of at least 5.4 degrees Fahrenheit. That will come with almost unimaginable damage to economies and ecosystems. Unfortunately, this dose of reality arrives more than 30 years after human-caused climate change became a mainstream issue.

The word “upended” does not do justice to the revolution in climate science wrought by the discovery of sudden climate change. The realization that the global climate can swing between warm and cold periods in a matter of decades or even less came as a profound shock to scientists who thought those shifts took hundreds if not thousands of years.

Scientists knew major volcanic eruptions or asteroid strikes could affect climate rapidly, but such occurrences were uncommon and unpredictable. Absent such rare events, changes in climate looked steady and smooth, a consequence of slow-moving geophysical factors like the earth’s orbital cycle in combination with the tilt of the planet’s axis, or shifts in the continental plates.

Then, in the 1960s, a few scientists began to focus on an unusual event that took place after the last ice age. Scattered evidence suggested that the post-ice age warming was interrupted by a sudden cooling that began around 12,000 years ago and ended abruptly 1,300 years later. The era was named the Younger Dryas for a plant that proliferated during that cold period.

At first, some scientists questioned the rapidity and global reach of the cooling. A report from the National Academy of Sciences in 1975 acknowledged the Younger Dryas but concluded that it would take centuries for the climate to change in a meaningful way. But not everyone agreed. The climate scientist Wallace Broecker at Columbia had offered a theory that changes in ocean circulation could bring about sudden climate shifts like the Younger Dryas.

And it was Dr. Broecker who, in 1975, the same year as that National Academies report playing down the Younger Dryas, published a paper, titled “Climatic Change: Are We on the Brink of a Pronounced Global Warming?” in which he predicted that emissions of carbon dioxide would raise global temperatures significantly in the 21st century. This is now seen as prophetic, but at the time, Dr. Broecker was an outlier.

Then, in the early 1990s, scientists completed more precise studies of ice cores extracted from the Greenland ice sheet. Dust and oxygen isotopes encased in the cores provided a detailed climate record going back eons. It revealed that there had been 25 rapid climate change events like the Younger Dryas in the last glacial period.

The evidence in those ice cores would prove pivotal in turning the conventional wisdom. As the science historian Spencer Weart put it: “How abrupt was the discovery of abrupt climate change? Many climate experts would put their finger on one moment: the day they read the 1993 report of the analysis of Greenland ice cores. Before that, almost nobody confidently believed that the climate could change massively within a decade or two; after the report, almost nobody felt sure that it could not.”

In 2002, the National Academies acknowledged the reality of rapid climate change in a report, “Abrupt Climate Change: Inevitable Surprises,” which described the new consensus as a “paradigm shift.” This was a reversal of its 1975 report.

“Large, abrupt climate changes have affected hemispheric to global regions repeatedly, as shown by numerous paleoclimate records,” the report said, and added that “changes of up to 16 degrees Celsius and a factor of 2 in precipitation have occurred in some places in periods as short as decades to years.”

The National Academies report added that the implications of such potential rapid changes had not yet been considered by policymakers and economists. And even today, 17 years later, a substantial portion of the American public remains unaware or unconvinced it is happening.

Were the ice sheets of Greenland and Antarctica to melt, sea levels would rise by an estimated 225 feet worldwide. Few expect that to happen anytime soon. But those ice sheets now look a lot more fragile than they did to the climate change panel in 1995, when it said that little change was expected over the next hundred years.

In the years since, data has shown that both Greenland and Antarctica have been shedding ice far more rapidly than anticipated. Ice shelves, which are floating extensions of land ice, hold back glaciers from sliding into the sea and eventually melting. In the early 2000s, ice shelves began disintegrating in several parts of Antarctica, and scientists realized that process could greatly accelerate the demise of the vastly larger ice sheets themselves. And some major glaciers are dumping ice directly into the ocean.

By 2014, a number of scientists had concluded that an irreversible collapse of the West Antarctic ice sheet had already begun, and computer modeling in 2016 indicated that its disintegration in concert with other melting could raise sea levels up to six feet by 2100, about twice the increase described as a possible worst-case scenario just three years earlier. At that pace, some of the world’s great coastal cities, including New York, London and Hong Kong, would become inundated.

Then this year, a review of 40 years of satellite images suggested that the East Antarctic ice sheet, which was thought to be relatively stable, may also be shedding vast amounts of ice.

As the seas rise, they are also warming at a pace unanticipated as recently as five years ago. This is very bad news. A warmer ocean means more powerful storms and die-offs of marine life, but it also suggests that the planet is more sensitive to increased carbon dioxide emissions than previously thought.

The melting of permafrost has also defied expectations. This is ground that has remained frozen for at least two consecutive years and covers around a quarter of the exposed land mass of the Northern Hemisphere. As recently as 1995, it was thought to be stable. But by 2005, the National Center for Atmospheric Research estimated that up to 90 percent of the Northern Hemisphere’s topmost layer of permafrost could thaw by 2100, releasing vast amounts of carbon dioxide and methane into the atmosphere.

For all of the missed predictions, changes in the weather are confirming earlier expectations that a warming globe would be accompanied by an increase in the frequency and severity of extreme weather. And there are new findings unforeseen by early studies, such as the extremely rapid intensification of storms, as on Sept. 1, when Hurricane Dorian’s sustained winds intensified from 150 to 185 miles per hour in just nine hours, and last year when Hurricane Michael grew from tropical depression to major hurricane in just two days.

If the Trump administration has its way, even the revised worst-case scenarios may turn out to be too rosy. In late August, the administration announced a plan to roll back regulations intended to limit methane emissions resulting from oil and gas exploration, despite opposition from some of the largest companies subject to those regulations. More recently, its actions approached the surreal as the Justice Department opened an antitrust investigation into those auto companies that have agreed in principle to abide by higher gas mileage standards required by California. The administration also formally revoked a waiver allowing California to set stricter limits on tailpipe emissions than the federal government.

Even if scientists end up having lowballed their latest assessments of the consequences of the greenhouse gases we continue to emit into the atmosphere, their predictions are dire enough. But the Trump administration has made its posture toward climate change abundantly clear: Bring it on!

It’s already here. And it is going to get worse. A lot worse.

Sunday, September 29, 2019

Scientists Have Been Underestimating the Pace of Climate Change

Scientists Have Been Underestimating the Pace of Climate Change. By Naomi Oreskes, Michael Oppenheimer and Dale Jamieson, Scientific American. August 19, 2019.

Recently, the U.K. Met Office announced a revision to the Hadley Centre historical analysis of sea surface temperatures (SST), suggesting that the oceans have warmed about 0.1 degree Celsius more than previously thought. The need for revision arises from the long-recognized problem that in the past sea surface temperatures were measured using a variety of error-prone methods, such as open buckets, lamb’s wool–wrapped thermometers, and canvas bags. It was not until the 1990s that oceanographers developed a network of consistent and reliable measurement buoys.

Then, to develop a consistent picture of long-term trends, techniques had to be developed to compensate for the errors in the older measurements and reconcile them with the newer ones. The Hadley Centre has led this effort, and the new data set—dubbed HadSST4—is a welcome advance in our understanding of global climate change.

But that’s where the good news ends. Because the oceans cover roughly seven tenths of the globe, this correction implies that previous estimates of overall global warming have been too low. Moreover, it was reported recently that in the one place where it was carefully measured, the underwater melting that is driving disintegration of ice sheets and glaciers is occurring far faster than predicted by theory—as much as two orders of magnitude faster—throwing current model projections of sea level rise further into doubt.

These recent updates, suggesting that climate change and its impacts are emerging faster than scientists previously thought, are consistent with a pattern that we and other colleagues have identified in assessments of climate research: underestimation of certain key climate indicators, and therefore underestimation of the threat of climate disruption. When new observations of the climate system have provided more or better data, or permitted us to reevaluate old ones, the findings for ice extent, sea level rise and ocean temperature have generally been worse than earlier prevailing views.

Consistent underestimation is a form of bias—in the literal meaning of a systematic tendency to lean in one direction or another—which raises the question: what is causing this bias in scientific analyses of the climate system?

The question is significant for two reasons. First, climate skeptics and deniers have often accused scientists of exaggerating the threat of climate change, but the evidence shows that not only have they not exaggerated, they have underestimated. This is important for the interpretation of the scientific evidence, for the defense of the integrity of climate science, and for public comprehension of the urgency of the climate issue. Second, objectivity is an essential ideal in scientific work, so if we have evidence that findings are biased in any direction—towards alarmism or complacency—this should concern us. We should seek to identify the sources of that bias and correct them if we can.

In our new book, Discerning Experts, we explored the workings of scientific assessments for policy, with particular attention to their internal dynamics, as we attempted to illuminate how the scientists working in assessments make the judgments they do. Among other things, we wanted to know how scientists respond to the pressures—sometimes subtle, sometimes overt—that arise when they know that their conclusions will be disseminated beyond the research community—in short, when they know that the world is watching. The view that scientific evidence should guide public policy presumes that the evidence is of high quality, and that scientists’ interpretations of it are broadly correct. But, until now, those assumptions have rarely been closely examined.

We found little reason to doubt the results of scientific assessments, overall. We found no evidence of fraud, malfeasance or deliberate deception or manipulation. Nor did we find any reason to doubt that scientific assessments accurately reflect the views of their expert communities. But we did find that scientists tend to underestimate the severity of threats and the rapidity with which they might unfold.

Among the factors that appear to contribute to underestimation is the perceived need for consensus, or what we label univocality: the felt need to speak in a single voice. Many scientists worry that if disagreement is publicly aired, government officials will conflate differences of opinion with ignorance and use this as justification for inaction. Others worry that even if policy makers want to act, they will find it difficult to do so if scientists fail to send an unambiguous message. Therefore, they will actively seek to find their common ground and focus on areas of agreement; in some cases, they will only put forward conclusions on which they can all agree.

How does this lead to underestimation? Consider a case in which most scientists think that the correct answer to a question is in the range 1–10, but some believe that it could be as high as 100. In such a case, everyone will agree that it is at least 1–10, but not everyone will agree that it could be as high as 100. Therefore, the area of agreement is 1–10, and this is reported as the consensus view. Wherever there is a range of possible outcomes that includes a long, high-end tail of probability, the area of overlap will necessarily lie at or near the low end. Error bars can be (and generally are) used to express the range of possible outcomes, but it may be difficult to achieve consensus on the high end of the error estimate.
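
To make the arithmetic of overlap concrete, here is a minimal sketch (hypothetical numbers, not drawn from the book) of how intersecting expert ranges discards the high-end tail:

```python
# Minimal sketch: "consensus as overlap" pulls a reported range toward
# the low end. Each expert holds a plausible range (low, high) for some
# quantity; the area of agreement is the intersection of those ranges.

def consensus_range(expert_ranges):
    """Return the interval that every expert accepts (the intersection)."""
    low = max(lo for lo, _ in expert_ranges)
    high = min(hi for _, hi in expert_ranges)
    return (low, high) if low <= high else None  # None: no common ground

# Most experts say 1-10; one thinks the true value could reach 100.
experts = [(1, 10), (1, 10), (1, 10), (1, 100)]
print(consensus_range(experts))  # -> (1, 10): the long tail vanishes
```

Note that the expert who entertains 100 contributes nothing to the reported range; however probable the tail, it simply drops out of the consensus.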

The push toward agreement may also be driven by a mental model that sees facts as matters about which all reasonable people should be able to agree versus differences of opinion or judgment that are potentially irresolvable. If the conclusions of an assessment report are not univocal, then (it may be thought that) they will be viewed as opinions rather than facts and dismissed not only by hostile critics but even by friendly forces. The drive toward consensus may therefore be an attempt to present the findings of the assessment as matters of fact rather than judgment.

The impulse toward univocality arose strongly in a debate over how to characterize the risk of disintegration of the West Antarctic Ice Sheet (WAIS) in the Fourth Assessment Report of the IPCC (AR4). Nearly all experts agreed there was such a risk as climate warmed, but some thought it was only very far in the future while others thought it might be more imminent. An additional complication was that some scientists felt that the available data were simply not sufficient to draw any defensible conclusion about the short-term risk, and so they made no estimate at all.

However, everyone concurred that, if WAIS did not disintegrate soon, it would likely disintegrate in the long run. Therefore, the area of agreement lay in the domain of the long run—the conclusion of a non-imminent risk—and so that is what was reported. The result was a minimalist conclusion, and we know now that the estimates that were offered were almost certainly too low.

This offers a significant point of contrast with academic science, where there is no particular pressure to achieve agreement by any particular deadline (except perhaps within a lab group, in order to be able to publish findings or meet a grant proposal deadline). Moreover, in academic life scientists garner attention and sometimes prestige by disagreeing with their colleagues, particularly if the latter are prominent. The reward structure of academic life leans toward criticism and dissent; the demands of assessment push toward agreement.

A second reason for underestimation involves an asymmetry in how scientists think about error and its effects on their reputations. Many scientists worry that if they overestimate a threat, they will lose credibility, whereas if they underestimate it, it will have little (if any) reputational impact. In climate science, this anxiety is reinforced by the drumbeat of climate denial, in which scientists are accused of being “alarmists” who “exaggerate the threat.” In this context, scientists may go the extra mile to disprove the stereotype by downplaying known risks and denying critics the opportunity to label them as alarmists.

Many scientists consider underestimates to be “conservative,” because they are conservative with respect to the question of when to sound an alarm or how loudly to sound it. The logic of this can be questioned, because underestimation is not conservative when viewed in terms of giving people adequate time to prepare. (Consider for example, an underestimate of an imminent hurricane, tornado, or earthquake.) In the AR4 WAIS debate, scientists underestimated the threat of rapid ice sheet disintegration because many of the scientists who participated were more comfortable with an estimate that they viewed as "conservative" than with one that was not.

The combination of these three factors—the push for univocality, the belief that conservatism is socially and politically protective, and the reluctance to make estimates at all when the available data are contradictory—can lead to “least common denominator” results—minimalist conclusions that are weak or incomplete.

Moreover, if consensus is viewed as a requirement, scientists may avoid discussing tricky issues that engender controversy (but might still be important), or exclude certain experts whose opinions are known to be “controversial” (but who may nevertheless have pertinent expertise). They may also consciously or unconsciously pull back from reporting on extreme outcomes. (Elsewhere we have labeled this tendency “erring on the side of least drama.”) In short, the push for agreement and caution may undermine other important goals, including inclusivity, accuracy and comprehension.

We are not suggesting that every example of underestimation is necessarily caused by the factors we observed in our work, nor that the demand for consensus always leads to conservatism. Without looking closely at any given case, we cannot be sure whether the effects we observed are operating or not. But we found that the pattern of underestimation that we observed in the WAIS debate also occurred in assessments of acid rain and the ozone hole.

We found that the institutional aspects of assessment, including who the authors are and how they are chosen, how the substance is divided into chapters, and guidance emphasizing consensus, also militate in favor of scientific conservatism. Thus, so far as our evidence goes, it appears that scientists working in assessments are more likely to underestimate than to overestimate threats.

In our book, we make some concrete recommendations. While scientists in assessments generally aim for consensus, we suggest that they should not view consensus as a goal of the assessment. Depending on the state of scientific knowledge, consensus may or may not emerge from an assessment, but it should not be viewed as something that needs to be achieved and certainly not as something to be enforced. Where there are substantive differences of opinion, they should be acknowledged and the reasons for them explained (to the extent that they can be explained). Scientific communities should also be open to experimenting with alternative models for making and expressing group judgments, and to learning more about how policy makers actually interpret the findings that result.

Saturday, August 10, 2019

New Models Point to More Global Warming Than We Expected

New Models Point to More Global Warming Than We Expected. Bob Henson, Weather Underground. August 6, 2019.



Above: Marine stratocumulus clouds from the Pacific Ocean stream atop Chile’s Atacama Desert. Marine stratocumulus cover vast swaths of the tropical and subtropical oceans, where they reflect large amounts of sunlight and provide an overall cooling effect on climate. New global climate models are showing the potential for more global warming than long thought, perhaps due to a reduction in low-level clouds such as marine stratocumulus. Image credit: NCAR/UCAR Image and Multimedia Gallery.


Our planet’s climate may be more sensitive to increases in greenhouse gases than we realized, according to a new generation of global climate models being used for the next major assessment from the Intergovernmental Panel on Climate Change (IPCC). The findings—which run counter to a 40-year consensus—are a troubling sign that future warming and related impacts could be even worse than expected.

One of the new models, the second version of the Community Earth System Model (CESM2) from the National Center for Atmospheric Research (NCAR), saw a 35% increase in its equilibrium climate sensitivity (ECS), the rise in global temperature one might expect as the atmosphere adjusts to an instantaneous doubling of atmospheric carbon dioxide. Instead of the model’s previous ECS of 4°C (7.2°F), the CESM2 now shows an ECS of 5.3°C (9.5°F).
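
For readers who want the definition in equation form, ECS is often approximated in simple energy-balance terms as the radiative forcing from doubled CO2 divided by the net climate feedback parameter. A back-of-the-envelope sketch using standard textbook values (these numbers are generic, not taken from the CESM2 paper):

```latex
% Energy-balance sketch of equilibrium climate sensitivity (ECS).
% F_{2x}: radiative forcing from doubled CO2 (~3.7 W m^-2);
% \lambda: net climate feedback parameter (W m^-2 per kelvin).
\[
  \mathrm{ECS} = \frac{F_{2\times}}{\lambda},
  \qquad
  F_{2\times} \approx 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
\]
% An ECS of 4 K implies \lambda ~ 3.7/4 ~ 0.9 W m^-2 K^-1; an ECS of
% 5.3 K implies \lambda ~ 0.7, i.e., a weaker net restoring feedback.
```

In other words, the jump from 4°C to 5.3°C corresponds to the modeled climate pushing back less strongly, per degree of warming, against the added forcing.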

“It is imperative that the community work in a multi-model context to understand how plausible such a high ECS is,” said NCAR’s Andrew Gettelman and coauthors in a paper published last month in Geophysical Research Letters. They added: “What scares us is not that the CESM2 ECS is wrong…but that it might be right.”

At least eight of the global-scale models used by IPCC are showing upward trends in climate sensitivity, according to climate researcher Joëlle Gergis, an IPCC lead author and a scientific advisor to Australia’s Climate Council. Gergis wrote about the disconcerting trends in an August column for the Australian website The Monthly.

Researchers are now evaluating the models to see whether the higher ECS values are model artifacts or correctly depict a more dire prognosis.

“The model runs aren’t all available yet, but when many of the most advanced models in the world are independently reproducing the same disturbing results, it’s hard not to worry,” said Gergis.

A potential upending of a four-decade consensus

The IPCC issues comprehensive climate assessments every few years, along with interim reports on special topics in between. The IPCC’s Sixth Assessment Report (AR6) will be written over the next several years and released in 2021-22, based on papers being published through the end of 2019.

Back in 1979, a landmark U.S. climate study informally called the Charney Report estimated that the planet’s equilibrium climate sensitivity was between 1.5°C and 4.5°C. Each of the IPCC’s five major assessments since 1990 has largely agreed with this conclusion, although a few individual models have gone outside the range.

Figure 1. The consensus range of equilibrium climate sensitivity (ECS) from each of the IPCC's five assessment reports released since 1990. Model assessment is still under way for the sixth report, due in 2021-22. Also shown are ECS values for each of the models contributed by the National Center for Atmospheric Research (NCAR) since the third IPCC report in 2001, as well as the value for the NCAR Community Earth System Model, version 2 (CESM2), which is being used in the next IPCC assessment. Image credit: Values drawn from archived IPCC assessments.


“It does indeed look like many of the latest models will have ECS values higher than the IPCC ‘likely range’ of 1.5-4.5°C,” said Peter Cox (University of Exeter) in an email. “It seems that the new models with high ECS have more low-level cloud that tends to burn off under climate change, producing an amplifying feedback on warming.”

Cox is lead author of a 2018 study in Nature that examined temperature variability around long-term warming. The study concluded that the odds of ECS going outside the long-accepted range of 1.5-4.5°C were very small. “It is worth noting that observational constraints from both the temperature trend and temperature variability still suggest ECS of around 3°C,” said Cox. “So climate science has a conundrum to solve here.”

Clouds in the picture

Cloud-related effects have long been one of the biggest question marks in projecting future climate change, apart from uncertainties in future greenhouse emissions that hinge on human behavior. Low clouds—especially marine stratocumulus, which cover huge swaths of tropical and subtropical ocean—are especially crucial, as they tend to cool the climate by reflecting large amounts of sunlight.

Figure 2. Instruments in NASA's CERES program analyze Earth’s total radiation budget and provide cloud property estimates that enable scientists to assess clouds’ roles in radiative fluxes from the surface to the top of the atmosphere. Image credit: NASA.


The recent concerns about low-level clouds have been reinforced by ongoing work at NASA drawing on data from the CERES program (Clouds and the Earth’s Radiant Energy System). CERES instruments measure the amount of energy entering and leaving the top of Earth’s atmosphere, and the data show that net energy in the atmosphere and oceans has climbed steadily with the increase of human-produced greenhouse gases—both during and after the so-called “hiatus” in global temperature from about 2000 to 2013, when the oceans took up extra energy.

After 2013, the eastern Pacific saw a major drop in low cloud cover, global air temperatures spiked, and “there was a huge increase in sea surface temperatures,” said CERES principal investigator Norman Loeb, who outlined the changes in a 2018 paper.

Loeb is now analyzing how well the models for the upcoming IPCC report—with the higher sensitivities in place—can reproduce cloud cover and air temperature during and after the hiatus, given sea surface temperature. He discussed initial results last month at the 27th IUGG General Assembly (International Union of Geodesy and Geophysics), held in Montreal.

According to Loeb, “some of the models do really darn well” in depicting the cloud changes of the past two decades. He cautions: “I don’t know how far you can extrapolate this. There’s a danger in saying ‘you take the current record and the models nail it, therefore they have the climate sensitivity right.’ I’m cautious about making that leap, but it’s intriguing that they are nailing that post-hiatus difference.”

Figure 3. Differences in sea surface temperatures (left) and in CERES/MODIS-observed energy reflected from low clouds at the top of the atmosphere (right) between the so-called “hiatus” period of dampened surface air temperature increase (defined here as July 2000 – June 2014) and the subsequent period of amplified air temperature increase (July 2014 – June 2017). The post-hiatus period saw a dramatic increase in surface temperature across much of the eastern Pacific, together with a marked decrease in low-level cloud cover. Image credit: Courtesy Norman Loeb.


A 2019 study in Nature Geoscience that used a fine-scale cloud dynamic model found that marine stratocumulus could be depleted in large amounts if carbon dioxide levels were to reach about four times their current values, possibly triggering up to 8°C in additional global warming. See the post from last May by Dr. Jeff Masters on this paper.

Clouds and pollutants

The new NCAR model is based on tests of nearly 300 model configurations, with a focus on how well the models simulated pre-industrial climate and how well they reproduced the main global temperature trends of the last century. These trends include warming from 1920 to 1940, a period of roughly steady global temperature with regional cooling in the mid-20th century, and a more sustained global warming since the late 20th century.

The model also took into account new estimates of aerosol emissions (soot and other particles and droplets). These estimates were designed to be employed by all of the latest IPCC model configurations. Aerosol pollution tends to cool the climate overall, both by blocking sunlight directly and by serving as nuclei for clouds that block sunlight more effectively.

The new data on aerosol emissions led to a stronger cooling effect in the NCAR model than previous versions. However, the stronger aerosol-related cooling also led to an unrealistic portrayal of 20th century climate. When the model was reconfigured in response, it produced a more accurate reproduction of 20th- and 21st-century climate, including cloud behavior—but with a higher ECS, which pointed to a more ominous portrayal of future change.

If the higher ECS in the new models turns out to be on the right track, “it's really bad news,” said Gettelman. “It means we are going to be on the warm end of projections, with larger impacts for any given emissions trajectory.”

A durable index

The ECS allows for apples-to-apples comparison between the bare-bones climate models of decades ago and the far more sophisticated versions now in place. ECS calculations begin with an instant doubling of carbon dioxide, whereas in our actual atmosphere, carbon dioxide is increasing gradually rather than all at once. The warming produced by the time CO2 reaches doubling under such a gradual rise is called transient climate sensitivity (TCS). “While TCS may be a better metric for comparison to observations and estimating near-term climate response…ECS has a long history as a convenient metric of future climate change,” said the authors in their GRL paper.
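
The “gradual” convention in such idealized experiments is typically a compound rise of 1 percent per year (an assumption here; the article does not specify the rate), which puts the doubling point about 70 years out:

```latex
% Time for CO2 to double at 1% per year compound growth:
\[
  (1.01)^{t} = 2
  \quad\Longrightarrow\quad
  t = \frac{\ln 2}{\ln 1.01} \approx 69.7\ \text{years}
\]
```

That time scale is part of why transient metrics run cooler than ECS: the deep ocean is still absorbing heat when doubling is reached, so the surface has not yet caught up with the forcing.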

The amount of carbon dioxide in the atmosphere has increased by about 45% during the rapid industrialization of the last 150 years. Since regular measurements began atop Mauna Loa, Hawaii, CO2 concentrations have increased from about 315 parts per million in 1957 to around 410 ppm today. Fossil fuel burning and other human activities generate more than 35 billion tons of airborne CO2 a year. Just over half of that is absorbed by oceans, soil, and plants, and roughly a third of the atmospheric remainder stays in the air for a century or more (some of it for thousands of years).
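
As a rough consistency check on these figures, here is a back-of-the-envelope sketch; the pre-industrial baseline of about 280 ppm and the conversion of roughly 7.8 billion tons of CO2 per ppm are standard values assumed here, not taken from the article:

```python
# Back-of-the-envelope check on the CO2 figures quoted above.
# Assumptions (not from the article): pre-industrial CO2 ~ 280 ppm;
# ~7.8 Gt of CO2 corresponds to 1 ppm of atmospheric concentration.
PREINDUSTRIAL_PPM = 280.0
CURRENT_PPM = 410.0
GT_CO2_PER_PPM = 7.8
EMISSIONS_GT_PER_YEAR = 35.0
AIRBORNE_FRACTION = 0.45  # "just over half" absorbed, so ~45% stays aloft

rise = (CURRENT_PPM - PREINDUSTRIAL_PPM) / PREINDUSTRIAL_PPM
print(f"Rise since pre-industrial: {rise:.0%}")  # ~46%, i.e. "about 45%"

growth = EMISSIONS_GT_PER_YEAR * AIRBORNE_FRACTION / GT_CO2_PER_PPM
print(f"Implied atmospheric growth: {growth:.1f} ppm/yr")  # ~2 ppm/yr
```

The implied growth of about 2 ppm per year is close to the rate observed at Mauna Loa in recent years, which suggests the article's round numbers hang together.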

Although other human-produced greenhouse gases warm the planet—methane molecules, in particular, are very powerful warming agents—CO2 is expected to account for most of the human-produced warming over the next few decades and beyond, as it remains in the atmosphere much longer than methane and is much more prevalent.