Climate Change & Tropospheric Temperature Trends
Part II: A Critical Examination of Skeptic Claims
The fact that the NCPPR would make an argument like this is revealing. “Weather balloons” is a reference to radiosondes. At the time these comments were published, the most commonly cited radiosonde product for MSU comparisons was the Angell 63-station network (Angell, 1988). With data from literally thousands of in situ thermometers distributed across all continents, and many sea surface records as well, the surface record has far better coverage than Angell 63 or any other radiosonde product since (IPCC, 2001; Seidel et al., 2003; 2004). The NCPPR chose to compare the MSU record (which dates from 1979) with the surface record dating back a century. No doubt this was done because the global surface network was considerably smaller in the late 19th century, and they could take advantage of that for their “comparison”. It is evident that any valid comparison of the two must consider the period of common record – the record since 1979 – during which the surface record has been far more complete. Yet again, the “comparison” has been carefully set up to give the desired results.
2) Climate Scientists prefer RSS only because it agrees with models.
To no one’s surprise, the skeptic literature draws heavily from the MSU record, and UAH products in particular, as they show the least tropospheric warming. Most references are to UAH Versions D and 5.0 (Christy et al., 2000; 2003), though even today a few skeptics still cite older versions. In recent years, confidence in UAH upper-air products has been waning, and it is now likely that a majority of mainstream climate scientists do not believe their TLT and TMT trends (Trenberth, 2004).
When RSS Version 1.0 was first made public in early 2003 it attracted immediate attention. It was the first new MSU analysis product to examine MSU data with the same level of detail and thoroughness as the pioneering UAH products. Like those products, it addressed all currently known sources of error, improving on the characterization of some of them, and it incorporated more recent data than the extant UAH product at that time (Version D – Version 5.0 was published later that year). But unlike the UAH products, it yielded satellite-era troposphere temperature trends that were noticeably higher, and roughly consistent with those of Prabhakara et al. (2000). RSS published their full analysis later that same year in Journal of Climate (Mears et al., 2003). Skeptics have every reason to be terrified of this analysis. Not only is it well characterized on every level, it yields results that are consistent with the predictions of state-of-the-art AOGCM’s. As such, they wasted no time in going after both the analysis and the models with considerable vitriol. Within days, skeptic forums worldwide were claiming that the RSS results were fatally flawed. Without exception, the criticisms offered little specific content as to exactly what was flawed – and for the obvious reason: there was nothing specific to offer. In the absence of any valid methodological criticisms, they turned to external comparisons. Other than the claim that RSS products lacked “verification” from the radiosonde record (a claim that will be examined shortly), the most common attack was that RSS had cooked their analysis to justify the surface record and AOGCM predictions. To this end, they had a specific target to aim at.
Earlier that year, a team led by Ben Santer of Lawrence Livermore National Laboratory, which included Carl Mears, Frank Wentz, and Matthias Schabel of RSS, compared four runs of the Dept. of Energy’s Parallel Climate Model (PCM) with MSU data from UAH and RSS. PCM, which is described in Washington et al. (2000), is a coupled land, ocean, atmosphere, and sea-ice model that does not use flux corrections at component interfaces. The atmospheric and land components are taken from NCAR’s Version 3 Community Climate Model (CCM3) and Land Surface Model (LSM). CCM3 is the same atmospheric model that RSS used to characterize their diurnal correction, and its reliability for diurnal behavior has already been seen (Figure 3). The ocean component is taken from the Los Alamos National Laboratory Parallel Ocean Program (POP), and the sea-ice component from a model developed at the Naval Postgraduate School. In PCM, these components are tied together with a flux coupler that interpolates between the component model grids in a manner similar to that used in NCAR’s Climate System Model (CSM). Ocean grid resolution varies from ½ deg. at the equator to 2/3 deg. near the North Atlantic. The atmospheric component (CCM3) uses 32 vertical layers from the surface to the top of the atmosphere. In various experiments PCM has been very reliable in reproducing observed global surface temperature behavior (see Figures 15 and 16), has produced stable, well characterized results for a broad range of forcings, and has done an excellent job of capturing ENSO and volcanic effects as well.
Santer’s team ran four realizations of the “ALL” PCM experiment which makes use of well-mixed greenhouse gases (including anthropogenic greenhouse gas emissions), tropospheric and stratospheric ozone, direct scattering and radiative effects of sulfate and volcanic aerosols, and solar forcing (Ammann et al., 2003; Meehl et al., 2003). All used identical forcings but differing start times. Simulated MSU temperatures were derived from global model results by applying MSU Channel 2 and 4 weighting functions to the PCM output across its 32 vertical layers, and these were then compared with UAH and RSS analysis products. The goal was to see if an anthropogenic fingerprint on global tropospheric temperature trends could be detected in either of the two MSU products. First, the model was “fingerprinted” using standard techniques (Hasselmann, 1979; Santer et al., 1995) to see if observational uncertainties had a significant impact on PCM’s consistency. Internal climate noise estimates (which are necessary for fingerprint detection experiments) were obtained from PCM and the ECHAM/OPYC model of the Max-Planck Institute for Meteorology. The anthropogenic fingerprint on climate change was taken to be the first Empirical Orthogonal Function (EOF), Φ, of the mean of the four ALL runs of PCM. Then, increasing expressions of Φ were sought in UAH and RSS analyses in an attempt to determine the length of time necessary for it to be detected at a 5 percent statistical significance level in both observational records (Santer et al., 2003).
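The simulated-MSU step described above amounts to a weighted vertical average: each model layer's temperature is weighted by the channel's weighting function and summed. The following is a minimal illustrative sketch of that idea only; the layer profile and the Gaussian-shaped weights are invented stand-ins, not PCM output or the real Channel 2 weighting function.

```python
import numpy as np

def simulated_msu_temperature(layer_temps, layer_weights):
    """Weighted vertical average of model layer temperatures.

    layer_temps   : (n_layers,) temperatures for one grid cell/time [K]
    layer_weights : (n_layers,) channel weighting function (normalized here)
    """
    w = np.asarray(layer_weights, dtype=float)
    w = w / w.sum()                      # normalize the weighting function
    return float(np.dot(w, layer_temps))

# Toy example: 32 layers with temperature falling from surface to top,
# and weights peaked in the mid-troposphere (hypothetical values).
n = 32
temps = np.linspace(288.0, 210.0, n)                       # profile [K]
weights = np.exp(-0.5 * ((np.arange(n) - 8) / 4.0) ** 2)   # mid-level peak
tb = simulated_msu_temperature(temps, weights)
```

In the real procedure this average would be taken at every grid point and time step of the model output, producing a simulated brightness-temperature field directly comparable to the satellite retrievals.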
They found that a clear MSU Channel 2 anthropogenic fingerprint was consistently present only in the RSS dataset. This is not surprising, as the RSS team found consistently warmer Channel 2 trends than UAH. What is more noteworthy is that this was true only for the mean-included comparisons. When the means were removed from both datasets, the fingerprint was clearly visible at the 5 percent level in 6 out of 8 cases for both the RSS and UAH analyses – a consequence of the fact that PCM captures the observed equator-to-pole temperature and trend gradients quite well, and these are in turn manifested in Φ. The team concluded that the main difference in the ability of the RSS and UAH products to express the fingerprint was due to the large global mean and trend differences between the two, and that these were in turn likely due to uncertainties in how each was analyzed. Santer’s team correctly concluded that,
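The core of the fingerprint method described above can be sketched in a few lines: compute the first EOF of a model anomaly field (the leading right singular vector of the time-by-space anomaly matrix), project an observed field onto it, and look for a growing trend in the projection time series. This is an illustrative toy with synthetic data, not Santer et al.'s actual code or statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_space = 240, 50            # e.g. months x grid points

# Synthetic "model" anomalies: a fixed spatial pattern whose amplitude
# grows linearly in time, plus weather noise.
pattern = rng.normal(size=n_space)
amp = np.linspace(0.0, 1.0, n_time)[:, None]
model = amp * pattern + 0.1 * rng.normal(size=(n_time, n_space))

# Fingerprint Φ: first EOF of the model anomaly matrix, obtained here
# as the leading right singular vector.
model_anom = model - model.mean(axis=0)
_, _, vt = np.linalg.svd(model_anom, full_matrices=False)
phi = vt[0]

# Project noisier "observations" onto Φ to get a detection time series,
# then fit its least-squares slope: a significant nonzero slope means
# the fingerprint is increasingly expressed in the observations.
obs = model + 0.2 * rng.normal(size=(n_time, n_space))
projection = obs @ phi
slope = np.polyfit(np.arange(n_time), projection, 1)[0]
```

Removing the global mean from each dataset before this step, as in the mean-removed comparisons discussed above, shifts the detection from the datasets' disputed global trends onto their shared spatial gradients.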
“Our findings show that claimed inconsistencies between model predictions and satellite tropospheric temperature data (and between the latter and surface data) may be an artifact of data uncertainties.”
(Santer et al., 2003)
This is, of course, exactly what we would expect. Nearly two thirds of the trend discrepancy between the UAH and RSS analyses is related to the differing methods each team used to characterize IBE and perform their merge calculations, and to a lesser extent to their differing methods of smoothing and diurnal drift correction. Since detection of the anthropogenic fingerprint in PCM, as characterized by Φ, depends on this difference, it would not be surprising if the difference between detection and non-detection is the result of data and/or data-processing uncertainties. The fact that the mean-removed analyses of both teams do capture the fingerprint demonstrates the ability of PCM and its component models to capture real tropospheric and surface effects.
Yet we would never gather any of this from the skeptic press. In May of 2003, shortly after Santer et al. (2003) was published, the Greening Earth Society 4 attacked the RSS analysis and the work of Santer’s team in one of their “Virtual Climate Alerts”, which was typical of a wide range of skeptic publications that came out shortly thereafter and have continued to appear ever since. In it, they stated that,
“As has been known for years, there is a major discrepancy between tropospheric (earth’s atmosphere at an altitude from 5,000 to 30,000 feet) temperatures as measured by satellite-based instruments and projections of those temperatures by climate models. The former find only a tiny warming trend while the models predict something four times larger.
Most scientists, upon recognizing such a discrepancy, would ask themselves what is wrong with the model. Good science, as elementary school students are taught, begins with a hypothesis. In this case, "hypothesis" is another term for "model." The model, or hypothesis, is tested against what is observable in the real world and, if the two differ, the hypothesis (or model) is altered to fit the facts, then retested. Science is not about changing facts to fit hypotheses, unless you happen to be part of a team of researchers led by Ben Santer at the Lawrence Livermore National Laboratory.
Santer, et al report in Science on what we feel compelled to call "an interesting exercise in mathematical philosophy." Specifically, they used a climate model to determine which of two competing datasets is more correct: John Christy’s satellite record or an altered, warmer version of that record which never has been published in peer-reviewed literature…
The observed trend in the lower atmosphere in the Wentz./Schabel dataset is reported to be 0.1ºC per decade greater than the UAH data, so it "more closely matches" the observations from the surface and the climate model projections.
What Santer et al chose to do is compare the temperature projections for the surface and atmosphere from the global climate model developed by the National Center for Atmospheric Research (NCAR) with the satellite temperature history devised by Wentz and Schabel and with that of Christy and Spencer. How could it be a "surprise" that the warmer of the two datasets (Wentz/Schabel) provides a better match with climate model projections?
Santer et al proclaim, "Our findings show that claimed inconsistencies between model predictions and satellite tropospheric temperature data (and between the latter and the surface) may be an artifact of data uncertainties." What they fail to mention is that the climate model they used is one that projects the slowest rate of warming for the next hundred years (all climate models are not created equal). If that model is perturbed with data capturing the observed slowing in per-capita carbon dioxide emissions, the global warming it projects between now and 2100 is very close to the low values espoused by the climate skeptics – something around 1.6°C.”
(GES, May 6, 2003)
Nearly every word of these statements is either false or misleading. “Model” is not another term for “hypothesis”. PCM and other extant AOGCM’s like it have been meticulously constructed from known climatological principles and repeatedly tested against observed trends. While none is perfect, the ability of the best of them (including PCM) to replicate observed surface temperature trends has already been demonstrated (Washington et al., 2000; IPCC, 2001; see Figures 15 and 16). Nor is it true that PCM projects the slowest rate of warming for the next hundred years. Like any other AOGCM, the warming rates projected by PCM (which was developed by the DOE, not NCAR – only the CCM3 component is from NCAR) depend on how it is forced, and any of a wide range of responses is possible. GES states that it yields a warming of “something around 1.6°C” over the next century when forced with “the observed slowing in per-capita carbon dioxide emissions”. But the slowing of CO2 emissions tells us nothing about the baseline emission rates used for this prediction, or how either relates to what is forecast for the next century under the various scenarios studied by the IPCC. Naturally, GES carefully avoided providing a proper citation for this statement, so it cannot be put into context or checked for accuracy.
Nor is it true that Santer’s team “used a climate model to determine which of two competing datasets is more correct”. They compared simulated MSU Channel 2 observations from PCM (which are representative of middle troposphere temperatures and trends – not “temperature projections for the surface and atmosphere”) with the corresponding records from UAH Version 5.0 and RSS Version 1.0 to see whether an anthropogenic fingerprint on global warming could be detected in either. They concluded that the differing ability of the two datasets to express that fingerprint was driven by their differing methods of analysis and/or data uncertainties – which, of course, is correct. An anthropogenic fingerprint, as characterized by the first empirical orthogonal function in the PCM runs Santer et al. used, can be detected in UAH Version 5.0, but only after removing global mean values from the dataset – and as was shown in Part I, these do in fact differ from RSS largely because of methodology. Yet taking the comments of GES at face value, we would conclude that Santer’s team deliberately fabricated a purely theoretical AOGCM run and then cooked up an MSU analysis to agree with it! The attempt to discredit the RSS dataset by claiming that it “never has been published in peer-reviewed literature…” was a particularly low blow. In fact, RSS Version 1.0 had already been in review for several months on the day this GES Virtual Climate Alert was published, and was submitted to the Journal of Climate for its second round of review only 3 days later (on May 9, 2003). It was published in Journal of Climate later that year (Mears et al., 2003).
In passing, one other observation needs to be made here – one that is particularly important for any paper offering a critical review of climate change “skeptics”. The same week GES published their screed, John Christy of the UAH team offered a similar criticism of the Santer et al. paper in testimony before the U.S. House of Representatives (Christy, 2003). While his remarks suffer from the same shortcomings on this point, it is instructive to compare their content and tone with those of GES and other similar front groups. In the tradition of his team’s demonstrated commitment to excellence, Christy’s comments were measured, professional, and completely lacking in the ad hominem that has become a staple for these groups. They were also human – in addition to his scientific testimony, he shared his own experiences as a missionary in Africa in relation to international global warming mitigation efforts. Not only were his remarks more thoughtful than those of other skeptics, they demonstrated that, right or wrong, his concerns are rooted in his own direct experience with people he cares about as well as in science. GES, on the other hand, is being paid to have theirs 4. This should serve as a reminder that not all climate change “skeptics” are alike – a fact that is all too easily forgotten when dealing with contentious subjects like the upper-air record 16.
At times, the attempts to elevate the status of UAH products over those of other teams borders on the absurd. In a recent editorial at Tech Central Station, astrophysicist and industry consultant Sallie Baliunas 5 stated that,
“The best analysis of air temperature over the last 25 years is based on measurements made from satellites and checked with information from weather balloons. That work, conducted by J. Christy and R. Spencer at the University of Alabama at Huntsville (UAH), shows a small global warming trend…
Three new analyses of troposphere temperatures have appeared in the publications Science and Journal of Oceanic and Atmospheric Technology. They all start with the same set of measurements made from satellites, but find different results. Because not one but a series of satellites has collected the data, corrections need to be made to the measurements from each instrument to produce a precise record of temperature going back over two decades. How to find the best result?”
(Baliunas, 2003)
Interspersed among these comments was some very general background on surface temperatures and climate models, and glowing praise for UAH analysis methods. The reference to the Journal of Atmospheric and Oceanic Technology was for UAH Version 5.0. The other two papers, from Science, are Santer et al. (2003b) and Vinnikov and Grody (2003). She makes no mention of Prabhakara et al. (2000), and mentions RSS only in passing, saying that their results were “just appearing in Journal of Climate.” In fact, they were already published (Mears et al., 2003), and details of their methods and results had been published over 9 months before (Mears et al., 2003b) and investigated in a separate study published 6 months before by Santer et al. (2003), which she makes no mention of. After this generalized, and selective, introduction, she concludes that,
“The remaining two studies consider the same satellite measurements and find results consistent with computer-based forecasts of globally-averaged human warming. But those two studies also produce contradictory results, indicating the small temperature trend from UAH is the most reliable.”
(Baliunas, 2003)
In other words, because Santer et al. (2003b) and Vinnikov and Grody (2003) reach conclusions that she considers “contradictory”, UAH Version 5.0 is thereby proven to be the most reliable MSU analysis available.
This is an outright non sequitur. Problems with one or two studies do not prove that a completely separate study is reliable! Baliunas does provide a superficial explanation of what she considers to be the problems with Santer et al. (2003b) and Vinnikov and Grody (2003), and an explanation of what she likes about UAH Version 5.0 – specifically, her belief that radiosondes validate it (I will examine this claim shortly). But her criticisms of Santer et al. and Vinnikov and Grody are for the most part weak (particularly for Santer et al.), and even if true they would not, by themselves, vindicate UAH Version 5.0 as the best study available. At the time this editorial was published, information regarding the methods and results of RSS Version 1.0 was easily available, and had been for almost a year, had she bothered to consult it. Independent studies analyzing the RSS team’s work, such as Santer et al. (2003), had been available for over 6 months. Prabhakara et al. (2000) had been available for almost 3 years. Yet she makes no attempt to address any of this, despite the fact that, by her own admission, she was aware of the RSS team’s work. Her complete neglect of these studies is careless at best, and downright negligent at worst.
Ms. Baliunas is, of course, quite right to rely on UAH products as indispensable upper-air records. They are the pioneering works in MSU trend analysis and remain on the cutting edge of upper-air remote sensing. It will be some time before they are discarded as contending tropospheric trend analyses (if they ever are). While favor is tending toward RSS products as of this writing, UAH and RSS analyses must be considered together as complementary MSU/AMSU retrievals. Both are needed to understand tropospheric and stratospheric temperature trends. The issue of contention here is Ms. Baliunas’ selectivity – climate change skeptics and their benefactors rely on UAH products alone for no other reason than that those products tell them what they want to hear, particularly when combined with other equally cherry-picked surface and upper-air records. This is selective science at its very worst.
3) UAH analyses have been independently validated by the radiosonde record.
To date, one of the most common, and strident, of skeptic claims is that the radiosonde record is fully independent of the MSU record and validates UAH products only. Numerous examples of this can be found throughout the skeptic literature. For instance, the same Greening Earth Society Virtual Climate Alert cited above (May 6, 2003) takes up the issue. After their misguided attacks on the work of Santer et al. (2003), they continue,
“All this is very interesting, of course, but what we’ve neglected to remind you (until now) is that there exists a completely independent source of observations of atmospheric temperature – weather balloons. Weather balloons are launched twice daily from sites around the world and have been for the last fifty years or so. As they ascend through the atmosphere, they transmit a host of observations back to the ground, among them temperature and atmospheric pressure. This is data that can be used to construct a dataset from the same part of the atmosphere that is measured by those orbiting satellites. The weather-balloon data is completely independent of that generated by the satellites and serves as a different measurement of the same quantity (the lower atmosphere).
Two research reports published earlier this year carefully compare the weather-balloons with the UAH satellite-derived temperature observations. Lanzante et al (2003) find there to be no difference in the temperature trend derived from the two datasets. The other (Christy et al, 2003) examines several different weather-balloon datasets and finds that the UAH satellite trend (0.06ºC per decade) is within a few hundredths of a degree Celsius of the trends derived from the four sets of weather-balloon observations, but always more positive (0.04ºC, -0.02ºC, 0.00ºC, and 0.05ºC, per decade). This makes the Wentz/Schabel trend of 0.16ºC per decade several times greater than that of any weather-balloon record.”
(GES, May 6, 2003)
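The “deg C per decade” figures traded back and forth in quotes like the one above are ordinarily obtained the same way: an ordinary least-squares fit to a monthly anomaly series, with the slope expressed per decade. As a hedged illustration of that arithmetic only – the synthetic series below is a stand-in, not any actual MSU or radiosonde record:

```python
import numpy as np

def decadal_trend(monthly_anomalies):
    """OLS slope of a monthly anomaly series, in degrees per decade."""
    y = np.asarray(monthly_anomalies, dtype=float)
    t = np.arange(y.size) / 120.0    # time axis in decades (120 months)
    slope, _intercept = np.polyfit(t, y, 1)
    return slope

# Toy series: 25 years of monthly anomalies with a built-in trend of
# 0.1 deg per decade plus an annual cycle that averages out in the fit.
months = np.arange(300)
series = 0.1 * months / 120.0 + 0.05 * np.sin(2 * np.pi * months / 12.0)
trend = decadal_trend(series)
```

Differences of a few hundredths of a degree per decade between such fits, as quoted above, are well within the structural uncertainties of the underlying datasets, which is precisely the point at issue in these comparisons.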
At about the same time this report was published, the U.S. House of Representatives was considering an amendment, introduced by Rep. Bob Menendez (D-NJ), that would have overturned S. Res. 98 (the Byrd-Hagel resolution) and opened the door to the United States supporting the United Nations Framework Convention on Climate Change and becoming a signatory to the Kyoto Protocol. In response to the so-called Menendez Amendment, Marlo Lewis, a Senior Fellow at the Competitive Enterprise Institute, and Bob Ferguson, Executive Director of the Center for Science and Public Policy (a project of the Frontiers of Freedom Institute), published a report of specific comments on the Menendez Amendment in which they advanced the radiosonde vindication argument (Ferguson and Lewis, 2003). In this report they stated that,