Climate Change & Tropospheric Temperature Trends
Part I: What do we know today and where is it taking us?
Current Revision Level
Rev. 1.3: Jan. 21, 2009
Acknowledgements
I would like to thank the following for taking the time out of their already busy schedules to offer badly needed comments and suggestions regarding the content of this paper. Without their contributions, it would not have been possible. Thank you!
Dian Seidel (NOAA Air Resources Laboratory, Silver Spring, MD)
Kevin Trenberth (National Center for Atmospheric Research / Climate and Global Dynamics, Boulder, CO)
Jerry Mahlman (National Center for Atmospheric Research / Climate and Global Dynamics, Boulder, CO)
Rasmus Benestad (Norwegian Meteorological Institute, Oslo, Norway; Contributing Editor for www.RealClimate.com)
Gavin Schmidt (Goddard Institute for Space Studies, New York, NY; Contributing Editor for www.RealClimate.com)
William Connolley (British Antarctic Survey, Cambridge, U.K.; Contributing Editor for www.RealClimate.com)
David Parker (U.K. Met Office, Bracknell, Berkshire, U.K.)
Introduction
There is general agreement among the world’s climate scientists that the Earth’s global average surface-air temperature is now increasing at rates that are without precedent during the last 1000 years, and that this increase is at least in part due to human activity – particularly greenhouse gas emissions and land use practices. These conclusions are based on nearly a century of temperature data from over 900 surface weather stations with close to global coverage, and a wide range of data from various proxy indicators such as tree ring cores, glacier and snow-pack change, radiosonde, rocketsonde, and satellite data, and more. These suggest that the Earth’s global average temperature has risen between 0.4 and 0.8 deg C. since the early 20th century (IPCC, 2001). Even more disconcerting is the likelihood that this global warming is being driven by processes that have very long response times so that once started, it may take generations to stop even after mitigation activities are implemented around the world. Though the evidence for this warming grows stronger every day, there is still a great deal of uncertainty regarding how it will play itself out. Most climate scientists believe that by the end of the 21st century the consequences will be severe, but there is wide disagreement about the level of severity and what the actual impacts will be. There is also disagreement about the extent to which human activity is contributing to this increase. Some have argued that the observed warming is entirely natural and that we cannot do anything to mitigate it. At the more extreme end, some have even argued that the warming is beneficial. If indeed we are contributing to global warming, it is of the utmost importance that the remaining uncertainties about our fingerprint on the earth’s climate be answered soon lest we delay too long before implementing needed changes.
One of the more important open questions involves the relationship between temperatures at the Earth’s surface where we all live, and those of the troposphere and stratosphere, and how the two influence each other. Since anthropogenic (of human origin) greenhouse gases are thought to be a major contributor to this warming, and these gases are well mixed in the atmosphere, climate scientists believe that the lower and middle troposphere should warm at least as much as the surface. Even so, detecting this warming has been problematic. Many recent observations have only revealed about half as much warming as expected, and the difference is likely to be statistically significant (NRC, 2000). Climate scientists point to the many gaps in our understanding of how the surface and troposphere interact with each other as well as how they are forced by the many factors driving climate change. They also point to the many gaps and uncertainties in our data regarding the historic evolution of troposphere and stratosphere temperatures. But others who are more confident of what is already known claim that this discrepancy is a show-stopper for global warming, and proof that global warming mitigation policies are unneeded and wasteful. This perceived discrepancy between surface and troposphere temperature trends is one of the last and most significant roadblocks to a general recognition of the reality of global warming. It must be explained, one way or another, before a clear picture of the nature and extent of anthropogenic climate change can be achieved.
The Dilemma
In situ temperature records from worldwide surface weather monitoring sites indicate that, globally averaged, surface air and sea temperatures have risen by 0.30 to 0.60 deg. C between the late 19th century and 1994, and have risen by at least another 0.10 deg. C since then (IPCC, 2001). Figure 1 shows annual anomalies of combined surface-air and sea surface temperatures (in deg. C) from 1861 to 2000 relative to 1961 to 1990 values for the northern hemisphere (Fig. 1a), the southern hemisphere (Fig. 1b), and the globe (Fig. 1c) as reported by the IPCC (2001). Annual averages are shown as red bars with 2σ confidence intervals (twice the standard error of measurement) shown as demarcated black bars. The data are from in situ land and sea based temperature records that have been gathered and analyzed by the U.K. Met Office (UKMO) and the Climatic Research Unit (CRU) (Jones et al., 2001). The underlying trend is shown after averaging with a standard weighting method (dashed lines - IPCC, 1996) and after optimum averaging using variance-covariance matrices instead of correlation functions (Shen et al., 1998; Folland et al., 2001). Urban heat island effects (the tendency of temperatures to be artificially higher near urban centers, apart from large scale climatic trends) have been accounted for in these analyses. Using a wide variety of proxy indicators of land and sea surface temperatures, including ice cores, tree ring cores, varved lake sediments, historical records, and more, this analysis can be extended back nearly a millennium. Fig. 2 (IPCC 2001 fig. 2.20) shows the historical northern hemisphere land and sea surface temperature record from 1000 A.D. to 1998 A.D. as determined by Mann et al. (1999). Data taken directly from in situ instruments as in the previous figure are shown in red. The blue and black curves show, respectively, a 1000 to 1980 A.D. 
reconstruction from this data and a 40 year smoothed representation of the underlying trend (IPCC, 2001), and the dashed purple curve shows the 1000-1900 A.D. linear trend. The shaded gray region gives 2σ confidence intervals. Not surprisingly, the older proxy data has considerably more uncertainty than the more recent datasets. But even so, it can be clearly seen that the last century, and particularly the last few decades, shows highly unusual warming trends compared to the long-term historical record. Mann et al. (1999) concluded that as of 1999, the 1990’s was the warmest decade since 1000 A.D. and 1998 was the warmest year. Similar conclusions were reached using independent methods and analyses by Jones et al. (1998) and Crowley and Lowery (2000). Natural climatic variation due to solar variability, El Niños and other interdecadal oscillations, and catastrophic events such as volcanic eruptions contributes to these trends. But increasingly, the evidence suggests that these trends are largely of anthropogenic origin, and the anthropogenic contributions are likely to increase significantly over the next century unless active mitigation measures are taken (IPCC, 2001).
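The anomaly convention used in Figures 1 and 2 – each year expressed as a departure from a 1961 to 1990 baseline mean, with an uncertainty of twice the standard error of that mean – can be sketched in a few lines of code. All temperature values below are invented for illustration; they are not the CRU data.

```python
# Illustrative only: convert annual mean temperatures (deg. C) into
# anomalies relative to a 1961-1990 baseline, with a 2-sigma
# standard-error bar for the baseline mean. Values are fabricated.
def anomalies(years, temps, base_start=1961, base_end=1990):
    base = [t for y, t in zip(years, temps) if base_start <= y <= base_end]
    mean = sum(base) / len(base)
    # sample variance of the baseline years, then 2x the standard error
    var = sum((t - mean) ** 2 for t in base) / (len(base) - 1)
    se2 = 2.0 * (var / len(base)) ** 0.5
    return [t - mean for t in temps], se2

years = list(range(1961, 1992))
temps = [14.0 + 0.01 * (y - 1961) for y in years]  # fabricated warming
anoms, err = anomalies(years, temps)
```

The choice of a fixed reference period matters only as an offset; trends computed from anomalies are identical to trends computed from the raw series.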
For the last 15 to 20 years, independent analyses of global climatic temperature trends have been made using mathematical simulations of global atmospheric climate in the hope of independently verifying the in situ and proxy temperature records, and to forecast the trends that can be expected over the next century based on current and projected human industrial and land use activities. These models range from simple models that are intended to characterize one or two particular phenomena (e.g. carbon sequestration by oceans or tropical rainforests, mass and energy transport by oceanic thermohaline cycles, or greenhouse gas emissions) to more complex three dimensional models that are intended to simulate larger regions of global climate using inputs from in situ data and the results of simpler models. The most complex of these are coupled suites of three dimensional oceanic and atmospheric general circulation models (AOGCM’s) that make use of various modeled inputs (IPCC, 2001, Chap. 8). In these simulations, independent models of the world’s oceans and the atmosphere are joined together and supplemented with modeled inputs such as sea-ice and fresh water fluctuations, greenhouse gases and particulates, volcanic eruptions, and more. Typically, these models are initialized at some previous point in history and integrated over time (or “spun up”) until they produce a reasonable simulation of the existing global climate prior to the industrial age. Some of these models require non-physical adjustments in mass transport, energy, or other factors before they will produce stable representations of global climate. These “flux adjustments” can be viewed as corrections for various uncertainties in modeled characteristics of the global climate, and are not representative of actual climate behavior when forced. Increasingly, current state of the art AOGCM’s do not require them to produce realistic climate simulations. 
With or without these adjustments, once an AOGCM is spun up to a reasonable and stable global climate, it is then forced with simulated natural and human induced atmospheric forcings to evaluate how the global oceanic climate will respond. AOGCM’s allow for a wide range of input scenarios to be evaluated, and this allows us to test the outcomes of many possible mitigation scenarios for how the next century will play out in regards to global warming, and how effective our various proposed solutions to it are likely to be.
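The spin-up-then-force workflow described above can be illustrated with a deliberately simple zero-dimensional energy balance model. This is emphatically not an AOGCM – it has no ocean, no circulation, and no feedbacks – and every constant below is an illustrative assumption, but the two-phase structure (integrate to a stable climate, then apply a forcing and integrate to a new equilibrium) is the same:

```python
# A zero-dimensional energy-balance toy (NOT an AOGCM) illustrating the
# "spin up, then force" workflow. All constants are illustrative.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30
EMISSIVITY = 0.612  # tuned so equilibrium falls near 288 deg. K
HEAT_CAP = 4.0e8    # assumed effective heat capacity, J m^-2 K^-1
DT = 86400.0 * 30   # one-month time step, s

def step(T, forcing=0.0):
    absorbed = S0 / 4.0 * (1.0 - ALBEDO) + forcing
    emitted = EMISSIVITY * SIGMA * T ** 4
    return T + DT * (absorbed - emitted) / HEAT_CAP

# "Spin up": integrate until the model settles into a stable climate.
T = 250.0
for _ in range(12 * 500):          # 500 model years
    T = step(T)
control = T

# Then force it (e.g. ~3.7 W m^-2, a value often quoted for doubled CO2)
# and integrate to a new equilibrium.
for _ in range(12 * 500):
    T = step(T, forcing=3.7)
warmed = T
```

With no feedbacks the warming here is only about 1 deg. K; the much larger responses in real AOGCM's come from water vapor, ice-albedo, and cloud feedbacks that this toy omits.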
Reliable AOGCM’s are crucial to reasonable forecasts of how our activities can be expected to impact future global climate. Our confidence in their ability to produce these forecasts requires at least that they be able to accurately simulate the existing climate change of the last century. Results have been mixed. Current state of the art models reproduce certain global features, such as surface air temperature and large scale ocean/atmosphere heat transport, quite well. Other features, like precipitation and cloud coverage, are not well simulated (though there has been increasing success in modeling some regional monsoons). In general though, the best of these models produce global average climates that are reasonable representations of observation on many if not all important points. One of the more important areas of concern is how well these models represent the vertical structure of the lower atmosphere, particularly the troposphere (NRC, 2000). Generally, AOGCM’s predict that surface-air temperatures and lower to middle troposphere temperatures should evolve at similar rates. For at least the last 25 years, this has not been observed. Though surface-air temperatures have warmed considerably during this period (at least 0.16 to 0.20 deg. K/decade), similar tropospheric trends have not been observed. Some of this is known to be due to short-term climatic fluctuations like El Niños and volcanic eruptions (most notably El Chichón in 1982, and Mt. Pinatubo in 1991), and recent AOGCM’s have enjoyed some success in correcting for these. But even after these corrections, a statistically significant difference remains that is likely to be real. This discrepancy brings into question how well current AOGCM’s model the vertical structure of the atmosphere. 
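The decadal trends being compared here (e.g. 0.16 to 0.20 deg. K/decade at the surface) are ordinary least squares slopes fit to annual anomaly series. A minimal sketch, using fabricated series rather than the actual surface or MSU records:

```python
# Decadal temperature trend from annual anomalies by ordinary least
# squares - the usual first step in comparing surface and tropospheric
# records. The anomaly series below are fabricated for illustration.
def trend_per_decade(years, anoms):
    n = len(years)
    ym = sum(years) / n
    am = sum(anoms) / n
    sxy = sum((y - ym) * (a - am) for y, a in zip(years, anoms))
    sxx = sum((y - ym) ** 2 for y in years)
    return 10.0 * sxy / sxx   # deg. per decade

years = list(range(1979, 2005))
surface = [0.018 * (y - 1979) for y in years]  # fabricated: 0.18/decade
tropo = [0.009 * (y - 1979) for y in years]    # fabricated: 0.09/decade
gap = trend_per_decade(years, surface) - trend_per_decade(years, tropo)
```

In practice, judging whether such a gap is statistically significant also requires accounting for the strong autocorrelation of climate series, which this sketch omits.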
It is generally agreed that to improve the current generation of AOGCM’s and the reliability of their forecasts, we must gain a better understanding of vertical and horizontal latent heat transport and how the atmosphere is forced (IPCC, 2001; NRC, 2000). But many others have pointed to this discrepancy as proof that AOGCM’s cannot be used to detect an anthropogenic “fingerprint” in recent global warming, and in some cases, even that the atmosphere has not warmed at all over the last century. Discussions of these discrepancies have generated much controversy over the last decade, both in the scientific community and among policy makers. The tropospheric temperature record of the last 25 years is commonly cited as proof that anthropogenic global warming is not happening, and that the United States should not adopt the Kyoto Protocol for greenhouse gas reduction targets (Douglass et al., 2004; 2004b; Ferguson and Lewis, 2003).
Though surface temperature trends are based on a wide range of in situ and proxy historical data, lower and middle troposphere temperatures have come primarily from two sources – radiosonde data from selected global locations and radiative brightness temperature data from Microwave Sounding Units (MSU’s) flying aboard NOAA/NASA Polar Orbiting Environmental Satellites (POES’s). The radiosonde record dates from the middle of the 20th century, and the MSU satellite record goes back to 1979 1. To settle the current controversy, these records must be examined, including how they were generated, what their strengths and limitations are, and how they do or do not relate to the surface record and the predictions of AOGCM’s. Of the two records, the MSU record is generally thought to provide the most hope for resolving the discrepancies. Though less fine-grained than the radiosonde data, it is the only extant record that provides true global coverage of the lower atmosphere, and its error characteristics are better understood. Each, however, has strengths and weaknesses that the other lacks, and the two are used to independently evaluate each other. In what follows, we will examine the MSU satellite and radiosonde records in depth, paying special attention to these strengths and weaknesses, and how they relate to each other. Then we will examine how the various datasets have been used and misused in discussions of global warming in both scientific and public policy forums.
Atmospheric Temperature Monitoring with MSU and AMSU Products
In April of 1960, NASA and NOAA (then known as the Environmental Science Services Administration, or ESSA) began operating Polar Orbiting Environmental Satellites (POES) under a joint program for a variety of weather forecasting and short-term climate science studies. These satellites revolutionized weather forecasting by providing the first space-based global weather and storm observations in real-time. The first in this series was the Television Infrared Observation Satellite No. 1 (TIROS-1), which carried low and high resolution television cameras and operated for 78 days. Successive generations of TIROS class satellites expanded NOAA weather forecasting capabilities throughout the 60’s and 70’s. TIROS-N, which was put into service in early 1979, was the research prototype for a whole new class of TIROS satellite that carried a much wider array of sensor packages with capabilities considerably enhanced over those of its predecessors. It provided real-time visible imagery of cloud and weather patterns, and infrared data on atmospheric and oceanic temperatures, humidity, ozone levels, snow and sea ice cover, and a variety of other climatic parameters. After TIROS-N, the following satellites were assigned letter designations during development, and then numeric designations when put into service. For instance, TIROS-N was followed by NOAA-A, which became NOAA-6 on its launch date of June 27, 1979. Between 1978 and 2001 NOAA operated 9 generations of these spacecraft, culminating in NOAA-J (14), which was placed into service in December of 1994 and operated as the POES afternoon observation satellite until March of 2001. Beginning with the launch of NOAA-K (15), NOAA began operating a newer and more advanced version of the TIROS-N Series platform - the Advanced TIROS-N (TIROS-ATN) - which carries expanded and updated sensor packages compared to TIROS-N. The latest of these spacecraft, NOAA-17 (shown in Figure 3), was launched on June 24, 2002. 
As of this writing, NOAA-16 and NOAA-17 are the currently designated operational spacecraft in the POES program.
TIROS-N Satellites and MSU Products
Figure 4 shows the TIROS-N flight path with respect to the earth’s surface. TIROS-N and its successors followed sun-synchronous polar orbits. An orbital plane precession rate of 0.986 deg. per day kept the orientation of each spacecraft’s orbital plane fixed relative to the sun-earth line, so that a constant relative daylight illumination was maintained throughout the year as the earth orbits the sun. They relied on the earth turning under their orbital plane to achieve fully global views. At 14.1 orbits per day, successive equatorial crossings were separated by roughly 25 degrees of longitude, with repeated Local Equatorial Crossing Times (LECT’s) every 8 days. NOAA operated these satellites in a two-satellite pattern at all times so that data was gathered four times daily at any particular location. Full global coverage was achieved by each operational pair of satellites every 3 to 4 days. On-board sensor packages included the Advanced Very High Resolution Radiometer (AVHRR) and the TIROS Operational Vertical Sounder (TOVS), which was composed of three different sensors that passively measured incoming infrared and/or microwave radiation. Of the three TOVS components, the one that is most important for tropospheric temperature studies is the Microwave Sounding Unit (MSU), which measures upwelling microwave radiation from the surface to the lower stratosphere.
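These orbital figures are easily checked against one another. A precession rate of 0.986 deg. per day amounts to roughly one full rotation of the orbital plane per year (the sun-synchronous condition), and 14.1 orbits per day implies equatorial crossings roughly 25 degrees of longitude apart:

```python
# Checking the orbital geometry figures quoted in the text. Inputs are
# the article's numbers; outputs are approximate.
ORBITS_PER_DAY = 14.1
PRECESSION_DEG_PER_DAY = 0.986

period_min = 24.0 * 60.0 / ORBITS_PER_DAY        # ~102 minutes per orbit
crossing_sep_deg = 360.0 / ORBITS_PER_DAY        # ~25.5 deg of longitude
precession_per_year_deg = PRECESSION_DEG_PER_DAY * 365.25  # ~360 deg/year
```

The precession result is the point: one rotation of the orbital plane per year exactly cancels the earth's motion around the sun, which is what holds the local crossing time constant.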
Figure 5 shows how the MSU’s monitored the earth/atmosphere system from the POES orbital track. The MSU sensors, which are manufactured by the Jet Propulsion Laboratory (JPL) in Pasadena, CA, are four-channel Dicke radiometers that consist of two reflector antennas with two channels each, operating at 50.3, 53.74, 54.96, and 57.95 GHz. Microwave energy received passively by each antenna is separated into vertical and horizontal polarization components by an orthomode transducer and sent to one of these 4 MSU channels. Nominal beam width is 7.5 deg. at full width half maximum power (FWHM). Every 25.6 seconds these antennas scan a 47.4 deg. (to beam centers) off-track portion of the sky to either side of the directly downward looking, or nadir, direction. Data was sampled at 11 positions separated by 9.47 degrees and recorded as digital “counts”. Each count corresponds to a fixed amount of microwave energy received by the radiometer beam that is then assigned to one viewing pixel, so that count number gives the radiation density per pixel. Count number, and therefore radiation density, is proportional to the temperature of the emitting body (in this case, the atmosphere, and to a lesser degree, the earth’s surface). Positions 1 and 11 are the extreme scan angles at 47.4 deg. left and right (labeled “Scan Point” in Figure 5), and position 6 is the nadir view, which looks directly at the satellite subpoint along its orbital flight path. At each position during the scan, a Dicke switch connected to the incoming signal switched between the signal and a microwave load at instrument temperature. Once per Earth scan the MSU made a calibration measurement by checking the temperature of deep space (2.73 deg. K) against its own onboard “hot calibration targets”, of which there are two – one for Channels 1 and 2, and another for Channels 3 and 4. 
These hot calibration targets are quasi-blackbody emitters operating at known temperatures that were monitored by two platinum resistance thermistors according to pre-launch thermal vacuum chamber calibrations by JPL. The MSU radiometers differenced microwave emissions from these hot calibration targets and deep space (at 2.73 deg. K) to generate a calibration scale against which to compare temperature readings. The scan period of 25.6 seconds, when combined with the satellite’s orbital speed, yielded a scan to scan separation of roughly 150 km, and nadir and extreme angle spatial resolutions of 110 km and 200 km respectively.
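The calibration scheme just described is, at heart, a two-point linear interpolation: the cold (deep space) and hot (onboard target) reference views anchor a line mapping raw counts to brightness temperature. A simplified sketch – actual MSU processing also applies a small nonlinearity correction, and the count values and target temperature below are invented:

```python
# Two-point radiometer calibration: map raw counts to brightness
# temperature by linear interpolation between the cold (deep space) and
# hot (onboard target) reference views. Count values are invented.
T_COLD = 2.73   # deep-space brightness temperature, deg. K

def calibrate(counts, counts_cold, counts_hot, t_hot, t_cold=T_COLD):
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)  # deg. K per count
    return t_cold + gain * (counts - counts_cold)

# Example: cold view reads 300 counts, a 285 deg. K hot target reads 2700
t = calibrate(1500, 300, 2700, 285.0)
```

This is why drift in the hot target temperature matters so much for climate work: any error in t_hot tilts the whole calibration line, biasing every scene temperature derived from it.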
Advanced TIROS Satellites and AMSU Products
Beginning with NOAA-15 (launched on May 18, 1998), the TIROS-N spacecraft were replaced by the Advanced TIROS-N (TIROS-ATN) spacecraft, which are the current platforms for the POES program. Like their predecessors, they follow sun-synchronous polar orbits in a two-satellite profile with orbital plane precession rates that guarantee a constant sun-earth illumination profile with respect to the spacecraft throughout the year. Onboard sensor packages have been expanded and upgraded compared to their TIROS-N counterparts. The most important change for tropospheric temperature measurements was the upgrade of the TOVS package to include the new Advanced Microwave Sounding Unit (AMSU) products from JPL. The AMSU is now separated into three components – AMSU-A1, AMSU-A2, and AMSU-B – that collectively monitor 20 channels at frequencies ranging from 23.8 GHz to 183.3 GHz. With the expanded frequency range, the AMSU package now provides detailed information about temperature profiles from the surface up to an altitude of 3 mb (45 km) 2 and more sensitive measurements of atmospheric humidity and cloud pattern profiles. The channels of greatest interest for tropospheric temperature profiles are AMSU-A1 Channels 3, 5, 7, and 9 (respectively, 50.3, 53.6, 54.94, and 57.29 GHz), which correspond closely with the 4 TIROS-N Series MSU channels. Figure 6 shows a schematic of the TIROS-ATN platform and the new sensor packages it carries.
Like the TIROS-N Series MSU package, AMSU-A1 is a cross track scanning radiometer with an instantaneous field of view of 3.3 deg. at full width half maximum power. Functional performance is similar to that described above for MSU devices, but with increased sensitivity. Cross track views per scan have been increased from 11 to 31, with view separations reduced to 3.33 deg. from 9.47 deg. and one full scan spanning a 48.3 deg. view to either side of nadir. The time for a full scan, including deep space and onboard hot target calibration views, has been reduced from 25.6 seconds to 8 seconds. At TIROS-ATN orbital speed, this reduces the scan to scan separation from 150 km to 47 km, and nadir and extreme angle spatial resolutions from 110 km and 200 km to 48 km and 87 km respectively, giving much higher image resolution. Temperature sensitivity (NEΔK) has dropped to 0.25 deg. K from 0.30 deg. K on Channels 3, 5, and 7, where tropospheric temperatures are most closely monitored – a 17 percent improvement. As with TIROS-N Series MSU products, hot target calibration temperatures are determined by monitoring an onboard quasi-blackbody radiator at microwave frequencies with platinum resistance thermistors, and calibration was done in thermal vacuum chamber tests by JPL prior to launch. The existing configuration yields a hot target temperature of 200 deg. K with an accuracy of better than 0.2 deg. K.
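The quoted nadir resolutions follow directly from beam width and orbital altitude: the footprint diameter is roughly twice the altitude times the tangent of half the beam width. The ~850 km altitude used below is a nominal assumption for the POES orbits (the text does not state it):

```python
import math

# Nadir footprint (spatial resolution) from beam width and altitude.
# An altitude of ~850 km is an assumed nominal value for POES orbits.
ALT_KM = 850.0

def nadir_footprint_km(fwhm_deg, alt_km=ALT_KM):
    return 2.0 * alt_km * math.tan(math.radians(fwhm_deg) / 2.0)

msu = nadir_footprint_km(7.5)   # MSU, 7.5 deg beam: roughly 110 km
amsu = nadir_footprint_km(3.3)  # AMSU-A, 3.3 deg beam: roughly 49 km
```

Off-nadir footprints are larger because the slant range grows and the beam intersects the surface obliquely, which is why the extreme-angle resolutions (200 km and 87 km) are so much coarser than the nadir values.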
MSU and AMSU Datasets
The four MSU channels and their AMSU counterparts collectively monitor temperatures of the troposphere and lower stratosphere, with TIROS-N based MSU Channels 1 through 4 corresponding respectively to AMSU Channels 3, 5, 7, and 9 on the TIROS-ATN platforms. Each of these channels receives its input from broad layers of the atmosphere weighted according to sensitivities that vary as a function of altitude, with peak signal occurring at a different altitude for each channel (weightings as a function of altitude for each of the 4 channels are shown in Figure 7). What is measured is the total radiative brightness of that layer, with most of the “weight” coming from the altitude of peak emission. This emissive profile is used to derive an average, or bulk, brightness temperature of the layer in question. The weighting functions of the 4 channels overlap to some extent. There are slight differences between the weighting functions of the MSU and AMSU products, but they are small enough to be neglected for tropospheric temperature calculations.
Channel 1 (50.3 GHz) is sensitive to the lowest two or three kilometers of the atmosphere. Data from this channel is heavily contaminated by emissions from the surface and atmospheric water and ice, and is of limited utility for tropospheric temperature studies. Channel 2 (53.74 GHz) monitors a deeper portion of the atmosphere with its weighting peaked at an altitude of 500 hPa, or approximately 7 km 2. Emissions monitored on this channel are relatively insensitive to humidity and thus are more representative of actual deep troposphere temperatures. This channel also receives input from the surface – possibly up to 10 percent over oceans and 20 percent over mountainous regions such as the Himalayas (NRC, 2000) – though recent research suggests that this impact may be much smaller than previously thought (Litten et al., 2005). Channel 2 is also affected by precipitation sized ice particles in deep convective cloud regions, which can contaminate data from mid-latitude squall lines. Channel 3 (54.96 GHz) measures a deeper portion of the troposphere than Channel 2 and has a weighting function that peaks at an altitude of 250 hPa. At this altitude it straddles the tropopause in extra-tropical latitudes, and thus is of limited utility in generating a pure troposphere signal. Channel 4 (57.95 GHz), which has a weighting function peaking at 70 hPa, receives most of its signal from the lower stratosphere, with the remainder coming from the very uppermost regions of the troposphere. Because the stratosphere is known to be cooling with respect to the troposphere, largely as a result of ozone depletion, and is also significantly impacted by the effects of aerosols and volcanic eruptions, it may respond quite differently than the troposphere to these and other forcings (Bengtsson, 1999; NRC, 2000; IPCC, 2001). Channel 4 measurements provide important ancillary data about how the lower stratosphere might be impacting measurements from other channels.
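The bulk brightness temperature a channel reports can be thought of as a weighted average of the atmospheric temperature profile, with the channel's weighting function supplying the weights. A sketch with a fabricated five-level profile and fabricated weights (the real weighting functions are the continuous curves in Figure 7):

```python
# Bulk brightness temperature as a weighted average of the temperature
# profile: T_b = sum(W_i * T_i) / sum(W_i). The profile and weights
# below are fabricated for illustration.
def bulk_brightness(temps, weights):
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, temps)) / total

# Fabricated profile: warm near the surface, cooling with altitude
temps = [288.0, 275.0, 250.0, 225.0, 210.0]   # deg. K at 5 levels
weights = [0.10, 0.30, 0.35, 0.20, 0.05]      # peaks mid-troposphere
tb = bulk_brightness(temps, weights)
```

Note that the result always lies between the coldest and warmest levels sampled; a channel whose weights extend into the cooling stratosphere will therefore report a smaller warming trend than the troposphere alone is experiencing.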
Because the MSU/AMSU Channel weighting functions overlap and the different scan angles of cross track views see the atmosphere at differing angles, information from different channels and view angles can be combined to generate “synthetic” channels that represent thinner portions of the atmosphere. For instance, a fractional portion of the channel 3 signal can be subtracted from the channel 2 signal to yield a signal representing a thinner portion of the troposphere with less contamination of stratospheric signals. Other combinations of signals from nadir and off-nadir views can be combined to isolate other layers. In the early 90’s, it was shown that by combining Channel 2 data at nadir with fractionally weighted data from 8 of the original 10 off-nadir views, a weighted measurement could be derived that emphasizes a thinner layer of the lower troposphere centered at 4 km altitude (Spencer and Christy, 1992b). This virtual channel has come to be called MSU2LT (for Channel 2 Lower Troposphere). With the advent of the AMSU products, the same philosophy can be followed using the 30 view angles of AMSU products. The newer AMSU weighted lower troposphere view is now called MSUTLT (Christy et al., 2003). Figure 7 shows the weighting function of the MSU2LT channel (labeled as MSU/AMSULT). The MSU2LT and MSUTLT channels remove most of the stratospheric influence, but they are based on a wide angle swath that neglects zonal 3 (that is, east to west) temperature gradients. As such, they are sensitive to temperature fluctuations that happen on timescales comparable to the east to west drift in the satellite’s equatorial crossings and are thus subject to sampling errors. They are also sensitive to direct surface emissions.
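The channel differencing idea can be sketched as follows: subtracting a fraction a of the channel 3 signal from channel 2 and renormalizing yields a synthetic brightness temperature whose effective weighting function is pushed lower in the atmosphere. The coefficient and temperatures below are illustrative assumptions, not the published MSU2LT retrieval coefficients:

```python
# Synthetic-channel sketch: remove stratospheric contamination from
# channel 2 by subtracting a fraction of channel 3 and renormalizing so
# the effective weights still sum to 1. The coefficient a is an
# illustrative assumption, not a published retrieval value.
def synthetic(t2, t3, a=0.35):
    return (t2 - a * t3) / (1.0 - a)

t2 = 252.0   # fabricated channel 2 brightness temperature, deg. K
t3 = 230.0   # fabricated channel 3 (more stratospheric influence)
t_syn = synthetic(t2, t3)
```

Because channel 3 sees more of the cold stratosphere, the synthetic temperature comes out warmer than channel 2 alone; the same differencing applied to trends removes much of the stratospheric cooling signal, at the cost of amplifying the noise in both inputs.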
As measures of deep atmospheric temperature, the MSU and AMSU records far exceed other surface and radiosonde based temperature measuring products. Since they see much larger vertical profiles and have far more complete geographic coverage than other products, they are less subject to local surface layer noise than ground based upper air stations. With predictable orbits, they are less subject to the “geographic noise” of relocated measurement platforms that plagues radiosonde and rocketsonde data. Thus, they provide a degree of consistency and coverage that cannot be replicated by other means. Because of this, most investigators today believe that the MSU and AMSU records present the best available option for resolving open questions regarding climate change and tropospheric temperatures. Even so, it must be remembered that the TIROS-N and TIROS-ATN Series of spacecraft were designed to support local weather forecasting and shorter term regional climate studies. They are lacking in many important respects for long-term climate change studies, and these shortcomings must be taken into account. They can be classified into a few groups as follows.