The Discovery of Global Warming                                                       Spencer Weart
January 2008

Basic Radiation Calculations
The foundation of any calculation of the greenhouse effect was a description of how radiation and heat move through a slice of the atmosphere. At first this foundation was so shaky that nobody could trust the results. With the coming of digital computers and better data, scientists gradually worked through the intricate technical problems. A rough idea was available by the mid 1960s, and by the late 1970s, the calculations looked solid — for idealized cases. Much remained to be done to account for all the important real-world factors, especially the physics of clouds. (This genre of one-dimensional and two-dimensional models lay between the rudimentary qualitative models covered in the essay on Simple Models of Climate and the elaborate three-dimensional General Circulation Models of the Atmosphere. See those essays for developments after ca. 1980.) Warning: this is the most technical of all the essays.
Looking for a complete explanation of greenhouse warming, equations and all? You won’t find it here or anywhere on the Web: first you have to fully grasp at least one good textbook, and even then you can only see how climate may change by running the equations on a large computer model that takes into account all the details of crucial factors like clouds and ocean circulation. (For a link to a draft of a textbook and some other technical information, see the first paragraph on the links page.)


"No branch of atmospheric physics is more difficult than that dealing with radiation. This is not because we do not know the laws of radiation, but because of the difficulty of applying them to gases." — G.C. Simpson(1)

 
Simple numerical models helped scientists feel out the basic physics of climate. The first steps were "energy budget" (or "energy balance") models of only one dimension. You pretended that the atmosphere was the same everywhere around the planet, and looked at how things changed with altitude alone. That meant calculating the flow of heat and radiation up and down through a vertical column that rose from the ground to the top of the atmosphere. You started with the radiation, tracking light and heat rays layer by layer as gas molecules scattered or absorbed them. This was the problem of "radiative transfer," an elegant and difficult branch of theoretical physics.
You could avoid these difficulties (while encountering different problems) by taking another approach. You ignored the differences with height, and used the average absorption and scattering of radiation in the column. The result could be a "zero-dimensional" calculation for the Earth as a whole, or a calculation that varied in other dimensions, for example latitude.  
Such studies had begun in the 19th century, starting with crude calculations for the energy balance of the whole planet, as if it were a rock hanging in front of a fire. (Follow the links at right for more.) A more sophisticated pioneer was Samuel P. Langley, who in the summer of 1881 climbed Mount Wilson in California, measuring the fall of temperature as the air got thinner. He correctly inferred that without any air at all, the Earth's temperature would be lower still — a direct demonstration of the greenhouse effect. Langley followed up with calculations indicating that if the atmosphere did not absorb particular kinds of radiation, the ground-level temperature would drop well below freezing.(2) Subsequent workers crafted increasingly refined calculations.
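The whole-planet energy balance these early workers estimated can be sketched in a few lines. Below is a minimal zero-dimensional calculation of the kind described, using modern values for the solar constant and albedo (the 19th-century numbers were far rougher); it is an illustration, not anyone's historical computation.

```python
# Zero-dimensional energy balance: treat the Earth as a single point
# that absorbs sunlight and radiates to space as a blackbody.
# Constants are modern values, used only for illustration.

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight flux at Earth's orbit
ALBEDO = 0.30             # fraction of sunlight reflected away
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature(solar=SOLAR_CONSTANT, albedo=ALBEDO):
    """Temperature at which the planet radiates exactly the sunlight it absorbs.

    The factor of 4 spreads the intercepted beam (cross-section pi R^2)
    over the whole rotating sphere (surface area 4 pi R^2).
    """
    absorbed = solar * (1.0 - albedo) / 4.0   # W/m^2, global average
    return (absorbed / SIGMA) ** 0.25

t_eff = effective_temperature()   # roughly 255 K, about -18 C
# The observed mean surface temperature is about 288 K; the gap of
# roughly 33 K is the warming Langley attributed to the atmosphere.
```

The airless-Earth temperature comes out well below freezing, which is the point Langley's calculations were driving at.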

In 1896 Svante Arrhenius went a step farther, grinding out a numerical computation of the radiation transfer for atmospheres with differing amounts of carbon dioxide gas (CO2). He did the mathematics not for just one globally averaged column but for a set of columns, each representing the average for a zone of latitude. This two-dimensional or "zonal" model cost Arrhenius a vast amount of arithmetical labor, indeed far more than was reasonable. The data on absorption of radiation (from Langley) was sketchy, and Arrhenius's theory left out some essential factors. On such a shaky foundation, no computation could give more than a crude hint of how changes in the amount of a gas could possibly affect climate.

The main challenge was to calculate how radiation passed through the atmosphere, and what that meant for the temperature at the surface. That would tell you the most basic physical input to the climate system: the planet's radiation and heat balance. This was such a tough task that all by itself it became a minor field of research, tackled by scientist after scientist with limited success. Through the first half of the 20th century, workers refined the one-dimensional and two-dimensional calculations. To figure the Earth's radiation budget they needed to fix in detail how sunlight heated each layer of the atmosphere, how this energy moved among the layers or down to warm the surface, and how the heat energy that was radiated back up from the surface escaped into space. Different workers introduced a variety of equations and mathematical techniques to deal with these problems, all of them primitive.(3*)
A landmark was work by George Simpson. He was the first to recognize that it was necessary to take into account, in detail, how water vapor absorbed or transmitted radiation in different parts of the spectrum. Moving from a one-dimensional model into two dimensions, Simpson also calculated how the winds carry energy from the sun-warmed tropics to the poles, not only as the heat in the air itself but also as heat energy locked up in water vapor.(4*) Other scientists found that if they took into account how air movements conveyed heat up and down, even a crude one-dimensional model would give fairly realistic figures for the variation of temperature with height in the atmosphere. E.O. Hulburt worked out a pioneering example of such a "radiative-convective" model in 1931. His one-dimensional calculations agreed with Arrhenius's rough estimate that doubling or halving the amount of CO2 in the atmosphere would raise or lower the Earth's surface temperature several degrees. However, hardly anyone noticed Hulburt's rudimentary model.(5*) Most scientists continued to doubt the hypothesis that adding or subtracting CO2 from the atmosphere could affect the climate. They believed that laboratory measurements at the turn of the century had proved that the CO2 and water vapor in the atmosphere already blocked infrared radiation so thoroughly that adding more gas would make no difference.

In 1938, when G.S. Callendar attempted to revive the theory of carbon dioxide warming, he offered his own simple one-dimensional calculation (he apparently didn't know about Hulburt's work). Dividing the atmosphere into twelve layers, Callendar tried to calculate how much heat radiation would come downward to the surface from each layer, and how the amount of radiation would change if more CO2 were added. He concluded that in future centuries, as humanity put more gas into the air, the result could be a degree or so of warming. But the model (like Arrhenius’s and Hulburt’s) was obviously grossly oversimplified, ignoring many key interactions. It failed to convince anyone.

Callendar himself pointed out in 1941 that the way CO2 absorbed radiation was not so simple as every calculation so far had assumed. He assembled measurements, made in the 1930s, which showed that at the low pressures that prevailed in the upper atmosphere, the amount of absorption varied in complex patterns through the infrared spectrum. Nobody was ready to attempt the vast amount of calculation needed to work out effects point by point through the spectrum, since the data were too sketchy to support firm conclusions anyway.(6)  
Solid methods for dealing with radiative transfer through a gas were not worked out until the 1940s. The great astrophysicist Subrahmanyan Chandrasekhar and others, concerned with the way energy moved through the interiors and atmospheres of stars, forged a panoply of exquisitely sophisticated equations and techniques. The problem was so subtle that Chandrasekhar regarded his monumental work as a mere starting-point. It was too subtle and complex for meteorologists.(7) They mostly ignored the astrophysical literature and worked out their own shortcut methods, equations that they could feed through a computer to get rough numerical results. What drove the work was a need for immediate answers to questions about how infrared radiation penetrated the atmosphere — a subject of urgent interest to the military for signaling, sniping, reconnaissance and later for heat-guided missiles.
The calculations could not be pushed far when people scarcely had experimental data to feed in. There were almost no reliable numbers on how water vapor, clouds, CO2, and so forth each absorbed or scattered radiation of various kinds at various heights in the atmosphere. Laboratories began to gather good data only in the 1950s, motivated largely by military concerns.(8)

Well into the 1960s, important work continued to be done with the "zero-dimensional" models that ignored how things varied from place to place and even with height in the atmosphere, models that calculated the radiation budget for the planet in terms of its total reflectivity and absorption. Those who struggled to add in the vertical dimension had to confront the subtleties of radiative transfer theory and, harder still, they had to figure how other forms of energy moved up and down: the spin of eddies, heat carried in water vapor, and so forth. A reviewer warned in 1962 that "the reader may boggle at the magnitude of the enterprise" of calculating the entire energy budget for a column of air — but, he added encouragingly, "machines are at hand."(9)

Digital computers were indeed being pressed into service. Some groups were exploring ways to use them to compute the entire three-dimensional general circulation of the atmosphere. But one-dimensional radiation models would be the foundation on which any grander model must be constructed — a three-dimensional atmosphere was just an assembly of a great many one-dimensional vertical columns, exchanging air with one another. It would be a long time before computers could handle the millions of calculations that such a huge model required. So people continued to work on improving the simpler models, now using more extensive electronic computations.

A pioneer was the physicist Gilbert N. Plass, who had been doing lengthy calculations of infrared absorption in the atmosphere. He held an advantage over earlier workers, having not only the use of digital computers, but also better numbers, from spectroscopic measurements done by a group of experimenters he was collaborating with at the Johns Hopkins University. Military agencies supported their work for its near-term practical applications, but Plass happened to have read Callendar's papers, and was personally intrigued by the old puzzle of the ice ages and other climate changes.  

Most experts stuck by the old objection to the greenhouse theory of climate change — in the parts of the spectrum where infrared absorption took place, the CO2 plus the water vapor that were already in the atmosphere sufficed to block all the radiation that could be blocked. In this "saturated" condition, raising the level of the gas could not change anything. But this argument was falling into doubt. The discovery of quantum mechanics in the 1920s had opened the way to an accurate theory for the details of how absorption took place, developed by Walter Elsasser during the Second World War. Precise laboratory measurements during the war and after confirmed a new outlook. In the frigid and rarified upper atmosphere where the crucial infrared absorption takes place, the nature of the absorption is different from what scientists had assumed from the old sea-level measurements.

 

Take a single molecule of CO2 or H2O. It will absorb light only in a set of specific wavelengths, which show up as thin dark lines in a spectrum. In a gas at sea-level temperature and pressure, the countless molecules colliding with one another at different velocities each absorb at slightly different wavelengths, so the lines are broadened considerably. With the primitive infrared instruments available earlier in the 20th century, scientists saw the absorption smeared out into wide bands. And they had no theory to suggest anything else.

A modern spectrograph shows a set of peaks and valleys superimposed on each band, even at sea-level pressure. In cold air at low pressure, each band resolves into a cluster of sharply defined lines, like a picket fence. There are gaps between the H2O lines where radiation can get through unless blocked by CO2 lines. That showed up clearly in data compiled for the U.S. Air Force, drawing the attention of researchers to the details of the absorption, especially at high altitudes. Moreover, researchers working for the Air Force had become acutely aware of how very dry the air gets at upper altitudes—indeed the stratosphere has scarcely any water vapor at all. By contrast, CO2 is fairly well mixed all through the atmosphere, so as you look higher it becomes relatively more significant.(9a)
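The contrast between the smeared-out bands seen at sea level and the "picket fence" at low pressure can be illustrated with Lorentzian line shapes, whose width grows with pressure (collision broadening). The line spacing and widths below are invented illustrative numbers, not real CO2 spectroscopy.

```python
from math import pi

# Pressure (collision) broadening, sketched with Lorentzian line shapes.
# Line positions and widths are made-up illustrative values.

def lorentzian(nu, center, half_width):
    """Normalized Lorentz profile at wavenumber nu."""
    return (half_width / pi) / ((nu - center) ** 2 + half_width ** 2)

def absorption(nu, pressure_atm, centers, width_at_1atm=0.5):
    """Summed absorption of a cluster of lines; width scales with pressure."""
    hw = width_at_1atm * pressure_atm
    return sum(lorentzian(nu, c, hw) for c in centers)

centers = [float(c) for c in range(660, 670)]  # a picket fence, 1 cm^-1 apart
on_line = 664.0                                # a line center
mid_gap = 664.5                                # halfway between two lines

# Opacity in the gap, relative to the line center:
ratio_sea  = absorption(mid_gap, 1.00, centers) / absorption(on_line, 1.00, centers)
ratio_high = absorption(mid_gap, 0.05, centers) / absorption(on_line, 0.05, centers)
# At 1 atm the broadened lines overlap, so the gap is nearly as opaque
# as the line centers. At stratospheric pressure the lines sharpen and
# radiation can slip through the gaps between them.
```

Running this, the gap-to-center opacity ratio is large at sea-level pressure and collapses at low pressure, which is just the behavior the Air Force data revealed.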

 
The main points could have been understood in the 1930s if scientists had looked at the greenhouse effect carefully (or if they had noticed Hulburt’s paper, which did take a careful look). But it was in the 1950s, with the new calculations and measurements in hand, that a few theoretical physicists realized the question was worth a long and careful new look. Most earlier scientists who looked at the greenhouse effect had treated the atmosphere as a slab, and only tried to measure and calculate radiation in terms of the total content of gas and moisture in a column to the top of the atmosphere. But if you were prepared to tackle the full radiative transfer calculations, layer by layer, you would begin to see things differently. What if water vapor did entirely block any radiation that could have been absorbed by adding CO2 in the lower layers of the atmosphere? It was still possible for CO2 to make a difference in the thin, cold upper layers. With the new absorption data in hand, Lewis D. Kaplan ground through some extensive numerical computations. In 1952, he showed that in the upper atmosphere the saturation of CO2 lines should be weak. Thus adding more of the gas would certainly change the overall balance and temperature structure of the atmosphere.(10)

Neither Kaplan nor anyone else of the time was thinking clearly enough about the greenhouse effect to point out that it will operate regardless of the details of the absorption. The trick, again, was to follow how the radiation passed up layer by layer. Consider a layer of the atmosphere so high and thin that heat radiation from lower down would slip through. Add more gas, and the layer would absorb some of the rays. Therefore the place from which heat energy finally left the Earth would shift to a higher layer. That would be a colder layer, unable to radiate heat so efficiently. The imbalance would cause all the lower levels to get warmer, until the high levels became hot enough to radiate as much energy back out as the planet received. (For additional explanation of the "greenhouse effect," follow the link at right to the essay on Simple Models.) Adding carbon dioxide will make for a stronger greenhouse effect regardless of saturation in the lower atmosphere.
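This layer-by-layer argument is captured by the classic toy model of an atmosphere made of N opaque layers, each absorbing all the infrared from below and radiating equally up and down. Balancing energy at every level gives a surface temperature of T_eff times (N+1) to the one-quarter power, so adding an absorbing layer always warms the surface, saturation or no. A deliberately crude sketch, not a real radiative-transfer calculation:

```python
# N-layer "gray" atmosphere: each layer is opaque to infrared,
# absorbing everything from below and radiating equally up and down.
# Energy balance at every level gives
#     T_surface = T_eff * (N + 1) ** 0.25
# so an added absorbing layer (more greenhouse gas) always warms the
# surface, regardless of saturation lower down.

T_EFF = 255.0  # K, emission temperature fixed by the absorbed sunlight

def surface_temperature(n_layers):
    return T_EFF * (n_layers + 1) ** 0.25

one_layer = surface_temperature(1)   # about 303 K
two_layers = surface_temperature(2)  # about 336 K: the level that radiates
                                     # to space moved up, and everything
                                     # below it warmed
```

With zero layers the surface sits at the bare effective temperature; each added layer shifts the final emission to a higher, colder level and warms everything beneath it.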

(And actually, there is no saturation. The primitive infrared techniques of the laboratory measurements made at the turn of the century had given a misleading result. Studies from the 1940s on have shown that there is not nearly enough CO2 in the atmosphere to block most of the infrared radiation in the bands of the spectrum where the gas absorbs it. That’s even the case for water vapor in deserts where the air is extremely dry.)
If anyone had put forth these simple arguments in the 1950s, they would not have convinced other scientists unless they were backed up by a specific, numerical calculation. The structure of the H2O and CO2 absorption bands at a given pressure and temperature did need to be considered in figuring just how much radiation is absorbed in any given layer. Every detail had to be taken into account in order to calculate whether adding a greenhouse gas would warm the atmosphere negligibly or by many degrees.

Plass pursued these details with a thorough set of one-dimensional computations, taking into account the structure of the absorption bands at all layers of the atmosphere. His final figures showed convincingly that adding or subtracting CO2 could seriously affect the radiation balance layer by layer through the atmosphere, altering the temperature by a degree or more down to ground level.(10a) From that point on, nobody could dismiss the theory with the simple old objections. However, Plass's specific numerical predictions for climate change made little impression on his colleagues. For his calculation relied on unrealistic simplifications. Like Callendar, Plass had ignored a variety of important effects — above all the way a rise of global temperature might cause the atmosphere to contain more water vapor and more clouds. As one critic warned, Plass's "chain of reasoning appears to miss so many middle terms that few meteorologists would follow him with confidence."(11)

Fritz Möller attempted to follow up with a better calculation, and came up with a rise of 1.5°C (roughly 3°F) for doubled CO2. But when Möller took into account the increase of absolute humidity with temperature, by holding relative humidity constant, his calculations showed a massive feedback. A rise of temperature increased the capacity of the air to hold moisture (the "saturation vapor pressure"), and the result was an increase of absolute humidity. More water vapor in the atmosphere redoubled the greenhouse effect — which would raise the temperature still higher, and so on. Möller discovered "almost arbitrary temperature changes." That seemed unrealistic, and he took recourse in a calculation that a mere 1% increase of cloudiness (or a 3% drop in water vapor content) would cancel any temperature rise due to a 10% increase in CO2. He concluded that "the theory that climatic variations are affected by variations in the CO2 content becomes very questionable." Indeed his method for getting a global temperature, like Plass's and Arrhenius's, was later shown to be seriously flawed.  
Yet most research begins with flawed theories, which prompt people to make better ones. Some scientists found Möller's calculation fascinating. Was the mathematics trying to tell us something truly important? It was a disturbing discovery that a simple calculation (whatever problems it might have in detail) could produce a catastrophic outcome. Huge climate changes, then, were at least theoretically conceivable. Moreover, it was now more clear than ever that modelers would have to think deeply about feedbacks, such as changes in humidity and their consequences.(12*)
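Möller's runaway can be seen in a bare-bones feedback loop: each degree of warming puts more vapor in the air (saturation vapor pressure rises roughly 7% per degree), which traps more heat and adds a further fraction of a degree, and so on. The forcing and feedback numbers below are illustrative, not Möller's.

```python
# Water-vapor feedback as a bare iteration: each degree of warming
# feeds back a further fraction of a degree via added vapor.
# Converges to forcing / (1 - f) when f < 1; runs away when f >= 1,
# echoing Möller's "almost arbitrary temperature changes".
# The numbers are illustrative, not Möller's.

def warming_with_feedback(forcing_K, feedback_fraction, steps=200):
    dT = 0.0
    for _ in range(steps):
        dT = forcing_K + feedback_fraction * dT
    return dT

modest  = warming_with_feedback(1.2, 0.4)    # converges to about 2.0 K
runaway = warming_with_feedback(1.2, 1.05)   # grows without bound
```

A feedback just below unity gives a large but finite amplification; push it to unity or beyond and the arithmetic explodes, exactly the disturbing possibility the text describes.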

Clouds were always the worst problem. Obviously the extent of the planet's cloud cover might change along with temperature and humidity. And obviously even the simplest radiation balance calculation required a number that told how clouds reflect sunlight back into space. The albedo (amount of reflection) of a layer of stratus clouds had been measured at 0.78 back in 1919, and for decades this was the only available figure. Finally around 1950 a new study found that for clouds in general, an albedo of 0.5 was closer to the mark. When the new figure was plugged into calculations, the results differed sharply from all the preceding ones (in particular, the flux of heat carried from the equator to the poles turned out some 25% greater than earlier estimates).(13*) Worse, besides the albedo you needed to know the amount and distribution of cloudiness around the planet, and for a long time people had only rough guesses. In 1954, two scientists under an Air Force contract compiled ground observations of cloudiness in each belt of latitude. Their data were highly approximate and restricted to the Northern Hemisphere, but there was nothing better until satellite measurements came along in the 1980s.(14) And all that only described clouds as currently observed, not even considering how cloudiness might change if the atmosphere grew warmer.  
Getting a proper calculation for the actions of water vapor seemed all the more important after Möller's discovery that a simple model with water vapor feedback could show catastrophic instability. No doubt his model was over simple, but what might the real climate actually do? Partly to answer that question, in the mid 1960s Syukuro Manabe with collaborators developed the first approximately realistic model. They began with a one-dimensional vertical slice of atmosphere, averaged over a zone of latitude or over the entire globe. In this column of air they modeled important features such as how the altitude of cloud layers would affect the way each layer of air trapped radiation. Most important, they included the way convective updrafts of warm, moisture-laden air carry heat up from the surface.  
That was a crucial step beyond trying to calculate surface temperatures by considering only the energy balance of radiation reaching and leaving the surface, what Möller and everyone else had done. The key thing about greenhouse gases, after all, is that they block radiation from escaping from the Earth's surface into space. Manabe understood that a significant amount of energy leaves the surface not as radiation but through convection, in the rising of warmed air. Most of that is carried as latent heat-energy in water vapor, for example in the columns of humid air that climb into thunderclouds. The energy eventually reaches thin levels near the top of the atmosphere, and is radiated out into space from there. If the surface got warmer, convection would carry more heat up. Möller's model, and all the earlier calculations back to Arrhenius, had been flawed because they failed to take proper account of this basic process. (With one exception. In his 1931 calculation, Hulburt too had come up with an unreasonably high surface temperature. But he realized that this was because he had considered only the transfer of radiation, and that if the lower atmosphere were so hot it would be unstable — the hot air would rise. He put in a crude measure for transfer of heat by convection, and got a reasonably correct figure for the greenhouse effect... which nobody noticed.)(15*)  
In the numbers printed out for Manabe's model in 1964, some of the general characteristics, although by no means all, looked rather like the real atmosphere.(16) By 1967, after further improvements in collaboration with Richard Wetherald, Manabe was ready to see what might result from raising the level of CO2. The result was the first somewhat convincing calculation of global greenhouse effect warming. The movement of heat through convection kept the temperature from running away to the extremes Möller had seen. Overall, the new model predicted that if the amount of CO2 doubled, temperature would rise a plausible 2°C.(17*) In the view of many experts, this widely noted calculation (to be precise: the Manabe-Wetherald one-dimensional radiative-convective model) gave the first reasonably solid evidence that greenhouse warming really could happen.

Many gaps remained in radiation balance models. One of the worst was the failure to include dust and other aerosols. It was impossible even to guess whether they warmed or cooled a given latitude zone. That would depend on many things, such as whether the aerosol was drifting above a bright surface (like desert or snow) or a dark one. Worse, there were neither good data nor reliable physics calculations on how aerosols affected cloudiness.(18) One attempt to attack the problem came in 1971 when S. I. Rasool and Stephen Schneider of NASA worked up their own globally averaged radiation-balance model, with fixed relative humidity, cloudiness, etc. The pioneering feature of their model was an extended calculation for dust particles. They found that the way humans were putting aerosols into the atmosphere could significantly affect the balance of radiation. The consequences for climate could be serious — an enormous increase of pollution, for example, might cause a dire cooling — although they could not say for sure. They also calculated that under some conditions a planet could suffer a "runaway greenhouse" effect. As increasing warmth evaporated ever more water vapor into the air, the atmosphere would turn into a furnace like Venus's. Fortunately our own planet was apparently not at risk.(19*)

By the 1970s, thanks partly to such one-dimensional studies, scientists were starting to see that the climate system was so rich in feedbacks that a simple set of equations might not give an approximate answer, but a completely wrong one. The best way forward would be to use a model of a vertical column through the atmosphere as the basic building-block for fully three-dimensional models. Nevertheless, through the 1970s and into the 1980s, a number of people found uses for less elaborate models.

For understanding the greenhouse effect itself, one-dimensional radiative-convective models remained central. Treating the entire planet as a single point allowed researchers to include intricate details of radiation and convection processes without needing an impossible amount of computing time.(20) These models were especially useful for checking the gross effects of influences that had not been incorporated in the bigger models. As late as 1985, this type of schematic calculation gave crucial estimates for the greenhouse effect of a variety of industrial gases (collectively they turned out to be even more important than CO2).(21)

Another example was a 1978 study by James Hansen's NASA group, which used a one-dimensional model to study the effects on climate of the emissions from volcanic eruptions. They got a realistic match to the actual changes that had followed a 1963 explosion. In 1981, the group got additional important results by investigating various feedback mechanisms while (as usual) holding parameters like relative humidity and cloudiness fixed at a given temperature. Taking into account the dust thrown into the atmosphere by volcanic eruptions plus an estimate of solar activity variations, they got a good match to modern temperature trends.(22)

Primitive one-dimensional models were also valuable, or even crucial, for studies of conditions far from normal. Various groups used simple sets of equations to get a rough picture of the basic physics of the atmospheres of other planets such as Mars and Venus. When they got plausible rough results for the vastly different conditions of temperature, pressure, and even chemical composition, that confirmed that the basic equations were broadly valid. Primitive models could also give an estimate of how the Earth's own climate system might change if it were massively clouded by dust from an asteroid strike, or by the smoke from a nuclear war.

Other scientists worked with zonal energy-balance models, taking the atmosphere's vertical structure as given while averaging over zones of latitude. These models could do quick calculations of surface temperatures from equator to pole. They were useful to get a feeling for the effects of things like changes in ice albedo, or changes in the angle of sunlight as the Earth's orbit slowly shifted. More complex two-dimensional models, varying for example in longitude as well as latitude, were becoming useful chiefly as pilot projects and testing-grounds for the far larger three-dimensional "general circulation models" (GCMs). Even the few scientists who had access to months of time on the fastest available computers sometimes preferred not to spend it all on a few gigantic runs. Instead they could do many runs of a simpler model, varying parameters in order to get an intuitive grasp of the effects.
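The flavor of such zonal models can be conveyed by a minimal Budyko-style energy-balance sketch: a few latitude bands, a linear fit for outgoing infrared, a transport term pulling each band toward the global mean, and an albedo that jumps when a band gets cold enough to ice over. The linear infrared fit A + B·T follows Budyko's form, but the band weights, transport coefficient, and thresholds here are invented for illustration.

```python
# Minimal Budyko-style zonal energy-balance model: five latitude bands,
# linear outgoing infrared, crude transport, and an ice-albedo switch.
# Coefficients are illustrative, not tuned to the real Earth.

A, B = 203.3, 2.09          # outgoing IR = A + B*T (T in deg C, W/m^2)
TRANSPORT = 3.8             # W/m^2 per deg C of departure from the mean
ALBEDO_WARM, ALBEDO_ICE = 0.30, 0.60
T_ICE = -10.0               # bands colder than this are taken as ice-covered
HEAT_CAP = 10.0             # arbitrary heat capacity; sets only the time scale

def run_ebm(solar_mean, weights, steps=3000, dt=0.1):
    """weights: relative annual sunlight per band, equator to pole."""
    temps = [15.0] * len(weights)
    for _ in range(steps):
        mean_t = sum(temps) / len(temps)
        nxt = []
        for w, t in zip(weights, temps):
            albedo = ALBEDO_ICE if t < T_ICE else ALBEDO_WARM
            absorbed = solar_mean * w * (1.0 - albedo)
            flux = absorbed - (A + B * t) + TRANSPORT * (mean_t - t)
            nxt.append(t + dt * flux / HEAT_CAP)
        temps = nxt
    return temps

WEIGHTS = [1.25, 1.15, 1.00, 0.75, 0.55]     # equator to pole

temps_now = run_ebm(340.0, WEIGHTS)          # pole chilly but ice-free
temps_dim = run_ebm(340.0 * 0.92, WEIGHTS)   # with the sun 8% dimmer the
                                             # pole ices over, and the albedo
                                             # feedback drives it far colder
                                             # than the dimming alone would
```

Varying one parameter at a time in a model like this, run after run, is exactly the kind of intuition-building exercise the text describes; the ice-albedo switch shows how a modest change in input can flip a band into a qualitatively different state.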

To give one example of many, a group at the Lawrence Livermore Laboratory in California used a zonal model to track how cloud cover interfered with the heat radiation that escaped from the Earth. The relationship changed when they doubled the amount of CO2. They traced the cause of the change to variations in the height and thickness of clouds at particular latitudes. As one expert pointed out, "it is much more difficult to infer cause-effect relationships in a GCM."(23) A GCM's output was hundreds of thousands of numbers, a simulated climate nearly as complicated and inscrutable as the Earth's climate itself.  
Simple models also served as testbeds for "parameterizations" — the simple equations or tables of numbers that modelers built into GCMs to represent averages of quantities they lacked the power to compute for every cubic meter of atmosphere. You could fiddle with details of physical processes, varying things in run after run (which would take impossibly long in a full-scale model) to find which details really mattered. Still, as one group admitted, simple models were mostly useful to explore mechanisms, and "cannot be relied upon for quantitative discussion."(24)  
The basic models could still be questioned at the core. Most critical were the one-dimensional radiative-convective models for energy transfer through a single column of the atmosphere, which were often taken over directly for use in GCMs. In 1979, Reginald Newell and Thomas Dopplick pointed to a weakness in the common GCM prediction that increased CO2 levels would bring a large greenhouse warming. Newell and Dopplick noted that the prediction depended crucially on assumptions about the way a warming atmosphere would contain more of that other greenhouse gas, water vapor. Suggesting that the popular climate models might overestimate the temperature rise by an order of magnitude, the pair cast doubt on whether scientists understood the greenhouse effect at all.(25)  
In 1980 a scientist at the U.S. Water Conservation Laboratory in Arizona, Sherwood Idso, joined the attack on the models. In articles and letters to several journals, he asserted that he could determine how sensitive the climate was to additional gases by applying elementary radiation equations to some basic natural "experiments." One could look at the difference in temperature between an airless Earth and a planet with an atmosphere, or the difference between Arctic and tropical regions. Since these differences were only a few tens of degrees, he computed that the smaller perturbation that came from doubling CO2 must cause only a negligible change, a tenth of a degree or so.(26)  
Stephen Schneider and other modelers counterattacked. They showed that Idso, Newell, and Dopplick were misusing the equations — indeed their conclusions were "simply based upon various violations of the first law of thermodynamics." Refusing to admit error, Idso got into a long technical controversy with modelers, which on occasion descended into personal attacks.(27) It was the sort of conflict that an outsider might find arcane, almost trivial. But to a scientist, a challenge to whether one's work made scientific sense or nonsense touched the deepest feelings of personal value and integrity.
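The arithmetic at the heart of the dispute can be sketched with the standard energy-balance bookkeeping. All the numbers below are round illustrative values of the kind found in textbooks, not the disputants' actual figures: roughly 4 W/m² of radiative forcing for doubled CO2, a no-feedback (Planck) restoring response of about 3.2 W/m² per degree of warming, and a water-vapor feedback fraction f chosen purely for illustration.

```python
# Back-of-envelope energy-balance arithmetic (illustrative round
# numbers, not the disputants' actual figures).

# Radiative forcing from doubled CO2, in W/m^2 (a commonly cited
# round value).
forcing = 4.0

# Planck (no-feedback) response of outgoing radiation to surface
# warming, in W/m^2 per K: a warmer planet emits more, restoring
# the energy balance.
planck_response = 3.2

# No-feedback warming: the temperature rise needed for the extra
# emission to cancel the forcing.
dT_no_feedback = forcing / planck_response          # = 1.25 K

# Water-vapor feedback: a warmer atmosphere holds more of the
# greenhouse gas H2O, canceling a fraction f of the restoring
# response (f = 0.5 here is purely illustrative).
f = 0.5
dT_with_feedback = dT_no_feedback / (1.0 - f)       # = 2.5 K

print(dT_no_feedback, dT_with_feedback)
```

Framed this way, a sensitivity of only a tenth of a degree, as Idso's ratio arguments gave, would require the planet to shed the extra few watts per square meter with almost no warming at all, which is where the modelers' "first law of thermodynamics" objection bit. And the assumed feedback fraction f visibly controls the final answer, which is why Newell and Dopplick's challenge to the water-vapor assumptions mattered so much.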

 



Most experts remained confident that the radiation models used as the basis for GCMs were fundamentally sound, so long as one did not push them too far. The sets of equations used in the various elementary models differed greatly from one another, and all differed from the elaborate GCM computations, so the approaches gave an almost independent check on one another. Where they all agreed, the results were very probably robust — and where they didn't, well, everyone would have to go back to their blackboards.(28)  
The most important such comparison of various elementary models and GCMs was conducted for the U.S. government in 1979 by a panel of the National Academy of Sciences, chaired by Jule Charney.(29) The panel's report announced that the simple models agreed quite well with one another and with the GCMs; simple one-dimensional radiative-convective models, in particular, showed a temperature increase only about 20% lower than the best GCMs. That gave a new level of confidence in the predictions, from every variety of model, that doubled CO2 would bring significant warming. As a 1984 review explained, the various simple radiative-convective and energy-balance models all continued to show remarkably good agreement with one another: doubling CO2 would change temperature within a range of roughly 1.3 to 3.2°C (that is, 2.3 to 5.8°F). And that was comfortably within the range calculated by the big general circulation models (with their wide variety of assumptions about feedbacks and other conditions, these gave a wider spread of possible temperatures).(30)

 

 



Much remained to be done before anyone could be truly confident in these findings. There was the problem of cloud feedback, in particular, which the Charney panel had singled out as one of the "weakest links." Simple models would continue to be helpful for investigating such problems. Otherwise the Charney panel's report marked the successful conclusion of the program of simple radiation calculations. While they would still provide useful guidance on specialized topics, their main job in the future would be to serve as a foundation for the full apparatus of the general circulation models.

Note: this Website does not cover developments from the 1980s forward in radiation models (nor the technical details of the other components of general circulation models, increasingly numerous and sophisticated).

 

RELATED:

Home
General Circulation Models of the Atmosphere

 NOTES

1. Simpson (1928), p. 70. BACK

2. As Langley later realized, his estimate went much too far below freezing, Langley (1884); see also Langley (1886). BACK

3. The pioneer was W.H. Dines, who gave the first explicit model including infrared radiation upward and downward from the atmosphere itself, and energy moved up from the Earth's surface into the atmosphere in the form of heat carried by moisture, Dines (1917); Hunt et al. (1986) gives a review. BACK

4. Simpson began with a gray-body calculation, Simpson (1928); very soon after he reported that this paper was worthless, for the spectral variation must be taken into account, Simpson (1928); 2-dimensional model (mapping ten degree squares of latitude and longitude): Simpson (1929); a pioneer in pointing to latitudinal transport of heat by atmospheric eddies was Defant (1921); for other early energy budget climate models taking latitude into account, not covered here, see Kutzbach (1996), pp. 354-59. BACK

5. Hulburt (1931); a still better picture of the vertical temperature structure, in mid-latitudes, was derived by Möller (1935). BACK

6. Callendar (1941); low-pressure resolution of details was pioneered by Martin and Baker (1932). BACK

7. Chandrasekhar (1950), which includes historical notes. Most of this work was first published in the Astrophysical Journal, a publication that meteorological papers of the period scarcely ever referenced. BACK

8. For a review at the time, see Goody and Robinson (1951). BACK

9. Sheppard (1962), p. 93. BACK

9a. The infrared database used to this day descends from data compiled by the Air Force Geophysical Laboratory at Hanscom, MA, referred to in early radiative transfer textbooks as the "AFGL Tape." I am grateful to Raymond F. Pierrehumbert for clarifying important points in this section. BACK

10. Kaplan (1952). BACK

10a. Plass (1956); see also Plass (1956); Plass (1956); Möller (1957) reviews the state of understanding as of about 1955. BACK

11. Kaplan (1960); see exchange of letters with Plass, Plass and Kaplan (1961); "chain of reasoning:" Crowe (1971), p. 486; another critique: Sellers (1965), p. 217. BACK

12. Möller (1963), quote p. 3877. Möller recognized that his calculation, since it did not take all feedbacks into account, gave excessive temperatures, p. 3885. BACK

13. Houghton (1954). Houghton did not discuss whether an important part of the heat flux might be carried by the oceans. BACK

14. Published only in an Air Force contract report, Telegdas and London (1954). BACK

15. The earlier workers mostly assumed that the flux of sensible and latent heat would be fixed. Möller was aware that this was an oversimplification which needed further work. Arrhenius further had inadequate data for water vapor absorption, while Callendar and Plass left out the water vapor feedback altogether. I thank S. Manabe for clarifying these matters. Hulburt (1931). BACK

16. Manabe and Strickler (1964); see also Manabe et al. (1965); the 1965 paper was singled out by National Academy of Sciences (1966), see pp. 65-67 for general discussion of this and other models. BACK

17. "Our model does not have the extreme sensitivity... adduced by Möller." Manabe and Wetherald (1967), quote p. 241; the earlier paper, Manabe and Strickler (1964), used a fixed vertical distribution of absolute humidity, whereas the 1967 work more realistically had moisture content depend upon temperature by fixing relative humidity, a method adopted by subsequent modelers. 21st-century modelers recognized that relative humidity tends to remain constant in the lowest kilometer or so of the atmosphere but follows a more complex evolution in higher levels. BACK

18. The pioneer radiation balance model incorporating aerosols was Freeman and Liou (1979); for cloudiness data they cite Telegdas and London (1954). BACK

19. Rasool and Schneider (1971). This paper has been cited by skeptics of global warming as just about the only example of a true scientific paper that actually predicted an imminent ice age. In fact it was only a rough calculation of possible effects of very large human inputs. They underestimated the effects of greenhouse gas increases but admitted their estimate was unreliable. See the essay on aerosols. BACK

20. Ramanathan and Coakley (1978) gives a good review, see p. 487. BACK

21. Ramanathan et al. (1985). BACK

22. Hansen et al. (1978); another pioneer radiation balance model incorporating aerosols was Freeman and Liou (1979); Hansen et al. (1981). BACK

23. Potter et al. (1981); quote: Ramanathan and Coakley (1978), p. 487. BACK

24. GCMs were "typically as complicated and inscrutable as the Earth's climate..." simple models "cannot be relied upon," Washington and Meehl (1984), p. 9475. BACK

25. Newell and Dopplick (1979). BACK

26. Idso (1980); Idso (1987). BACK

27. Schneider et al. (1980), see pp. 7-8; Ramanathan (1981) (with the aid of W. Washington's model); National Research Council (1982); Cess and Potter (1984), quote p. 375; Schneider (1984); Webster (1984); for further references, see Schneider and Londer (1984); cf. reply, Idso (1987); the controversy is reviewed by Frederick M. Luther and Robert D. Cess in MacCracken and Luther (1985), App. B, pp. 321-34; see also Gribbin (1982), pp. 225-32. BACK

28. Schneider and Dickinson (1974), p. 489; North et al. (1981), quote p. 91, see entire articles for review. BACK

29. National Academy of Sciences (1979). BACK

30. Schlesinger (1984). BACK

copyright © 2003-2007 Spencer Weart & American Institute of Physics