| Climate is governed by the general circulation of the atmosphere:
the global pattern of air movements, with its semi-tropical trade
winds, its air masses rising in the tropics to descend farther north,
its cyclonic storms that carry energy and moisture through middle
latitudes, and so forth. Many meteorologists suspected that shifts
in this pattern were a main cause of climate change. They could only
guess about such shifts, for the general circulation was poorly mapped
before the 1940s (even the jet streams remained to be discovered).
The Second World War and its aftermath brought a phenomenal increase
in observations from ground level up to the stratosphere, which finally
revealed all the main features. Yet up to the 1960s, the general circulation
was still only crudely known, and this knowledge was strictly observational.
[Figure: The general circulation of the atmosphere]
| From the 19th century
forward, many scientists had attempted to explain the general pattern
by applying the laws of the physics of gases to a heated, rotating
planet. All their ingenious efforts failed to derive a realistic mathematical
solution. The best mathematical physicists could only offer simple
arguments for the character of the circulation, arguments which might
seem plausible but in fact were mere hand-waving.(3) And with the general circulation not explained, attempts
to explain climate change in terms of shifts of the pattern were less
science than story-telling.
| The solution would come by taking the problem from the other end.
Instead of starting with grand equations for the planet as a whole,
one might seek to find how the circulation pattern was built up from
the local weather at thousands of points. But the physics of local
weather was also a formidable problem.
| Early in the 20th century a Norwegian meteorologist, Vilhelm Bjerknes,
argued that weather forecasts could be calculated from the basic physics
of the atmosphere. He developed a set of seven "primitive equations"
describing the behavior of heat, air motion, and moisture. The solution
of the set of equations would, in principle, describe and predict
large-scale atmospheric motions. Bjerknes proposed a "graphical calculus,"
based on weather maps, for solving the equations. His methods were
used and developed until the 1950s, but the slow speed of the graphical
calculation methods sharply limited their success in forecasting.
Besides, there were not enough accurate observational data to begin with.
| In 1922, the British mathematician and physicist Lewis Fry Richardson
published a more complete numerical system for weather prediction.
His idea was to divide up a territory into a grid of cells, each with
its own set of numbers describing its air pressure, temperature, and
the like, as measured at a given hour. He would then solve the equations
that told how air behaved (using a method that mathematicians called
finite difference solutions of differential equations). He could calculate
wind speed and direction, for example, from the difference in pressure
between two adjacent cells. These techniques were basically what computer
modelers would eventually employ. Richardson used simplified versions
of Bjerknes's "primitive equations," reducing the necessary arithmetic
computations to a level where working out solutions by hand seemed
feasible. Even so, "the scheme is complicated," he admitted,
"because the atmosphere itself is complicated."
| The number of required computations was so great that Richardson
scarcely hoped his idea could lead to practical weather forecasting.
Even if someone assembled a "forecast-factory" employing tens of thousands
of clerks with mechanical calculators, he doubted they would be able
to compute weather faster than it actually happens. But if he could
make a model of a typical weather pattern, it could show meteorologists
how the weather worked.
|So Richardson attempted to compute how
the weather over Western Europe had developed during a single eight-hour
period, starting with the data for a day when scientists had coordinated
balloon-launchings to measure the atmosphere simultaneously at various
levels. The effort cost him six weeks of pencil-work. Perhaps never
has such a large and significant set of calculations been carried
out under more arduous conditions: a convinced pacifist, Richardson
had volunteered to serve as an ambulance-driver on the Western Front.
He did his arithmetic as a relief from the surroundings of battle
chaos and dreadful wounds.
|The work ended in complete failure. At the center of Richardson's
simulacrum of Europe, the computed barometric pressure climbed far
above anything ever observed in the real world. "Perhaps some day
in the dim future it will be possible to advance the calculations
faster than the weather advances," he wrote wistfully. "But that is
a dream."(4) Taking the warning to heart, meteorologists gave up any
hope of numerical modeling.(5)
| Numerical Weather Prediction (1945-1955)
| The alternative to the failed numerical approach
was to keep trying to find a solution in terms of mathematical functions:
a few pages of equations that an expert might comprehend as
easily as a musician reads music. Through the 1950s, some leading
meteorologists tried a variety of such approaches, working with simplified
forms of the primitive equations that described the entire global atmosphere.
They managed to get mathematical models that reproduced some features
of atmospheric layers, but they were never able to convincingly show
the features of the general circulation, not even something
as simple and important as the trade winds. The proposed solutions
had instabilities. They left out eddies and other features that evidently
played crucial roles. In short, the real atmosphere was too complex
to pin down in a few hundred lines of mathematics. "There is very
little hope," climatologist Bert Bolin declared in 1952, "for the possibility of deducing a theory for the general
circulation of the atmosphere from the complete hydrodynamic and thermodynamic equations."(6)
| That threw people back on Richardson's program
of numerical computation. What had been hopeless with pencil and paper
might possibly be made to work with the new digital computers. A handful
of extraordinary machines, feverishly developed during the Second
World War to break enemy codes and to calculate atomic bomb explosions,
were leaping ahead in power as the Cold War demanded ever more calculations.
In the lead, energetically devising ways to simulate nuclear weapons
explosions, was the Princeton mathematician John von Neumann. Von
Neumann saw parallels between his explosion simulations and weather
prediction (both are problems of non-linear fluid dynamics). In 1946,
soon after the pioneering computer ENIAC became operational, he began
to advocate using computers for numerical weather prediction.(7)
| This was a subject of
keen interest to everyone, but particularly to the military services,
who well knew how battles could turn on the weather. Von Neumann,
as a committed foe of Communism and a key member of the American national
security establishment, was also concerned about the prospect of "climatological
warfare." It seemed likely that the U.S. or the Soviet Union could
learn to manipulate weather so as to harm their enemies.
| Under grants from the Weather Bureau, the
Navy, and the Air Force, von Neumann assembled a small group of theoretical
meteorologists at Princeton's Institute for Advanced Study. (Initially
the group was at the Army's Aberdeen Proving Grounds, and later it
also got support from the U.S. Atomic Energy Commission.) If regional
weather prediction proved feasible, the group planned to move on to
the extremely ambitious problem of modeling the entire global atmosphere.
Von Neumann invited Jule Charney, an energetic and visionary meteorologist,
to head the new Meteorology Group. Charney came from Carl-Gustaf Rossby's
pioneering meteorology department at the University of Chicago, where
the study of weather maps and fluids had developed a toolkit of sophisticated
mathematical techniques and an intuitive grasp of basic weather processes.
| Richardson's equations were the necessary
starting-point, but Charney had to simplify them if he hoped to run
large-scale calculations in weeks rather than centuries. Solutions
for the atmosphere equations were only too complete. They even included
sound waves (random pressure oscillations, amplified through the computations,
were a main reason Richardson's heroic attempt had failed). Charney
explained that it would be necessary to "filter out" these unwanted
solutions, as one might use an electronic filter to remove noise from
a signal, but mathematically.
|Charney began with a set of simplified equations
that described the flow of air along a narrow band of latitude. By
1949, his group had results that looked fairly realistic: sets
of numbers that you could almost mistake for real weather diagrams,
if you didn't look too closely. In one characteristic experiment,
they modeled the effects of a large mountain range on the air flow
across a continent. Modeling was taking the first steps toward the
computer games that would come a generation later, in which the player
acts as a god: raise up a mountain range and see what happens! Soon
the group proceeded to fully three-dimensional models for a region.(8)
| All this was based on a few equations that could be written on
one sheet of paper. It would be decades before people began to argue
that modelers were creating an entirely new kind of science; to Charney,
it was just an extension of normal theoretical analysis. "By reducing
the mathematical difficulties involved in carrying a train of physical
thought to its logical conclusion," he wrote, "the machine will give
a greater scope to the making and testing of physical hypotheses."
Yet in fact he was not using the computer just as a sort of giant
calculator representing equations. With hindsight we can see that
computer models conveyed insights in a way that could not come from
physics theory, nor a laboratory setup, nor the data on a weather
map, but in an altogether new way.(9)
| The big challenge was still what it had been in the traditional
style of physics theory: to combine and simplify equations until you
got formulas that gave sensible results with a feasible amount of computation.
To be sure, the new equipment could handle an unprecedented volume
of computations. However, the most famous computers of the 1940s and
1950s were dead slow by comparison with a simple laptop computer of
later years. Moreover, a team had to spend a good part of its time
just fixing the frequent breakdowns. A clever system of computation
could be as helpful as a computer that ran five times faster. Developing
usable combinations and approximations of meteorological variables
took countless hours of work, and a rare combination of mathematical
ingenuity and physical insight. And that was only the beginning.
| To know when you were getting close to a realistic model, you had
to compare your results with the actual atmosphere. To do that you
would need an unprecedented number of measurements of temperature,
moisture, wind speed, and so forth for a large region, indeed
for the whole planet, if you wanted to check a global model. During
the war and after, networks had been established to send up thousands
of balloons that radioed back measurements of the upper air. This
was largely to meet military needs, and later to help civilian aviation.
For the first time the atmosphere was seen not as a single layer,
as represented by a surface map, but in its full three dimensions.
By the 1950s, the weather over continental areas, up to the lower
stratosphere, was being mapped well enough for comparison with results
from rudimentary models.(10)
|The first serious weather simulation that Charney's team completed
was two-dimensional. They ran it on the ENIAC in 1950. Their model,
like Richardson's, divided the atmosphere into a grid of cells; it
covered North America with 270 points about 700 km apart. Starting
with real weather data for a particular day, the computer solved all
the equations for how the air should respond to the differences in
conditions between each pair of adjacent cells. Taking the outcome
as a new set of weather data, it stepped forward in time (using a
step of three hours) and computed all the cells again. The authors
remarked that between each run it took them so long to print and sort
punched cards that "the calculation time for a 24-hour forecast was
about 24 hours, that is, we were just able to keep pace with the weather."
The resulting forecasts were far from perfect, but they turned up
enough features of what the weather had actually done on the chosen
day to justify pushing forward.(11)
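| The march-forward-in-time procedure can be suggested in a few lines of Python (a toy that advects a single "weather" bump along one dimension with an upwind finite difference; the wind speed and the scheme are invented for illustration, not the ENIAC model's actual barotropic vorticity equations):

```python
import numpy as np

# Toy illustration of marching a gridded field forward in time, in the
# spirit of the 1950 ENIAC runs (which solved the barotropic vorticity
# equation; this 1-D advection stand-in is purely illustrative).

nx, dx = 40, 700_000.0      # 40 cells about 700 km apart
c = 20.0                    # assumed uniform westerly wind, m/s
dt = 3 * 3600.0             # a three-hour time step, as in 1950
assert c * dt / dx <= 1.0   # stability (CFL) condition

# Initial "weather": a single pressure-like bump on the grid
field = np.exp(-0.5 * ((np.arange(nx) - 10) / 3.0) ** 2)

for step in range(8):       # 8 steps of 3 h = a 24-hour "forecast"
    # Upwind finite difference: each cell is updated from the difference
    # between itself and its upwind neighbor; the outcome then serves as
    # the starting data for the next step.
    field = field - (c * dt / dx) * (field - np.roll(field, 1))

print(np.round(field, 2))   # the bump has drifted downstream
```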
| The Weather Bureau and units of the armed forces established a
Joint Numerical Weather Prediction Unit, which in May 1955 began issuing
real-time forecasts in advance of the weather.(12) They were not the first: since December
1954 a meteorology group at the University of Stockholm had been delivering
forecasts to the Royal Swedish Air Force Weather Service, sometimes
boasting better accuracy than traditional methods.(13)
At their best, these models could give fairly good forecasts up to
three days ahead. Yet with the limited computing power available,
they had to use simplifying assumptions, not the full "primitive
equations" of Bjerknes and Richardson. Even with far faster computers,
the teams would have been limited by their ignorance about many features
of weather, such as how clouds are formed. It would be well over a
decade before the accuracy of computer forecasts began to reliably
outstrip the subjective guesswork of experienced human forecasters.(14)
|These early forecasting models were regional, not global in scale.
Calculations for numerical weather prediction were limited to what
could be managed in a few hours by the rudimentary digital computers:
banks of thousands of glowing vacuum tubes that frequently
burned out, connected by spaghetti-tangles of wiring. Real-time weather
forecasting was also limited by the fact that a computation had to
start off with data that described the actual weather at a given hour
at every point in a broad region. That was always far from perfect,
for the instruments that measured weather were often far apart and
none too reliable. Besides, the weather had already changed by the
time you could bring the data together and convert it to a digital
form that the computers could chew on. It was not for practical weather prediction
that meteorologists wanted to push on to model the entire general
circulation of the global atmosphere.
|The scientists could justify the expense by claiming that their work might eventually show how to alter a region’s climate for better or worse, as in von Neumann's project of climatological warfare. Perhaps some of them also hoped to learn what had caused the climate changes known from the past, back to the great Ice Ages. Some historians believed that past civilizations had collapsed because of climate changes, and it might be worth knowing about that for future centuries. But for the foreseeable future the scientists' interest was primarily theoretical: a hope of understanding at last how the climate system worked.
|That was a fundamentally different type of problem from forecasting.
Weather prediction is what physicists and mathematicians call an "initial
value" problem, where you start with the particular set of conditions
found at one moment and compute how the system evolves, getting less
and less accurate results as you push forward in time. Calculating
the climate is a "boundary value" problem, where you define
a set of unchanging conditions, the physics of air and sunlight and
the geography of mountains and oceans, and compute the unchanging
average of the weather that these conditions determine. To see how
climate might change, modelers would eventually have to combine these
two approaches, but that would have to wait until they could compute
something resembling the present average climate. That computation became a holy grail for theoretical meteorologists.
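| The distinction can be seen in a toy computation (this uses Edward Lorenz's famous three-variable chaotic system with its conventional parameters, not a climate model): two forecasts from nearly identical initial states soon diverge completely, yet their long-run averages, the "climate" fixed by the unchanging parameters, agree closely.

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of the Lorenz system (crude but adequate here).
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def run(s0, nsteps):
    s = np.array(s0, dtype=float)
    traj = np.empty((nsteps, 3))
    for i in range(nsteps):
        s = lorenz_step(s)
        traj[i] = s
    return traj

traj_a = run([1.0, 1.0, 1.0], 200_000)
traj_b = run([1.0, 1.0, 1.000001], 200_000)   # almost the same start

# Initial-value problem: the two "forecasts" soon diverge completely.
print(np.abs(traj_a[50_000] - traj_b[50_000]))   # already far apart

# Boundary-value problem: the long-run averages (the "climate") agree,
# because they are set by the fixed parameters, not the starting point.
print(traj_a[:, 2].mean(), traj_b[:, 2].mean())
```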
| The First General Circulation Models (1955-1965)
| Norman Phillips in Princeton took up the
challenge. He was encouraged by "dishpan" experiments carried out
in Chicago, where patterns resembling weather had been modeled in
a rotating pan of water that was heated at the edge. For Phillips
this proved that "at least the gross features of the general circulation
of the atmosphere can be predicted without having to specify the heating
and cooling in great detail." If such an elementary laboratory system
could model a hemisphere of the atmosphere, shouldn't a computer be
able to do as well? To be sure, the computer at Phillips's disposal
was as primitive as the dishpan (its RAM held all of five kilobytes
of memory and its magnetic drum storage unit held ten). So his model
had to be extremely simple. By mid-1955 Phillips had developed improved
equations for a two-layer atmosphere. To avoid mathematical complexities,
his grid covered not a hemisphere but a cylinder, 17 cells high and
16 in circumference. He drove circulation by putting heat into the
lower half, somewhat like the dishpan experimenters, only with numbers
rather than an electrical coil. The calculations turned out a plausible
jet stream and the evolution of a realistic-looking weather disturbance
over as long as a month.
| This settled an old
controversy over what processes built the pattern of circulation.
For the first time scientists could see, among other things, how giant
eddies spinning through the atmosphere played a key role in moving
energy and momentum from place to place. Phillips's model was quickly
hailed as a "classic experiment": the first true General Circulation Model (GCM).(15)
|Von Neumann immediately called a conference
to publicize Phillips's triumph, drumming up government funding for
a long-term project. The effort got underway that same year, 1955,
under the direction of Joseph Smagorinsky at the U.S. Weather Bureau
near Washington, DC. Smagorinsky's goal was the one first envisaged
by von Neumann and Charney: a general circulation model of the entire
three-dimensional global atmosphere built directly from the primitive
equations.(16) In 1958, Smagorinsky invited Syukuro
("Suki") Manabe to join the lab. Manabe was one of a group of young
men who had studied physics at Tokyo University in the difficult years
following the end of the Second World War. These ambitious and independent-minded
students had few opportunities for advancement in Japan, and several
wound up as meteorologists in the United States. With Smagorinsky and others, Manabe built one of the world's most vigorous and long-lasting GCM development programs.
| Smagorinsky and Manabe put into their model how radiation passing through the atmosphere was impeded not only by water vapor but also by ozone and carbon dioxide gas (CO2); they put in how the air exchanged water and heat with simplified ocean, land, and ice surfaces; they put in the way rain fell on the surface and evaporated or ran off in rivers; and much more. Manabe spent many hours in the library
studying such esoteric topics as how various types of soil absorbed
water. The huge complexities of the modeling required contributions
from several others. "This venture has demonstrated to me," Smagorinsky
wrote, "the value if not the necessity of a diverse, imaginative,
and dedicated working group in large research undertakings." As decades passed this necessity would drive the community of researchers to grow by orders of magnitude without ceasing to collaborate closely.
|By 1965 Manabe's group had a reasonably complete three-dimensional global model
that solved the basic equations for an atmosphere divided into nine
levels. This was still highly simplified, with no geography:
land and ocean were blended into a single damp surface, which exchanged
moisture with the air but could not take up heat. Nevertheless, the
way the model moved water vapor around the planet looked gratifyingly
realistic. The printouts showed a stratosphere, a zone of rising air
near the equator (creating the doldrums, a windless zone that becalmed
sailors), a subtropical band of deserts, and so forth. Many details
came out wrong, however.(17)
| From the early 1960s on, modeling work
interacted crucially with fields of geophysics such as hydrology (soil
moisture and runoff), glaciology (ice sheet formation and flow), meteorological
physics (cloud formation and precipitation, exchanges between winds
and waves, and so forth). Studies of local small-scale phenomena,
often stimulated by the needs of modelers, provided basic parameters
for GCMs. Those developments are not covered in these essays.
| In the late 1950s, as
computer power grew and the need for simplifying assumptions diminished,
other scientists around the world began to experiment with
many-leveled models based on the primitive equations of Bjerknes and
Richardson. An outstanding case was the work of Yale Mintz in the
Department of Meteorology of the University of California, Los Angeles (UCLA).
Already in the early 1950s Mintz had been trying to use the temperamental
new computers to understand the circulation of air: "heroic
efforts" (as a student recalled) "during which he orchestrated an
army of student helpers and amateur programmers to feed a prodigious
amount of data through paper tape to SWAC, the earliest computer on
campus."(18) Phillips’s pioneering 1956 paper convinced Mintz
that numerical models would be central to progress in meteorology.
He embarked on an ambitious program (far too ambitious for one junior
professor, grumbled some of his colleagues). Unlike Smagorinsky's
team, Mintz sometimes had to scramble to get access to enough computer
time.(19) But like Smagorinsky, Mintz had the
rare vision and drive necessary to commit himself to a research program
that must take decades to reach its goals. And like Smagorinsky, Mintz
recruited a young Tokyo University graduate, Akio Arakawa, to help
design the mathematical schemes for a general circulation model. In
the first of a number of significant contributions, Arakawa devised
a novel and powerful way to represent the flow of air on a broad scale
without requiring an impossibly large number of computations.
| A supplementary essay on Arakawa's
Computation Device describes his scheme for computing fluid flow,
a good example of how modelers developed important (but sometimes esoteric) techniques.
| From 1961 on, Mintz and Arakawa worked away
at their problem, constructing a series of increasingly sophisticated
GCMs. By 1964 they had produced a climate computed for an entire globe,
with only a two-layer atmosphere but including realistic geography:
the topography of mountain ranges was there, and a rudimentary
treatment of oceans and ice cover. Although the results missed some
features of the real world's climate, the basic wind patterns and
other features came out more or less right. The model, packed with useful techniques, had a powerful
influence on other groups.(20*)
|Arakawa was becoming especially interested
in a problem that was emerging as a main barrier to progress:
accounting for the effects of clouds. The smallest single
cell in a global model that a computer can handle, even today, is
far larger than an individual cumulus cloud. Thus the computer calculates
none of the cloud's details. Models had to get by with a "parameterization," a scheme using a set of numbers (parameters) representing the net behavior of all the clouds in a cell under given conditions. That was tricky. For example, in some of the early models the entire cloud cover "blinked" on and off in a given grid cell as the average value for humidity or the like went slightly above or below a critical threshold. Through the decades, Arakawa and others would spend countless hours developing and exchanging ways to attack the problem of representing clouds correctly.(21)
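| A toy version of the problem (all numbers invented) shows why early threshold schemes made the cloud cover "blink," and how a smoother parameterization avoids it:

```python
import numpy as np

# Toy sketch of a grid-cell cloud parameterization (all numbers invented).
# Early schemes switched the whole cell's cloud cover on or off at a
# critical humidity, making cloudiness "blink"; a smoother ramp avoids that.

def cloud_fraction_blinking(rh, threshold=0.8):
    # The entire cell clouds over the instant relative humidity crosses 80%.
    return 1.0 if rh >= threshold else 0.0

def cloud_fraction_ramp(rh, onset=0.6, saturated=1.0):
    # Cloud cover grows gradually between 60% and 100% relative humidity.
    return float(np.clip((rh - onset) / (saturated - onset), 0.0, 1.0))

for rh in (0.79, 0.80, 0.81):
    print(rh, cloud_fraction_blinking(rh), round(cloud_fraction_ramp(rh), 2))
```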
| Modeling techniques and entire GCMs spread
by a variety of means. In the early days, as Phillips recalled, modelers
had been like "a secret code society." The machine-language computer
programs were "an esoteric art which would be passed on in an apprentice
system."(22) Over the years,
programming languages became more transparent and codes were increasingly
well documented. Yet there were so many subtleties that a real grasp
still required an apprenticeship on a working model. Commonly, a new
modeling group began with some version of another group's model. A
post-doctoral student (especially from the influential UCLA group) might take a job at another institution, bringing
along his old team's computer code. The new team he assembled would
start off working with the old code and then set to modifying it.
Others built new models from scratch. Through the 1960s and 1970s,
important GCM groups emerged at institutions from New York to Australia.
|Americans dominated the field during
the first postwar decades. That was assured by the government funding
that flowed into almost anything related to geophysics, computers,
and other subjects likely to help in the Cold War. The premier group
was Smagorinsky's Weather Bureau unit (renamed the Geophysical Fluid
Dynamics Laboratory in 1963), with Manabe's groundbreaking models.
In 1968, the group moved from the Washington, DC area to Princeton,
and it eventually came under the wing of the U.S. National Oceanic
and Atmospheric Administration. Almost equally influential was
the Mintz-Arakawa group at UCLA. Another major effort got underway
in 1964 at the National Center for Atmospheric Research (NCAR) in
Boulder, Colorado under Warren Washington and yet another Tokyo
University graduate, Akira Kasahara. The framework of their first
model was quite similar to Richardson’s pioneering attempt,
but without the instability that had struck him down, and incorporating
additional features such as the transfer of radiation up and down
through the atmosphere — or rather between the two vertical
layers that were all their computer could handle. Less visible was
a group at RAND Corporation, a defense think-tank in Santa Monica,
California. Their studies, based on the Mintz-Arakawa model, were
driven by the Department of Defense's concern about possibilities
for deliberately changing a region's climate. Although the RAND
results were published only in secret "gray" reports, the work produced
useful techniques that became known to other modelers.(23)
| Many Kinds of Models
| Although the modelers of the 1950s and early 1960s got results
good enough to encourage them to persevere, they were still a long
way from reproducing the details of the Earth's actual circulation
patterns and climate zones. In 1965, a blue-ribbon panel of the U.S.
National Academy of Sciences reported on where GCMs stood that year.
The panel found that the best models (like Mintz-Arakawa and Smagorinsky-Manabe)
calculated simulated atmospheres with gross features "that have some
resemblance to observation." There was still much room for improvement
in converting equations into systems that a computer could work through
within a few weeks. To do much better, the panel concluded, modelers
would need computers that were ten or even a hundred times more powerful.(24)
| Yet even if the computers had been vastly faster, the simulations would
still have been unreliable. For they were running up against that
famous limitation of computers, "garbage in, garbage out." Some sources
of error were known but hard to drive out, such as getting the right
parameters for factors like convection in clouds. To diagnose the
failings that kept GCMs from being more realistic, scientists needed
an intensified effort to collect and analyze aerological data:
the actual profiles of wind, heat, moisture, and so forth, at every
level of the atmosphere and all around the globe. The data in hand
were still deeply insufficient. Continent-scale weather patterns had
been systematically recorded only for the Northern Hemisphere's temperate
and arctic regions and only since the 1940s. Through the 1960s, the
actual state of the entire general circulation remained unclear. For
example, the leisurely vertical movements of air had not been measured
at all, so the large-scale circulation could only be inferred from
the horizontal winds. As for the atmosphere's crucial water and energy
balances, one expert estimated that the commonly used numbers might
be off by as much as 50%.(25) Smagorinsky put the problem
succinctly in 1969: "We are now getting to the point where the dispersion
of simulation results is comparable to the uncertainty of establishing
the actual atmospheric structure."(26)
| In the absence of a good match between atmospheric
data and GCM calculations, many researchers continued through the
1960s to experiment with simple models for climate change. A few equations
and some hand-waving gave a variety of fairly plausible descriptions
for how one or another factor might cause an ice age or global warming.
There was no way to tell which of these models was correct, if any.
As for the present circulation of the atmosphere, some continued to
work on pencil-and-paper mathematical models that would represent
the planet's shell of air with a few fundamental physics equations,
seeking an analytic solution that would bypass the innumerable mindless
computer operations. They made little headway. In 1967, Edward Lorenz,
an MIT professor of meteorology, cautioned that "even the trade winds
and the prevailing westerlies at sea level are not completely explained."
Another expert more bluntly described where things stood for an explanation
of the general circulation: "none exists." Lorenz and a few others
began to suspect that the problem was not merely difficult, but impossible
in principle. Climate was apparently not a well-defined system, but
only an average of the ever-changing jumble of daily thunderstorms
and storm fronts.(27)
| Would computer modelers ever be able to say they had "explained"
the general circulation? Many scientists looked askance at the new
method of numerical simulation as it crept into more and more fields
of research. This was not theory, and it was not observation either;
it was off in some odd new country of its own. People were attacking
many kinds of scientific problems by taking a set of basic equations,
running them through hundreds of thousands of computations, and publishing
a result that claimed to reflect reality. Their results, however,
were simply stacks of printout with rows of numbers. That was no "explanation"
in the traditional sense of a model in words or diagrams or equations,
something you could write down on a few pages, something your brain
could grasp intuitively as a whole. The numerical approach "yields
little insight," Lorenz complained. "The computed numbers are not
only processed like data but they look like data, and a study of them
may be no more enlightening than a study of real meteorological observations."(28)
| Yet a computer modeler could "experiment" in a sense, by varying
the parameters and features of a numerical model. You couldn't put
a planet on a laboratory bench and vary the sunlight or the way clouds
were formed, but wasn't playing with computer models functionally
equivalent? In this fashion you could make a sort of "observation"
of almost anything, for example, the effect of changing the amount of moisture or CO2 in the atmosphere. Through many such trials you might eventually come to understand
how the real world operated. Indeed you might be able to observe the
planet more clearly in graphs printed out from a model than in the
clutter of real-world observations, so woefully inaccurate and incomplete.
As one scientist put it, "in many instances large-scale features predicted
by these models are beyond our intuition or our capability to measure
in the real atmosphere and oceans."(29)
| Sophisticated computer
models were gradually displacing the traditional hand-waving models
where each scientist championed some particular single "cause" of
climate change. Such models had failed to come anywhere near to explaining
even the simplest features of the Earth's climate, let alone predicting
how it might change. A new viewpoint was spreading along with digital
computing. Climate was not regulated by any single cause, the modelers
said, but was the outcome of a staggeringly intricate complex of interactions,
which could only be comprehended in the working-through of the numbers.
| GCMs were not the only way to approach this problem. Scientists
were developing a rich variety of computer models, for there were
many ways to slice up the total number of arithmetic operations that
a computer could run through in whatever time you could afford to
pay for. You could divide up the geography into numerous cells, each
with numerous layers of atmosphere; you could divide up the time into
many small steps, and work out the dynamics of air masses in a refined
way; you could make complex calculations of the transfer of radiation
through the air; you could construct detailed models for surface effects
such as evaporation and snow cover... but you could not do all these
at once. Different models intended for different purposes made different compromises.
|One example was the work of Julian Adem in Mexico City, who
sought a practical way to predict climate anomalies a few months
ahead. He built a model that had low geographical resolution but
incorporated a large number of land and ocean processes. John Green
in London pursued a wholly different line of attack, aimed at shorter-term
weather prediction. His analysis concentrated on the actions of
large eddies in the atmosphere and was confined to idealized mathematical
equations. It proved useful to computer modelers who had to devise
numerical approximations for the effects of the eddies. Other groups
chose to model the atmosphere in one or two dimensions rather than
all three.(30) The decisions such people made in choosing an approach
involved more than computer time. They also had to allocate another
commodity in short supply: the time they could spend thinking.
|This essay does not cover the entire range of models, but concentrates
on those which contributed most directly to greenhouse effect studies.
For models in one or two dimensions, see the article on Basic Radiation Calculations.
|None of the concepts of the 1960s inspired
confidence. The modelers were missing some essential physics, and
their computers were too slow to perform the millions of computations
needed for a satisfactory solution. But as one scientist explained,
where the physics was lacking, computers could do schematic "numerical
experiments" directed toward revealing it.(31) By the time modelers got their equations and parameters
right, surely not many years off, the computers would have grown faster
by another order of magnitude or so and would be able to handle the
necessary computations. In 1970, a report on environmental problems
by a panel of top experts declared that work on computer models was
"indispensable" for progress in the study of climate change.(32*)
| The growing community of climate modelers
was strengthened by the advance of computer systems that carried
out detailed calculations on short timescales for weather prediction.
This progress required much work on parameterization schemes
for representing cloud formation, interactions between waves and winds,
and so forth. Such studies accelerated as the 1970s began.(33) The weather forecasting models also required data on conditions
at every level of the atmosphere at thousands of points around the
world. Such observations were now being provided by the balloons and
sounding rockets of an international World Weather Watch, founded
in the mid 1960s. The volume of data was so great that computers had
to be pressed into service to compile the measurements. Computers
were also needed to check the measurements for obvious errors (sometimes
several percent of the thousands of observations needed to be adjusted).
Finally, computers would massage the data with various smoothing and
calibration operations to produce a unified set of numbers to feed into calculations. The instrumental systems were increasingly oriented toward producing numbers meaningful to the models, and vice-versa; global data and global models were no longer distinct
entities, but parts of a single system for representing the world.(34) The weather predictions became accurate enough —
looking as far as three days ahead — to be economically important.
That built support for the meteorological measurement networks and
computer studies necessary for climate work.
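| The kind of automatic checking and smoothing described here can be suggested by a toy sketch (the thresholds and observations are invented; real operational systems were far more elaborate):

```python
import numpy as np

# Toy sketch of automatic quality control for incoming observations
# (thresholds and data invented; real systems were far more elaborate).

temps = np.array([21.5, 22.0, 95.0, 21.8, 22.3, -70.0, 22.1])  # deg C

# 1. Gross-error check: flag values outside a plausible range and
#    replace them with the average of their neighbors.
bad = (temps < -40.0) | (temps > 50.0)
clean = temps.copy()
for i in np.where(bad)[0]:
    clean[i] = 0.5 * (clean[i - 1] + clean[(i + 1) % len(clean)])

# 2. Smoothing: a simple three-point running mean to suppress noise
#    before the numbers are fed into a model grid.
kernel = np.array([0.25, 0.5, 0.25])
smoothed = np.convolve(clean, kernel, mode="same")
print(np.round(smoothed, 2))
```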
|An example of the crossover could be found
at NASA's Goddard Institute for Space Studies in New York City. A
group there under James (Jim) Hansen had been developing a weather
model as a practical application of its mission to study the atmospheres
of planets. For one basic component of this model, Hansen developed
a set of equations for the transfer of radiation through the atmosphere,
based on work he had originally done for studies of the planet Venus.
The same equations could be used for a climate model, by combining
them with the elegant method for computing fluid dynamics that Arakawa had devised.
| In the 1970s, Hansen assembled a team to
work up schemes for cloud physics and the like to put into a model
that would be both fast-running and realistic. An example of the kind
of detail they pursued was a simple equation they devised to represent
the reflection of sunlight from snow. They included the age of the
snow layer (as it gradually melted away) and the "masking" by vegetation
(snowy forests are darker than snowy tundra). To do the computations
within a reasonable time, they had to use a grid with cells a thousand
kilometers square, averaging over all the details of weather. Eventually
they managed to get a quite realistic-looking climate. It ran an order
of magnitude faster than some rival GCMs, permitting the group to
experiment with multiple runs, varying one factor or another to see
what changed.(35*) In such
studies, the global climate was beginning to feel to researchers like
a comprehensible physical system, akin to the systems of glassware
and chemicals that experimental scientists manipulated on their laboratory benches.
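| A toy version of such a snow-reflectivity equation might look like the following (this is emphatically not the GISS group's actual formula; every constant is invented, just to illustrate the two effects the text describes, aging and vegetation masking):

```python
import numpy as np

# Toy version of the kind of snow-albedo equation described above
# (NOT the GISS group's actual formula; every constant here is invented).

def snow_albedo(snow_age_days, forest_fraction):
    fresh, aged = 0.85, 0.55          # bright new snow vs old melting snow
    # Reflectivity decays toward the "aged" value as the snow gets older.
    age_factor = np.exp(-snow_age_days / 10.0)
    albedo = aged + (fresh - aged) * age_factor
    # "Masking": trees poking through the snow darken the scene.
    return albedo * (1.0 - 0.5 * forest_fraction)

print(snow_albedo(0, 0.0))    # fresh snow on open tundra: bright
print(snow_albedo(20, 0.8))   # old snow in forest: much darker
```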
| Meanwhile the community of modelers continued to devise more realistic parameters
for various physical processes, and to sharpen their mathematical techniques.
A major innovation that spread during the 1970s took a new approach
to the basic architecture of models. Some groups, instead of dividing
the planet's surface into a grid of thousands of square cells, took
to dividing it into a tier of segments: hemispheres, quadrants,
eighths, sixteenths, etc. ("spherical harmonics"). After doing a calculation
on this abstracted system, they could combine and transform the numbers
back into a geographical map. This "spectral transform" technique
simplified many of the computations, but it was feasible only with
the much faster new computers. For decades afterward, physicists who
specialized in other fields of fluid dynamics were startled when they
saw a climate model that did not divide up the atmosphere into millions
of boxes, but used the refined abstraction of spherical harmonics.
The method worked only because the Earth's atmosphere has an unusual
property for a fluid system: it is in fact quite nearly spherical.
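| A one-dimensional Fourier analogue can suggest the pattern (the real models used spherical harmonics on the sphere; this sketch only shows the transform-to-coefficients, compute, transform-back cycle, and why it beats a plain grid difference for smooth fields):

```python
import numpy as np

# 1-D Fourier analogue of the spectral-transform idea (the real models
# used spherical harmonics on the sphere; this just shows the pattern:
# transform to coefficients, differentiate exactly, transform back).

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
field = np.sin(3 * x) + 0.5 * np.cos(7 * x)   # a smooth periodic "flow"

coeffs = np.fft.rfft(field)                      # to spectral space
k = np.arange(coeffs.size)                       # integer wavenumbers
ddx_spectral = np.fft.irfft(1j * k * coeffs, n)  # exact derivative, back on grid

# Compare with a plain centered finite difference on the grid:
dx = x[1] - x[0]
ddx_grid = (np.roll(field, -1) - np.roll(field, 1)) / (2 * dx)

exact = 3 * np.cos(3 * x) - 3.5 * np.sin(7 * x)
print(np.max(np.abs(ddx_spectral - exact)))  # tiny, near machine precision
print(np.max(np.abs(ddx_grid - exact)))      # noticeably larger
```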
| The new technique was especially prized because it got around the
trouble computers had with the Earth's poles, where all the lines
of longitude converge in a point and the mathematics gets weird.
(The earliest models had avoided the poles altogether and computed
climate on a cylinder, but that wouldn’t take you very far.)
Spherical harmonics did not exhaust the ingenuity of climate modelers.
For example, in the late 1990s, when people had begun to run separate
computations for the atmospheric circulation and the equally important
circulation of ocean currents, many groups introduced new coordinate
schemes for their ocean models. They avoided problems with the North
and South Poles simply by shifting the troublesome convergence points onto a land mass.(36)
|Groups continued to proliferate, borrowing ideas
from earlier models and devising new techniques of their own. Here
as in most fields of science, Europeans had recovered from the war's
devastation and were catching up with the Americans. In particular,
during the mid-1970s a consortium of nations set up a European Centre
for Medium-Range Weather Forecasts and began to contribute to climate
modeling. A "family tree" of relations between leading
models is here.
| Predictions of Warming (1965-1979)
| In their first decade or so of work the GCM modelers had treated climate
as a given, a static condition. They had their hands full just trying
to understand one year's average weather. Typical was a list that
Mintz made in 1965 of possible uses for his and Arakawa's computer
model. Mintz showed an interest mainly in answering basic scientific
questions. He also listed long-range forecasting and "artificial climate
control," but not greenhouse effect warming or other possible
causes of long-term climate change.(38)
|Around this time, however, a few modelers began to take
an interest in global climate change as a problem over the long term.
The discovery that the level of CO2 in the atmosphere
was rising fast prompted hard thinking about greenhouse warming, leading
to conferences and government panels in which GCM experts like Smagorinsky
began to interact with the community of carbon researchers.
Another stimulus was Fritz Möller's discovery in 1963 that simple
models built out of a few equations (the only models available
for long-term climate change) showed grotesque instabilities.
Everyone understood that Möller's model was unrealistic (in fact
it had fundamental flaws). Nevertheless it raised a nagging possibility
that mild perturbations, such as humanity itself might bring about,
could trigger an outright global catastrophe.(40)
|Manabe took up the challenge. He had a long-standing
interest in the effects of CO2, not because he
was worried about the future climate, but simply because the gas at
its current level was a significant factor in the planet's heat balance.
But when Möller visited Manabe and explained his bizarre results,
Manabe decided to look into how the climate system might change. He
and his colleagues were already building a model that took full account
of the movements of heat and water. To get a really sound answer, the
entire atmosphere had to be studied as a tightly interacting system.
In particular, Manabe's group calculated the way rising columns of moisture-laden air conveyed heat from the surface into the upper atmosphere, a crucial part of the system which most prior models had failed to incorporate. The required computations were so extensive, however, that Manabe
stripped down the model to a single one-dimensional column, which
represented the atmosphere averaged over the globe (or in some runs,
averaged over a particular band of latitude). His aim was to get a
system that could be used as a basic building-block for a full three-dimensional GCM.(41)
|In 1967, Manabe and a collaborator, Richard Wetherald, used the
one-dimensional model to test what would happen if the level of CO2
changed. Their target was something that would eventually become a
central preoccupation of modelers: the climate's "sensitivity." Just
how much would temperature be altered when something affected incoming
and outgoing radiation (a change in the Sun's output of sunlight,
say, or a change in CO2)? The method was transparent.
Run a model with one value of the something (say, of CO2
concentration), run it again with a new value, and compare the answers.
Researchers since Arrhenius had pursued this with highly simplified
models. They used as a benchmark the difference if the CO2
level doubled.(42) That not only made comparisons between results easier,
but seemed like a good number to look into. For it seemed likely that
the level would in fact double before the end of the 21st century, thanks to humanity's ever-increasing use of fossil fuels.
The answer Manabe's group came up with was that global temperature
would rise roughly 2°C (around 3-4°F).(43)
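| The arithmetic of such a doubling experiment can be suggested with a much later rule of thumb (the logarithmic forcing approximation of roughly 5.35 ln(C/C0) W/m², published decades afterward, combined with an assumed sensitivity parameter; this is a back-of-envelope stand-in, not the 1967 radiative-convective computation):

```python
import math

# Back-of-envelope version of a "sensitivity" experiment (NOT the 1967
# model). It uses a much later standard approximation for CO2 forcing,
# dF = 5.35 * ln(C/C0) W/m^2, and an assumed sensitivity parameter lam
# (K per W/m^2) standing in for all of a model's feedbacks.

def warming(c_ratio, lam=0.5):
    forcing = 5.35 * math.log(c_ratio)   # radiative forcing, W/m^2
    return lam * forcing                 # temperature change, K

print(round(warming(1.0), 2))   # control run: no change, 0 K
print(round(warming(2.0), 2))   # doubled CO2: about 1.9 K with lam = 0.5
```

That the assumed lam happens to give a figure near Manabe's 2°C is, of course, a choice of illustration, not a derivation.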
|This was the first time a greenhouse warming
computation included enough of the essential factors to seem plausible
to many experts. For instance, Wallace Broecker, who would later play
a major role in climate change studies, recalled that it was the 1967
paper "that convinced me that this was a thing to worry about."(44) The work drew on all the experience and insights accumulated
in the labor to design GCMs, but it was no more than a first baby
step toward a realistic three-dimensional model of the changing climate.
|The next important step was taken in the
late 1960s by Manabe’s group, now at Princeton. Their GCM was
still highly simplified. In place of actual land and ocean geography
they pictured a geometrically neat planet, half damp surface (land)
and half wet (a "swamp" ocean). Worse, they could not predict cloudiness
but just held it unchanged at the present level when they calculated
the warmer planet with doubled CO2. However,
they did incorporate the movements of water, predicting changes in
soil moisture and snow cover on land, and they calculated sea surface
temperatures well enough to show the extent of sea ice. They computed nine atmospheric levels. The results,
published in 1975, looked quite realistic overall.
| The model with increased CO2 had more moisture
in the air, with an intensified hydrological cycle of evaporation
and precipitation. That was what physicists might have expected for
a warmer atmosphere on elementary physical grounds (if they had thought
about it, which few had). Actually, with so many complex interactions
between soil moisture, cloudiness, and so forth, a simple argument
could be in error. It took the model computation to show that this
accelerated cycle really could happen, as hot soil dried out in one
region and more rain came down elsewhere. The Manabe-Wetherald model
also showed greater warming in the Arctic than in the tropics. This
too could be predicted from simple reasoning. Not only did a more
active circulation carry more heat poleward, but less snow and ice
meant more absorption of sunlight by ground and sea. Again it took
a calculation to show that what sounded reasonable on elementary principles
would indeed happen in the real world (or at least in a reasonable simulation of it).(45*)
| Averaged over the entire planet, for doubled CO2 the
computer predicted a warming of around 3.5°C. It all looked plausible.
The results made a considerable impact on scientists, and through
them on policy-makers and the public.
| Manabe and Wetherald warned that "it is not advisable to take too
seriously" the specific numbers they published.(46) They singled out the way the model treated the oceans as
a simple wet surface. On our actual planet, the oceans absorb large
quantities of heat from the atmosphere, move it around, and release
it elsewhere.(47) Another and more subtle problem was
that Manabe and Wetherald had not actually computed a climate change.
Instead they had run their model twice to compute two equilibrium
states, one with current conditions and one with doubled CO2.
In the real world, the atmosphere would pass through a series of changes
as the level of the gas rose, and there were hints that the model
could end up in different states depending on just what route it took.
| Even if those uncertainties could be cleared
up, there remained the old vexing problem of clouds. As the planet
got warmer the amounts of cloudiness would probably change at each
level of the atmosphere in each zone of latitude, but change how?
There was no reliable way to figure that out. Worse, it was not enough
to have a simple number for cloud cover. Scientists were beginning
to realize that clouds could either tend to cool a region (by reflecting
sunlight) or warm it (by trapping heat radiation from below, especially
at night). The net effect depended on the types of cloud and how high
they floated in the atmosphere. A better prediction of climate change
would have to wait on general improvements.
| Progress was steady, thanks to the headlong
advance of electronic computers. From the mid 1950s to the mid 1970s,
the power available to modelers increased by a factor of thousands.
That meant modelers could put in more factors in more complex ways,
they could divide the planet into more segments to get higher resolution
of geographical features, and they could run models to represent longer
periods of time. The models no longer had gaping holes that required
major innovations, and the work settled into a steady improvement
of existing techniques. At the foundations, modelers devised increasingly
sophisticated and efficient schemes of computation. As input
for the computations they worked endlessly to improve parameterizations
describing various processes. From around 1970 on, many journal articles
appeared with ideas for dealing with convection, evaporation of moisture,
reflection from ice, and so forth.(48)
|The most essential
element for progress, however, was better data on the real world.
Strong efforts were rapidly extending the observing systems. For
example, in 1959 the physicist Lewis Kaplan found an ingenious way
to use measurements of infrared radiation from satellites to find
the temperature at different levels of the atmosphere, all around
the world. During the 1960s satellite data began to provide heat
budgets by zones of latitude, which gave a measure of transport
of heat toward the poles. "It is a warmer and darker
planet than we previously believed," one report announced.
"More solar energy is being absorbed, primarily in the tropics...
The trend toward departure from the earlier computation studies
of the radiation budget seems irreversible." In 1969 NASA's
Nimbus 3 satellite began to broadcast measurements designed explicitly
to provide a fundamental check on model results. The reflection
of sunlight at each latitude from Manabe's 1975 model planet agreed
pretty well with the actual numbers for the Earth, as measured by
Nimbus 3.(48a)
|Also encouraging was a 1972 model by Mintz and Arakawa (unpublished,
like much of their work), which managed to simulate in a rough way
the huge changes as the sunlight shifted from season to season. During
the next few years, Manabe and collaborators published a model that
produced entirely plausible seasonal variations. To modelers, the
main point of such work was gaining insight into the dynamics of climate
through close inspection of their printouts. (They could study, for
example, just what role the ocean surface temperature played in driving
the tropical rain belt from one hemisphere to the other as the seasons
changed.) To everyone else, seasons were a convincing test of the
models' validity. It was almost as if a single model worked for two
quite different planets: the planets Summer and Winter. A 1975
review panel felt that with this success, realistic numerical climate
models "may be considered to have begun."(49*)
|Yet basic problems such as predicting cloudiness remained unsolved, while new difficulties rose into view. For example,
scientists began to realize that the way clouds formed, and therefore
how much they helped to warm or cool a region, could be strongly affected
by the haze of dust and chemical particles floating in the atmosphere.
Little was known about how these aerosols helped or hindered the formation
of different types of clouds. Another surprise came when two scientists
pointed out that the reflectivity of clouds and snow depends on the
angle of the sunlight, and in polar regions the Sun always struck
at a low angle.(50) Figuring how sunlight might warm an
ice cap was as complicated as the countless peculiar forms taken by
snow and ice themselves. Little of this had been explored through
physics theory. Nor had it been measured in the field, for it was
only gradually that model-makers realized how much they suffered from
the absence of reliable measurements of the parameters they needed
to describe the action of dust particles, snow surfaces, and so forth.
Overall, as Smagorinsky remarked in 1972, modelers still needed "to
meet standards of simulation fidelity considerably beyond our present [capabilities]."(51)
| Modelers felt driven
to do better, for people had begun to demand much more than a crude
reproduction of the present climate. Weather disasters and the energy
crisis of the early 1970s had put greenhouse warming on the public
agenda. It was now a matter of concern to citizens (or at least the more scientifically well-informed citizens) whether the computer models were correct in
their predictions of how CO2 emissions would
raise global temperatures. Newspapers reported disagreements among
prominent scientists. Some experts suspected that factors overlooked
in the models might keep the climate system from warming at all, or
might even bring on cooling instead. "Meteorologists still hold out
global modeling as the best hope for achieving climate prediction,"
a senior scientist observed in 1977. "However, optimism has been replaced
by a sober realization that the problem is enormously complex."(52*)
| The problem was so vexing that the President's
Science Adviser (who happened to be a geophysicist) asked the National
Academy of Sciences to study the issue. The Academy appointed a panel,
chaired by Jule Charney and including other respected experts who
had been distant from the recent climate debates. They convened at
Woods Hole in the summer of 1979. Charney’s group compared two
independent GCMs, one constructed by Manabe and the other by Hansen
— elaborate three-dimensional models that used different physical
approaches and different computational methods for many features.
The panel found differences in detail but solid agreement for the
main point: the world would get warmer as CO2 accumulated.
| But might both GCMs share some fundamental
unrecognized flaw? As a basic check, the Charney panel went back to
the models of one-dimensional and two-dimensional slices of atmosphere,
which various groups were using to explore a wider range of possibilities
than the GCMs could handle. These models showed crudely but directly
the effects of adding CO2 to the atmosphere.
All the different approaches, simplified in very different ways, were
in rough overall agreement. They came up with figures that were at
least in the same ballpark for the temperature in an atmosphere with
twice as much CO2 (the level projected for around
the middle of the 21st century).(53*)
| To make their conclusion
more concrete, the Charney panel decided to announce a specific range
of numbers. They argued out among themselves a rough-and-ready compromise.
Hansen's GCM predicted a 4°C rise for doubled CO2,
and Manabe's latest figure was around 2°C. Splitting the difference,
the panel thought it "most probable" that as CO2
reached this level the planet would warm up by about three degrees,
plus or minus fifty percent: in other words, 1.5-4.5°C (2.7-8°F).
They concluded dryly, "We have tried but have been unable to find
any overlooked or underestimated physical effects" that could reduce the projected warming.(54)
| Ocean Circulation and Real Climates
| In the early 1980s, several groups pressed ahead toward more realistic
models. They put in a reasonable facsimile of the Earth's actual geography,
and replaced the wet "swamp" surface with an ocean that could exchange
heat with the atmosphere. Thanks to increased computer power the models
were now able to handle seasonal changes as a matter of course. It
was also reassuring when Hansen's group and others got a decent match
to the rise-fall-rise curve of global temperatures since the late
19th century, once they put in not only the rise of CO2
but also changes in emissions of volcanic dust and solar activity.
| Adding a solar influence was a stretch, for nobody had figured out any
plausible way that the superficial variations seen in numbers of sunspots
could affect climate. To arbitrarily adjust the strength of the presumed
solar influence in order to match the historical temperature curve
was guesswork, dangerously close to fudging. But many scientists suspected
there truly was a solar influence, and adding it did improve the match.
Sometimes a scientist must "march with both feet in the air," assuming
a couple of things at once in order to see whether it all eventually
works out.(55) Reassured
that they might be on the right track, in the 1980s climate modelers
increasingly looked toward the future. When they introduced a doubled
CO2 level into their improved models, they consistently
found the same few degrees of warming.(56*)
| The skeptics were not persuaded. The Charney panel itself had pointed
out that much more work was needed before models would be fully realistic.
The treatment of clouds remained a central uncertainty. Another great
unknown was the influence of the oceans. Back in 1979 the Charney
panel had warned that the oceans' enormous capacity for soaking up
heat could delay an atmospheric temperature rise for decades. Global
warming might not become obvious until all the surface waters had
warmed up, which would be too late to take timely precautions.(57)
This time lag was not revealed by the existing GCMs, for these computed
only equilibrium states. The models, lacking nearly all the necessary
data and thwarted by formidable calculational problems, simply did
not account for the true influence of the oceans.
| Massive international programs of data-gathering were beginning to solve one of the problems. Oceanographers were coming to realize that
large amounts of energy were carried through the seas by a myriad
of whorls of various types, from tiny convection swirls up to sluggish
eddies a thousand kilometers wide. Calculating these whorls, like
calculating all the world's individual clouds, was beyond the reach
of the fastest computer. Again parameters had to be devised to summarize
the main effects, only this time for entities that were far worse
observed and understood than clouds. Modelers could only put in average
numbers to represent the heat that they knew somehow moved vertically
from layer to layer in the seas, and the energy somehow carried from
warm latitudes toward the poles. They suspected that the actual behavior
of the oceans might work out quite differently from their models.
And even with the simplifications, to get anything halfway realistic
required a vast number of computations, indeed more than for the atmosphere.
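| To see concretely what such a parameterization looks like, here is a minimal sketch in Python. The unresolved whorls are replaced by a single average "eddy diffusivity" that mixes heat between ocean layers; every number here is a made-up stand-in, not a value from any actual model.

    # Sketch of a diffusive parameterization: unresolved eddies are
    # summarized by one average mixing coefficient. All numbers are
    # hypothetical stand-ins, not values from any real ocean model.
    KAPPA = 1e-4     # vertical eddy diffusivity, m^2/s (assumed)
    DZ = 100.0       # layer thickness, meters
    DT = 6000.0      # time step, seconds (100 minutes)

    def diffuse(column):
        """One explicit diffusion step over a column of layer temperatures."""
        new = column[:]
        for i in range(1, len(column) - 1):
            new[i] += KAPPA * DT / DZ**2 * (
                column[i - 1] - 2 * column[i] + column[i + 1])
        return new

    layers = [20.0, 10.0, 6.0, 4.0, 3.0]   # warm surface, cold depths
    for _ in range(1000):
        layers = diffuse(layers)
    print(layers)   # the temperature profile slowly smooths out

The real models did essentially this, with the coefficient set by educated guesswork; that was exactly the weakness the modelers themselves worried about.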
|Manabe was keenly aware that if the Earth's future climate were
ever to be predicted, it was "essential to construct a realistic
model of the joint ocean-atmosphere system."(58) He shouldered the task in collaboration
with Kirk Bryan, an oceanographer with meteorological training,
who had been brought into the group back in 1961 to build a stand-alone
numerical model of the circulation of an ocean. The two got together to construct a
computational system that coupled together their separate models.
Manabe's winds and rain would help drive Bryan's ocean currents,
while in return Bryan's sea-surface temperatures and evaporation
would help drive the circulation of Manabe's atmosphere. At first
they tried to divide the work: Manabe would handle matters from
the ocean surface upward, while Bryan would take care of what lay
below. But they found things just didn't work that way for studying
a coupled system. They moved into one another's territory, aided
by a friendly personal relationship.
|Bryan and Manabe were the first to put together in one package approximate
calculations for a wide variety of important features. They not only
incorporated both oceans and atmosphere, but added into the bargain
feedbacks from changes in sea ice and a detailed
scheme that represented, region by region, how moisture built up in
the soil, evaporated, or ran off in rivers to the sea.
| Their big problem was that from a standing start it took several
centuries of simulated time for an ocean model to settle into a realistic
state. After all, that was how long it would take the surface currents
of the real ocean to establish themselves from a random starting-point.
The atmosphere, however, readjusts itself in a matter of weeks. After
about 50,000 time steps of ten minutes each, Manabe's model atmosphere
would approach equilibrium. The team could not conceivably afford
the computer time to pace the oceans through decades in ten-minute
steps. Their costly Univac 1108, a supercomputer by the standards
of the time, needed 45 minutes to compute the atmosphere through a
single day. Bryan's ocean could use longer time steps, say a hundred
minutes, but the simulated currents would not even begin to settle
down until millions of these steps had passed.
| The key to their success was a neat trick for matching the different
timescales. They ran their ocean model with its long time steps through
twelve days. They ran the atmosphere model with its short time-steps
through three hours. Then they coupled the atmosphere and ocean to
exchange heat and moisture. Back to the ocean for another twelve days,
and so forth. They left out seasons, using average annual sunlight
to drive the system.
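| A minimal sketch of this asynchronous coupling scheme shows the bookkeeping involved. The "models" below are toy single-number stand-ins for the real physics; only the coupling cadence follows the description above.

    # Toy sketch of Bryan and Manabe's timescale-matching trick. Each
    # "model" is one temperature relaxing toward the air-sea interface;
    # all physics and numbers are invented stand-ins.
    OCEAN_DT = 100               # ocean time step, simulated minutes
    ATMOS_DT = 10                # atmosphere time step, simulated minutes
    OCEAN_WINDOW = 12 * 24 * 60  # ocean runs 12 simulated days per cycle
    ATMOS_WINDOW = 3 * 60        # atmosphere runs 3 simulated hours per cycle

    class ToyModel:
        def __init__(self, temperature, relaxation):
            self.t = temperature   # degrees C, the entire "state"
            self.k = relaxation    # fraction of the gap closed per minute

        def step(self, boundary_t, dt):
            # Relax toward the temperature across the air-sea interface.
            self.t += self.k * (boundary_t - self.t) * dt

    def run_coupled(ocean, atmosphere, n_cycles):
        for _ in range(n_cycles):
            for _ in range(OCEAN_WINDOW // OCEAN_DT):   # 12 "days" of ocean
                ocean.step(atmosphere.t, OCEAN_DT)
            for _ in range(ATMOS_WINDOW // ATMOS_DT):   # 3 "hours" of air
                atmosphere.step(ocean.t, ATMOS_DT)

    ocean = ToyModel(temperature=5.0, relaxation=1e-6)        # sluggish
    atmosphere = ToyModel(temperature=15.0, relaxation=1e-3)  # fast
    run_coupled(ocean, atmosphere, n_cycles=1000)
    print(ocean.t, atmosphere.t)   # the two states slowly converge

The point of the trick is visible in the loop counts: each pass grants the slow ocean nearly a hundred times more simulated time than the fast atmosphere, at comparable computing cost.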
| Manabe and Bryan were confident enough of their model to undertake
a heroic computer run, some 1100 hours long (more than 12 full days
of computer time devoted to the atmosphere and 33 to the ocean). In
1969, they published the results in an unusually short paper, as Manabe
recalled long afterward, "and still I am very proud of it."(59)
| Bryan wrote modestly at the time that "in
one sense the... experiment is a failure." For even after a simulated
century, the deep ocean circulation had not nearly reached equilibrium.
It was not clear what the final climate solution would look like.(60) Yet it was a great success just to
carry through a linked ocean-atmosphere computation that was at least
starting to settle into equilibrium. The result looked like a real
planet, though not our Earth: in place of geography there was only
a radically simplified geometrical sketch, but it was in its way realistic.
It was obviously only a first draft with many details wrong, yet there
were ocean currents, trade winds, deserts, rain belts, and snow cover,
all in roughly the right places. Unlike our actual Earth, so poorly
observed, in the simulation one could see every detail of how air,
water, and energy moved about.
|Following up, in 1975 Manabe and Bryan
published results from the first coupled ocean-atmosphere GCM that
had a roughly Earth-like geography. Looking at their crude map,
one could make out continents like North America and Australia,
although not smaller features like Japan or Italy. The supercomputer
ran for fifty straight days, simulating movements of air and sea
over nearly three centuries. "The climate that emerges," they wrote,
"includes some of the basic features of the actual climate."
For example, it showed the Sahara and the American Southwest as
deserts, but plenty of rain in the Pacific Northwest and Brazil.
Manabe and Bryan had not shaped their equations deliberately to
bring forth such features. These were "emergent features,"
emerging spontaneously out of the computations. The computer’s
output looked roughly like the actual climate only because the modelers
had succeeded in roughly representing the actual operations of the
atmosphere upon the Earth’s geography.
|"However," Manabe and Bryan admitted, their model had
"many unrealistic features." For example, it still failed
to show the full oceanic circulation. After all, the inputs had not
been very realistic — for one thing, the modelers had not put
in the seasonal changes of sunlight. Still, the results were getting
close enough to reality to encourage them to push ahead.(61) By 1979, they had mobilized enough
computer power to run their model through more than a millennium while incorporating the seasonal changes of sunlight.
| Meanwhile the team headed by Warren Washington at NCAR in Colorado
developed another ocean model, based on Bryan's, and coupled it to
their own quite different GCM. Since they had begun with Bryan's ocean
model it was not surprising that their results resembled Manabe and
Bryan's, but it was still a gratifying confirmation. Again the patterns
of air temperature, ocean salinity, and so forth came out roughly
correct overall, albeit with noticeable deviations from the real planet,
such as tropics that were too cold. As Washington's team admitted
in 1980, the work "must be described as preliminary."(63) Through the 1980s, these and other teams continued to refine
coupled models, occasionally checking how they reacted to increased
levels of CO2. These were not so much attempts
to predict the real climate as experiments to work out methods for simulating the coupled system.
|The results, for all their limitations, said something about the predictions
of the atmosphere-only GCMs. As the Charney panel had pointed out,
the oceans would delay the appearance of global warming for decades
by soaking up heat. Hansen's group therefore warned in 1985 that a
policy of "wait and see" might be wrongheaded. A mild temperature
rise in the atmosphere might not become apparent until much worse
greenhouse warming was inevitable. Also as expected, complex feedbacks
showed up in the ocean circulation, influencing just how the weather
would change in a given region. Aside from that, including a somewhat
realistic ocean did not turn up anything that would alter the basic
prediction of future warming. Once again it was found that simple
models had pointed in the right direction.(64)
|A few of the calculations
showed a disturbing new feature: a possibility that the ocean
circulation was fragile. Signs of rapid past changes in circulation
had been showing up in ice cores and other evidence that had set oceanographers
to speculating. In 1985, Bryan and a collaborator tried out a coupled
atmosphere-ocean model with a CO2 level four
times higher than at present. They found signs that the world-spanning
"thermohaline" circulation, where differences in heat and salinity
drove a vast overturning of sea water in the North Atlantic, could
come to a halt. Three years later Manabe and another collaborator
produced a simulation in which, even at present CO2
levels, the ocean-atmosphere system could settle down in one of two
states: the present one, or a state without the overturning.(66*) Some experts worried that global warming
might indeed shut down the circulation. They feared that halting the steady flow of
warm water into the North Atlantic would bring devastating climate
changes in Europe and perhaps beyond.
Wallace Broecker remarked that the early GCMs had been designed to
come to equilibrium, giving a stability that might be illusory. As
scientists got better at modeling ocean-atmosphere interactions, they
might find that the climate system was liable to switch rapidly from
one state to another. On the other hand, since the cold oceans would
take up heat for many decades before they reached an equilibrium,
a climate that was computed for an atmosphere with doubled CO2
would not show what the planet would look like immediately after a
doubling took place, but only what it would look like many decades
later. Acknowledging these criticisms, Hansen's group and a few others
undertook protracted computer runs to find what would actually happen
while the CO2 level rose. Instead of separately
computing "before" and "after" states, they computed the entire "transient
response," plodding through a century or more simulating from one
day to the next. Hansen's coupled ocean-atmosphere model, which incorporated
the observed rise not only of CO2 but also other greenhouse
gases, plus the historical record of aerosols from volcanic explosions, produced a fair approximation to
the observed global temperature trend of the previous half century.
Pushed into the future, the model showed sustained global warming.
By 1988 Hansen had enough confidence to issue a strong public pronouncement,
warning of an imminent threat.
|This was pushing the state of the art to its limit, however. In 1989 a meeting of climate experts concluded, in a rebuke to Hansen, that an attribution of the recent warming to the greenhouse effect "cannot now be made with any degree of confidence." Most
model groups could barely handle the huge difficulties of constructing
three-dimensional models of both ocean circulation and atmospheric
circulation, let alone link the two together and run the combination
through a century or so.(67)
| Limitations and Critics
| The climate changes that different GCMs computed for doubled CO2, reviewers noted in 1987, "show many quantitative and even qualitative
differences; thus we know that not all of these simulations can be
correct, and perhaps all may be wrong."(68) Skeptics pointed out that GCMs were unable to represent
even the present climate successfully from first principles. Anything
slightly unrealistic in the initial data or equations could be amplified
a little at each step, and after thousands of steps the entire result
usually veered off into something impossible. To get around this,
the modelers had kept one eye over their shoulder at the real world.
They adjusted various parameters (for example, the numbers describing
cloud physics), "tuning" the models and running them again and again
until the results looked like the real climate. This was possible because the real climate was increasingly well mapped by massive field studies. The adjustments were not calculated from physical principles, nor were they pinned down precisely by the field studies; they were fiddled until the model became stable. As a check, the final models had to be able to reproduce real-world data and features that they had not been tuned to match, for example, regional monsoons. It was possible to get a crude climate representation
without the tuning, but the best simulations relied on this back-and-forth
between model and observation. If models were tuned to match current
climate, the critics asked, how reliably could they calculate a future, different climate?
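| In outline, the tuning loop the critics complained about might be sketched like this. It is a deliberate caricature with a hypothetical one-parameter "model"; real tuning juggled many parameters against many observed fields at once.

    # Caricature of model "tuning": adjust one uncertain cloud
    # parameter, rerun, keep whichever value best matches observation.
    # The model function and all numbers are hypothetical stand-ins.
    def simulate_global_mean_temp(cloud_reflectivity):
        # Stand-in for a full GCM run.
        return 30.0 - 50.0 * cloud_reflectivity

    OBSERVED = 14.0   # degrees C, roughly the observed global mean

    best, best_err = None, float("inf")
    for reflectivity in (0.25, 0.28, 0.30, 0.32, 0.35):
        err = abs(simulate_global_mean_temp(reflectivity) - OBSERVED)
        if err < best_err:
            best, best_err = reflectivity, err
    print(f"tuned cloud reflectivity: {best}")   # picks 0.32 here

One guard against such a loop degenerating into mere curve-fitting was to test the tuned model against a radically different climate, as described next.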
| One way to check that was to see whether models could make a reasonable
facsimile of the Earth during a glacial period virtually a
different planet. If you could reproduce a glacial climate with the
same physical parameters for clouds and so forth that you used for
the current planet, that would be evidence the models were not arbitrarily
trimmed just to reproduce the present. But first you would need to
know what the conditions had actually been around the world during
an ice age. That required far more data than paleoclimatologists had
turned up. Already in 1968 a meteorologist warned that henceforth
reconstructing past climate would not be limited by theory so much
as by "the difficulty of establishing the history of paleoenvironment."
Until data and models were developed together, he said, atmospheric
scientists could only gaze upon the ice ages with "a helpless feeling."
| To meet the need, a
group of oceanographers persuaded the U.S. government to fund a large-scale
project to analyze ooze extracted from the sea bottom at numerous
locations. The results, combined with terrestrial data from fossil
pollen and other evidence, would give a world map of temperatures at the
peak of the last ice age. As soon as this CLIMAP project began publishing
its results in 1976, modelers began trying to make a representation
for comparison. The first attempts showed only a very rough agreement,
although good enough to reproduce essential features such as the important
role played by the reflection of sunlight from ice.(70*)
| At first the modelers simply worked to reproduce
the ice age climate over land by using the CLIMAP figures for sea
surface temperatures. But when they tried to push on and use models
to calculate the sea surface temperatures, they ran into trouble.
The CLIMAP team had reported that in the middle of the last ice age,
tropical seas had been only slightly cooler than at present, a difference
of barely 1°C. That raised doubts about whether the climate was
as sensitive to external forces (like greenhouse gases) as the modelers
thought. Moreover, while the tropical seas had stayed warm during
the last ice age, the air at high elevations had certainly been far
colder. That was evident in lower altitudes of former snowlines detected
by geologists on the mountains of New Guinea and Hawaii. No matter
how much the GCMs were fiddled, they could not be persuaded to show
such a large difference of temperature with altitude. A few modelers
contended that the tropical sea temperatures must have varied more
than CLIMAP said. But they were up against an old and strongly held
scientific conviction that the lush equatorial jungles had changed
little over millions of years, testifying to a stable climate. (This
was an echo of traditional ideas that the entire planet's climate
was fundamentally stable, with ice ages no more than regional perturbations
at high latitudes and elevations.)(71*)
| On the other hand, by 1988 modelers
had passed a less severe test. Some 8,000 years ago the world had
gone through a warm period, presumably like the climate that
the greenhouse effect was pushing us toward. One modeling group managed
to compute a fairly good reproduction of the temperature, winds, and
moisture in that period. (The comparison of model results with the
past was only possible, of course, thanks to many geologists who worked
with the modelers to assemble and interpret data on ancient climates.)(72)
| Meanwhile all the main models had been developed to a point where
they could reliably reproduce the enormously different climates of
summer and winter. That was a main reason why a review panel of experts
concluded in 1985 that "theoretical understanding provides a firm
basis" for predictions of several degrees of warming in the next century.(73) So why did the models fail
to match the relatively mild sea-surface temperatures along with cold
mountains reported for the tropics in the previous ice age? Experts
could only say that the discrepancies "constitute an enigma."(74)
| A more obvious and annoying problem was the way models failed to
tell how global warming would affect a particular region. Policy-makers
and the public were less interested in the planet as a whole than
in how much warmer their own particular locality would get, and whether
to expect wetter or dryer conditions. Already in 1979, the Charney
panel's report had singled out the absence of local climate predictions
as a weakness. At that time the modelers who attacked climate change
had only tried to make predictions averaged over entire zones of latitude.
They might calculate a geographically realistic model through a seasonal
cycle, but nobody had the computer power to drive one through centuries.
In the mid 1970s, when Manabe and Wetherald had introduced a highly
simplified geography that divided the globe into land and ocean segments
without mountains, they had found, not surprisingly, that the model
climate's response to a raised CO2 level was
"far from uniform geographically."(75)
| During the 1980s, modelers got enough computer power to introduce
much more realistic geography into their climate change calculations.
They began to grind out maps in which our planet's continents could
be recognized, showing climate region by region in a world with doubled
CO2. However, for many important regions the
maps printed out by different groups turned out to be incompatible.
Where one model predicted more rainfall in the greenhouse future,
another might predict less. That was hardly surprising, for a region's
climate depended on particulars like the runoff of water from its
type of soil, or the way a forest grew darker as snow melted. Modelers
were far from pinning down such details precisely. A simulation of
the present climate was considered excellent if its average temperature
for a given region was off by only a few degrees and its rainfall
was not too high or too low by more than 50% or so. On the positive
side, the GCMs mostly did agree fairly well on global average predictions.
But the large differences in regional predictions emboldened skeptics
who cast doubt on the models' fundamental validity.(76)
| A variety of other criticisms were voiced.
The most prominent came from Sherwood Idso. In 1986 he calculated
that for the known increase of CO2 since the
start of the century, models should predict something like 3°C
of warming, which was far more than what had been observed. Idso insisted
that something must be badly wrong with the models' sensitivity, that
is, their response to changes in conditions.(77) Other scientists gave little heed to the claim. It was
only an extension of a long and sometimes bitter controversy in which
they had debated Idso's arguments and rejected them as too
simplified to be meaningful.
| Setting Idso's criticisms aside, there undeniably remained points
where the models stood on shaky foundations. Researchers who studied
the transfer of radiation through the atmosphere and other
physical features warned that more work was needed before the fundamental
physics of GCMs would be entirely sound. For some features, no calculation
could be trusted until more observations were made. And even when
the physics was well understood, it was no simple task to represent
it properly in the computations. "The challenges to be overcome through
the use of mathematical models are daunting," a modeler remarked,
"requiring the efforts of dedicated teams working a decade or more
on individual aspects of the climate system."(78) As Manabe regretfully explained, so
much physics was involved in every raindrop that it would never be
possible to compute absolutely everything. "And even if you have a
perfect model which mimics the climate system, you don't know it,
and you have no way of proving it."(79)
| Indeed philosophers of science explained to anyone who would listen
that a computer model, like any other embodiment of a set of scientific
hypotheses, could never be "proved" in the absolute sense one could
prove a mathematical theorem. What models could do was help people
sort through countless ideas and possibilities, offering evidence
on which were most plausible. Eventually the models, along with other
evidence and other lines of reasoning, might converge on a representation
of climate that, though necessarily imperfect like all human knowledge,
could be highly reliable.(80)
|Through the 1980s and beyond, however, different models persisted
in coming up with noticeably different numbers for climate in one
region or another. Worse, some groups suspected that even apparently
correct results were sometimes generated for the wrong reasons. Above
all, their modeling of cloud formation was still
scarcely justified by the little that was known about cloud physics.
By now modelers were attempting to incorporate the different properties of different types of clouds at different heights. But even the actual cloudiness of various regions of the world had been
measured in only a sketchy fashion. Until satellite measurements
became available later in the 1980s, most models used data from the
1950s that only gave averages by zones of latitude, and only for the
Northern Hemisphere. Modelers mirrored the set to represent clouds
in the Southern Hemisphere, with the seasons reversed, although
of course the distribution of land, sea, and ice is very different
in the two halves of the planet. Many modelers
felt a need to step back from the global calculations. Reliable progress
would require more work on fundamental elements, to improve the sub-models
that represented not only clouds but also snow, vegetation, and so forth.(81)
Modelers settled into a long grind of piecemeal improvements.
| Success (1988-2001)
| "There has been little change over the last 20 years or so in the
approaches of the various modeling groups," an observer remarked in
1989. He thought this was partly due to a tendency "to fixate on specific
aspects of the total problem," and partly to limited resources. "The
modeling groups that are looking at the climate change process," he
noted, "are relatively small in size compared to the large task."(82) The limitations not only in resources
but in computer power, global data, and plain scientific understanding
kept the groups far from their goal of precisely reproducing all the
features of climate. Yet under any circumstances it would be impossible
to compute the current climate perfectly, given the amount of sheer
randomness in weather systems. Modelers nevertheless felt they now
had a basic grasp of the main forces and variations in the atmosphere.
Their interest was shifting from representing the current climate
ever more precisely to studies of long-term climate change.
| The research front accordingly moved from
atmospheric models to coupled ocean-atmosphere models, and from calculating
stable systems to representing the "transient response" to changes
in conditions. Running models under different conditions, sometimes
through simulated centuries, the teams drew with rising confidence
rough sketches of how climate could be altered by various influences,
especially by changes in greenhouse gases. Many were now
reasonably sure that they knew enough to issue clear warnings of future
global warming to the world's governments.(83)
| As GCMs incorporated ever more complexities,
modelers needed to work ever more closely with one another and with
people in outside specialties. Communities of collaboration among
experts had been rapidly expanding throughout geophysics and the other
sciences, but perhaps nowhere so obviously as in climate modeling.
The clearest case centered around NCAR. It lived up to its name of
a "National Center" (in fact an international center) by developing
what was explicitly a "Community Climate Model." The first version
used pieces drawn from the work of an Australian group, the European
Centre for Medium-Range Weather Forecasts, and several others. In
1983 NCAR published all its computer source codes along with a "Users'
Guide" so that outside groups could run the model on their own machines.
The various outside experiments and modifications in return informed
the NCAR group. Subsequent versions of the Community Climate Model,
published in 1987, 1992, and so on, incorporated many basic changes
and additional features — for example, the Manabe group's scheme
for handling the way rainfall was absorbed, evaporated, or ran off
in rivers. The version released in 2004 was called the Third Community
Climate System Model, CCSM3, reflecting the ever increasing
complexity. NCAR had an exceptionally strong institutional commitment
to building a model that could be run on a variety of computer platforms,
but in other ways their work was not unusual. By now most models used
contributions from so many different sources that they were all in
a sense "community" models.(84)
|The effort was no longer dominated by American
groups. At the Hadley Centre for Climate Prediction and Research
in the United Kingdom and the Max Planck Institute for Meteorology
in Germany, in particular, groups were starting to produce pathbreaking
model runs. By the mid 1990s, some modelers in the United States
feared they were falling behind. One reason was that the U.S. government
forbade them from buying foreign supercomputers, a technology where
Japan had seized the lead. National rivalries are normal where groups
compete to be first with the best results, but competition did not
obstruct the collaborative flow of ideas.
|An important example of massive collaboration
was a 1989 study involving groups in the United States, Canada, England,
France, Germany, China, and Japan. Taking 14 models of varying complexity,
the groups fed each the same external forces (using a change in sea
surface temperature as a surrogate for climate change), and compared
the results. The simulated climates agreed well for clear skies. But
"when cloud feedback was included, compatibility vanished." The models
varied by as much as a factor of three in their sensitivity to the
external forces, disagreeing in particular on how far a given increase
of CO2 would raise the temperature.(85) A few respected meteorologists concluded that the modelers'
representation of clouds was altogether useless.
| Three years later, another
comparison of GCMs constructed by groups in eight different nations
found that in some respects they all erred in the same direction.
Most noticeably, they all got the present tropics a bit too cold.
It seemed that "all models suffer from a common deficiency in some
aspect of their formulation," some hidden failure to understand or
perhaps even to include some mechanisms.(86) On top of this came evidence that the
world's clouds would probably change as human activity added dust,
chemical haze, and other aerosols to the atmosphere. "From a climate
modeling perspective these results are discouraging," one expert remarked.
Up to this point clouds had been treated simply in terms of moisture,
and now aerosols were adding "an additional degree of complication."(87)
| Most experts nevertheless
felt the GCMs were on the right track. In the multi-model comparisons,
all the results were at least in rough overall agreement with reality.
A test that compared four of the best GCMs found them all pretty close
to the observed temperatures and precipitations for much of the Earth's
land surface.(88) Such studies
were helped greatly by a new capability to set their results against
a uniform body of world-wide data. Specially designed satellite instruments
were at last monitoring incoming and outgoing radiation, cloud cover,
and other essential parameters. It was now evident, in particular,
where clouds brought warming and where they made for cooling. Overall,
it turned out that clouds tended to cool the planet, strongly
enough that small changes in cloudiness would have a serious feedback effect.
|No less important, the sketchy parameterizations in the models were increasingly refined by field studies. Decade by decade the science community mounted ever larger fleets of ships, aircraft, balloons, drifting buoys and satellites in massive experiments to observe the actual processes in clouds, ocean circulation, and other key features of the climate system. (See the separate essay on International Cooperation.) Processing and regularizing the measurements from such an exercise was in itself a major task for computer centers: it was little use having gigabytes of observational data unless that could be properly compared with the gigabytes of numbers produced by a computer model.
| There was also progress in building aerosols
into climate models. When Mount Pinatubo erupted in the Philippines
in June 1991, sharply increasing the amount of sulfuric acid haze
in the stratosphere world-wide, Hansen's group declared that "this
volcano will provide an acid test for global climate models." Running
their model with the new data, they predicted a noticeable cooling
for the next couple of years.(90) By 1995 their predictions for different levels of the atmosphere
were seen to be on the mark. "The correlations between the predictions
and the independent analyses [of temperatures]," a reviewer observed,
"are highly significant and very striking." The ability of modelers to not only reproduce but predict Pinatubo's effects gave scientists a particularly strong reason for believing that the GCMs had some kind of reliable connection with reality, the actual planet.(91)
|Building aerosols into GCMs also improved the agreement with observations, helping to answer
a major criticism. Typical GCMs had a climate sensitivity that predicted
about 3°C warming for a doubling of CO2. However, as Idso and others pointed out, the actual rise in
temperature over the century had not kept pace with the rise of the
gas. Try as they might, the modelers had not been able to tune their models to get the modest temperature rise that was observed. An answer came from models that put in the increase of aerosols from humanity's rising pollution.
The aerosols' cooling effect, it became clear, had tended to offset
the greenhouse warming. This reversed the significance of the models' earlier inability to reproduce the temperature trend. Apparently the models that had been tuned without aerosols had correctly represented a planet without aerosols; they had been grounded solidly enough in reality to resist attempts to force them to give a false answer.
|By now computer
power was so great that leading modeling groups could confidently go beyond
static pictures and explore changes through time. Besides taking into
account the rise of greenhouse gases and pollution, the modelers had
new data and theories arguing that it was not fudging to put in solar
variations. In particular, a dip in solar activity seemed to have
played a role, along with pollution and some volcanic eruptions, in
the dip seen in Northern Hemisphere temperatures from the 1940s through
the 1960s. In 1995, models at three centers (the Lawrence Livermore
National Laboratory in California, the Hadley Centre, and the Max
Planck Institute) all reproduced fairly well the overall trend of
20th-century temperature changes and even the observed geographical
patterns. The correspondence with data was especially close where
the model simulations reached the most recent decades, when the rising
level of greenhouse gases began to predominate over other forces.
However, as the modelers pressed toward greater precision, their progress faltered. No matter
how they tried to tweak their models, the computers could not be forced
to show the full extent of the Northern Hemisphere cooling recorded
in the 1940s and 1950s. Finally in 2007 a careful analysis revealed
that the global data had been distorted by a change in the way ocean
temperatures were measured after the Second World War ended. The models
had been better than the observations.(92*)
| This GCM work powerfully influenced the Intergovernmental
Panel on Climate Change, appointed by the world's governments. The IPCC's 2001 report in particular was swayed by charts showing the pattern of geographical and vertical distribution of atmospheric heating that the models computed for greenhouse warming. The pattern of change was different from the patterns that other influences alone (for example, changes in the Sun) would produce. The computed greenhouse effect's
"signature," and no other pattern, roughly matched the actual observational record of recent decades. That backed up the panel's official conclusion: a human influence on climate had
probably been detected.(93*)
| Scientists are always happier if they can reproduce an answer using independent
methods. This had always been a problem with climate models, with
their tendency to interbreed computer code and to rely on similar
data sets. One solution to the problem was to cut down to the central
question — how much would temperature change if you changed
the CO2 level? — and look for a completely different way to get
an answer. The answer could be boiled down to a simple number, the
climate's "sensitivity," which by now was conventionally taken to mean the temperature change for a doubling of CO2.
A new way to find this number, entirely separate from GCMs, was becoming
available from ice core measurements, which recorded large
swings of both temperature and CO2 levels through previous ice ages. A big step forward came in
1992 when two scientists reconstructed climate data
not only for the Last Glacial Maximum, with its lower temperature
and CO2 levels, but also for the mid-Cretaceous Maximum (an era when,
according to ingenious analysis of fossil leaves, shells, and other
evidence, CO2 levels had been much higher than at present and dinosaurs had
basked in unusual warmth). The climate sensitivity they found for
both cases, roughly two degrees of warming for doubled CO2, was comfortably within the range offered by computer modelers.(93a)
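| The arithmetic behind such estimates is simple, even if getting trustworthy paleodata was not. As a rough illustration, with invented round numbers, and ignoring the real-world need to separate CO2's share of the change from ice sheets, dust, and other influences:

    # Illustrative only: climate sensitivity as reconstructed temperature
    # change divided by the number of CO2 doublings it accompanied.
    # The inputs are hypothetical round values, not any study's data.
    import math

    co2_then, co2_now = 190.0, 280.0   # ppm, glacial vs. preindustrial
    dT_from_co2 = -1.7                 # degrees C attributed to CO2 alone

    doublings = math.log2(co2_then / co2_now)    # about -0.56
    sensitivity = dT_from_co2 / doublings        # degrees C per doubling
    print(f"implied sensitivity: {sensitivity:.1f} C per doubling")  # ~3.0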
|Confidence rose further in the late 1990s when the modelers' failure to match
the CLIMAP data on ice-age temperatures was resolved. An early sign of where the trouble lay came from a group that laboriously sifted coral-reef samples and announced in 1994 that the tropical sea-surface temperatures had been much cooler than CLIMAP had claimed. They noted that their finding "bears directly on modeling future climate." But one finding in isolation could not shake the CLIMAP consensus. The breakthrough
came when a team under Lonnie Thompson of the Polar Research Center
at Ohio State University struggled onto a high-altitude glacier in
the tropical Andes. The team managed to drill out a core that recorded
atmospheric conditions back into the last ice age. The results, they
announced, "challenge the current view of tropical climate history..." It was not the computer models that had been unreliable,
but the oceanographers' complex manipulation of their data as they
sought numbers for tropical sea-surface temperatures.
|More coral measurements and
other new types of climate measures agreed that tropical ice age waters
had turned significantly colder, by perhaps 3°C or more. That
was roughly what the GCMs had calculated ten years earlier. The fact that nobody had
been able to adjust a model to make it match the CLIMAP team’s
numbers now took on a very different significance — evidently
the computer models rendered actual climate processes so faithfully
that they could not be forced to lie.(94)
Debate continued, as
some defended the original CLIMAP estimates with other types of
data. Moreover, the primitive ice-age GCMs required special adjustments
and were not fully comparable with the ocean-coupled simulations
of the present climate. But there was no longer a flat contradiction
with the modelers, who could now feel more secure in the way their
models responded to things like the reflection of sunlight from
ice and snow. The discovery that the tropical oceans had felt the
most recent ice age put the last nail in the coffin of the traditional
view of a planet where some regions, at least, maintained a stable
climate. (Computer studies in the following decade increasingly
turned out good matches to climates for all periods from the Last
Glacial Maximum, through the warm “mid-Holocene” period
8,000 years ago, to the present.)(95*)
|Another persistent problem was the instability of models that coupled atmospheric circulation to a full-scale ocean, the type of model that now dominated computer work. The coupled models all tended to drift over time into unrealistic patterns. In particular,
models seemed flatly unable to keep the thermohaline circulation going.
The only solution was to tune the models to match real-world conditions
by adjusting various parameters. The simplest method, used for instance
by Suki Manabe in his influential global warming computations, was
to fiddle with the flux of heat at the interface between ocean and
atmosphere. As the model began to drift away from reality, it was
telling him (as he explained), "Oh, Suki, I need this much heat here."
And he would put heat into the ocean or take it away as needed to
keep the results stable. Modelers would likewise force transfers of
water and so forth, formally violating basic laws of physics to compensate
for their models' deficiencies.(95a)
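| In schematic form, a flux adjustment is just a fixed correction field added at the air-sea interface. Here is a minimal sketch with invented numbers and function names, not anything from Manabe's actual code:

    # Sketch of a "flux adjustment": a correction diagnosed from a
    # control run is added to the heat flux at every coupling step.
    # All names and values are hypothetical stand-ins.
    def diagnose_adjustment(modeled_flux, flux_for_stability):
        # Found once, by nudging a spin-up run toward observed climate.
        return flux_for_stability - modeled_flux

    def daily_step(ocean_temp, modeled_flux, adjustment,
                   heat_capacity=4.0e8):   # J per m^2 per K, assumed
        corrected = modeled_flux + adjustment          # W per m^2
        return ocean_temp + corrected * 86400.0 / heat_capacity

    # The model's air-sea flux is biased (-12 W/m^2 instead of the 0
    # needed to hold the observed state), so the adjustment is +12.
    adj = diagnose_adjustment(modeled_flux=-12.0, flux_for_stability=0.0)
    temp = 15.0
    for _ in range(365):
        temp = daily_step(temp, modeled_flux=-12.0, adjustment=adj)
    print(f"sea surface after one simulated year: {temp:.2f} C")  # stays 15

The same fixed correction was imposed whatever the CO2 level, which is why its users argued it cancelled out of the difference between control and doubled-CO2 runs.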
|The workers who used this technique argued that it was fair play
for finding the effects of greenhouse gases, so long as they imposed
the same numbers when they ran their model with higher greenhouse
gas levels. Some of them added that the procedure made it easier to
present the problem of greenhouse warming convincingly to people outside
the modeling community, for they could show "before and after"
pictures in which the "before" map looked plausibly like
the real climate of the present. But the little community of modelers
was divided, with some roundly criticizing flux adjustments as "fudge
factors" that could bring whatever results a modeler sought.
They felt it was premature to produce detailed calculations until
fundamental research had ironed out puzzles such as cloud formation. A few scientists who
were entirely skeptical about global warming brought the criticism
into public view, arguing that GCMs were so faulty that there was
no reason to contemplate any policy to restrict greenhouse gases. If the
models were arbitrarily tuned to match the present climate, why believe
they could tell us anything at all about a different situation? The
argument was too technical, however, to attract much public attention.
Most modelers, reluctant to give ammunition to critics of their enterprise,
preferred to carry on the debate privately with their colleagues.(96)
|Around 1998, different
groups published crudely consistent simulations of the ice age climate
based on the full armament of coupled ocean-atmosphere models. This
was plainly a landmark, showing that the models were not so elaborately
adjusted that they could work only for a climate resembling the present
one. The work called for a variety of ingenious methods, along with
brute force one group ran its model on a supercomputer for
more than a year.(96a*) Better still, by 1999
a couple of computer groups simulating the present climate managed
to do away altogether with flux adjustments while running their models
through centuries. Their results had reasonable seasonal cycles and
so forth, not severely different from the results of the earlier flux-adjusted
models. Evidently the tuning had not been a fatal cheat. Models without
flux adjustments soon became standard.(97*)
|Another positive note was the plausible representation
of middle-scale phenomena such as the El Niño-Southern Oscillation
(ENSO). This irregular cycle of wind patterns and water movement in
the tropical Pacific Ocean became a target for modelers once it was
found to affect weather powerfully around the globe. Such mid-sized
models, constructed by groups nearly independent of the GCM researchers,
offered an opportunity to work out and test solutions to tricky problems
like the interaction between winds and waves. By the late 1990s, specially
designed regional models showed some success in reproducing the structure
of El Niños (although predicting them remained as uncertain
as predicting any specific weather pattern months in advance). As
global ocean-atmosphere models improved, they began to spontaneously
generate their own El Niño-like cycles.
| Meanwhile other groups confronted the problem
of the North Atlantic thermohaline circulation, spurred by evidence
from ice and ocean-bed cores of drastic shifts during glacial periods.
By the turn of the century modelers had produced convincing simulations
of these past changes.(98)
Manabe's group looked to see if something like that could happen in
the future. Their preliminary work in the 1980s had aimed at steady-state
models, which were a necessary first step, but unable by their very
nature to see changes in the oceans. Now the group had enough computer
power to follow the system as it evolved, plugging in a steady increase
of atmospheric CO2 level. They found no sudden, catastrophic shifts. Still, sometime
in the next few centuries, global warming might seriously weaken the thermohaline circulation.
| Progress in handling the oceans underpinned
striking successes in simulating a wide variety of changes. Modelers
had now pretty well reproduced not only simple geographical and seasonal
averages from July to December and back, but also the spectrum of
random regional and annual fluctuations in the averages; indeed,
it was now a test of a good model that a series of runs showed a variability
similar to the real weather. Modelers had followed the climate through
time, matching the 20th-century temperature record. Exploring unusual
conditions, modelers had reproduced the effects of a major volcanic
eruption, and even the ice ages. All this raised confidence that climate
models could not be too far wrong in their disturbing predictions
of future transformations. Plugging in a standard 1% per year rise
in greenhouse gases and calculating through the next century, an ever
larger number of modeling groups with ever more sophisticated models
all found a significant temperature rise.(100)
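| That standard 1% per year rise compounds to a doubling of greenhouse gases in about seventy simulated years, as a one-line computation shows:

    # Compound growth at 1% per year: doubling time = ln 2 / ln 1.01.
    import math
    print(math.log(2) / math.log(1.01))   # about 69.7 years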
| Yet the models were far from proven beyond
question. The most noticeable defect was that when it came to representing
the present climate, models that coupled atmosphere to oceans were
notably inferior to plain atmosphere-only GCMs. That was no wonder,
since arbitrary assumptions remained. For example, oceanographers
had not solved the mystery of how heat is transported up or down from
layer to layer of sea water. The modelers relied on primitive average
parameterizations, which new observations cast into doubt. Yet the deficiencies
were not severe enough to prevent several groups from reproducing
all the chief features of the atmosphere-ocean interaction. In particular,
in 2001 two groups using coupled models matched the rise of temperature
that had been detected in the upper layers of the world's oceans.
They got a good match only by putting in the rise of greenhouse
gases. By 2005, computer modelers had advanced far enough to declare
that temperature measurements over the previous four decades gave
a detailed, unequivocal "signature" of the greenhouse effect. The
pattern of warming in different ocean basins neatly matched what
models predicted would arise, after some delay, from the solar energy
trapped by humanity's emissions into the atmosphere. Nothing else
could produce such a warming pattern, not the observed changes in
the Sun's radiation, emissions from volcanoes, or any other proposed mechanism.
| Earth System Models
|Yet if modelers now understood
how the climate system could change, and even how it had
changed, they were far from saying precisely how it would
change in the future. Never mind the average global warming; citizens and policy-makers wanted to know what heat waves, droughts or floods were likely in their particular region. Only models that incorporated a much more realistic ocean and clouds would be able to calculate that. The attention of the community turned to making predictions in ever more detail.
|For example, a scheme for representing clouds developed in the 2000s at the Max Planck Institute for Meteorology used 79 equations to describe the formation of stratiform clouds (cumulus clouds required a different scheme). The equations incorporated a variety of constants; some were known precisely from experiments or observations, but others had to be adjusted until they gave realistic results. To further adjust parameters, the modelers relied on specialized computer simulations that resolved the details of clouds in a small area. All that computation for each grid cell was a challenge even for supercomputers.(101a)
| Looking farther afield,
the future climate system could not be determined very accurately
until ocean-atmosphere GCMs were linked interactively with models
for changes in vegetation. Dark forests and bright deserts not only
responded to climate, but influenced it. Since the early 1990s the
more advanced numerical models, for weather prediction as well as
climate, had incorporated descriptions of such things as the way plants
took up water through their roots and evaporated it into the atmosphere; models for climate change also had to figure in competition between plant species as the temperature rose. As usual, comparison with global data posed a problem: while the models disagreed with one another in simulating what type of vegetation should dominate in certain regions, surveys of the actual planet disagreed with one another just as much.(102) Changes in the chemistry of the atmosphere also had to be incorporated,
for these influenced cloud formation and more. All these complex interactions
were tough to model. Over longer
time scales, modelers would also need to consider changes in ocean
chemistry, ice sheets, entire ecosystems, and so forth.
|When people talked
now of a "GCM" they no longer meant a "General Circulation Model,"
built from the traditional equations for weather. "GCM"
now stood for "Global Climate Model" or even "Global Coupled Model,"
incorporating many things besides the circulation of the atmosphere.
Increasingly, people talked about building "Earth System Models," in which air, water and ice were tied to many features of chemistry, biology and ecosystems — sometimes including that outstanding ecological factor, human activity (for example in agriculture). Such simulations strained the resources of the newest and biggest supercomputers, some of which were built with climate modeling primarily in mind. Where early models had used a few thousand lines of code, an advanced simulation of the 2000s might incorporate more than a million lines.
| For projecting the future climate, experts still had plenty of work to do. The range of modelers'
predictions of global warming for a doubling of CO2 remained broad, anywhere between roughly 1.5 and 4.5°C.
The ineradicable uncertainty was still caused largely by ignorance of what
would happen to clouds as the world warmed. Much was still unknown
about how aerosols helped to form clouds, what kinds of clouds would
form, and how the various kinds of clouds would interact with radiation.
That problem came to the fore in 1995, when a controversy was triggered
by studies suggesting that clouds absorbed much more radiation than
modelers had thought. Through the preceding decade, modelers had adjusted
their calculations to remove certain anomalies in the data, on the
assumption that the data were unreliable. Now careful measurement
programs indicated that the anomalies could not be dismissed so easily.
As one participant in the controversy warned, "both theory and observation
of the absorption of solar radiation in clouds are still fraught with uncertainty."
|As the 21st century
began, experts continued to think of new subtleties in the physics
of clouds that might significantly affect the models' predictions.
For example, the most respected critic of global warming models,
Richard Lindzen, started a long debate by speculating that as the
oceans warmed, tropical clouds would become more numerous. They
would reflect more sunlight, he said, making for a self-stabilizing
system.(103) And in fact the models and observations were still so imprecise that experts could not say whether changes in cloudiness with warming would tend to hold back further global warming, or hasten it by trapping radiation rising from below, or have little effect one way or the other. Despite these uncertainties, the effects of clouds did seem to be pinned down well enough to show that they would not prevent global warming. Indeed climate experts (aside from Lindzen and a bare handful of other experts) were now nearly certain that serious global warming was visibly underway. Still, difficulties with calculating clouds remained the main reason that different GCMs gave different predictions for the warming in the 21st century, ranging from only a degree or two Celsius to half a dozen degrees.(104)
|It was also disturbing that model calculations did not match observations of the temperature structure of the atmosphere. In 1990 Roy Spencer and John R. Christy of the University of Alabama, Huntsville, published a paper that eventually resulted in hundreds of publications by many groups. Although warming might be observed at the Earth's surface, they pointed out that satellite measurements showed essentially no warming in recent decades at middle levels of the atmosphere — the upper troposphere. More direct measurements by balloon-borne radiosondes likewise showed no warming there. However, a greenhouse-warming "tropospheric hot spot," especially in the tropics, had been predicted by all models clear back to the 1975 work of Manabe and Wetherald.(104a) Indeed not only greenhouse warming, but anything that produced surface warming in the tropics should also warm the atmosphere above it, through convection. People who insisted that global warming was a myth seized on this discrepancy. They said it proved that people should disbelieve the computer models and indeed all expert opinion on global warming. But was it the models that were wrong, or the data?
|The satellites, balloons, and radiosondes that measured upper atmosphere temperatures had been designed to produce data for daily weather prediction, not gradual long-term climate changes. Over the decades there had been many changes in practices and instrumentation. A few meteorologists buckled down to more rigorous inspection of the data, and gradually concluded that the numbers were not trustworthy enough to disprove the models. The orbits of the satellites, for example, had shifted gradually over time, introducing spurious trends. As more groups weighed in, the 1990s were full of controversy and confusion. Some groups manufactured adjustments to the data that did show upper-troposphere warming; Spencer and Christy adjusted their own data and stoutly maintained their distrust in any form of global warming. The problem was resolved in 2004-2005, when different groups described errors in the analysis of observations. For example, the observers had not taken proper account of how instruments in the balloons heated up when struck by sunlight. The mid-level atmosphere had indeed been warming up; even Spencer and Christy conceded that they had been in error.(105)
|It was one more case, like the CLIMAP controversy, where computer modelers had been unable to tweak their models until they matched data, not because the models were bad but because the observations were wrong. To be precise, the raw data were fine, but numbers are meaningless until they are processed; it was the complex analysis of the data that had gone astray.(105a*) (In the public sphere, even a decade later Christy and others would continue to rely on the slippery satellite data to deny that the world was warming. Once an idea gets on the internet it can never be removed from circulation.)
| More important, the high stratosphere was undoubtedly getting cooler. This was what modelers had predicted ever since Manabe and Wetherald's pioneering 1967 paper, as a result of the increase of greenhouse gases blocking radiation from below. A stratospheric cooling would not arise from other forces that would warm the surface. Increased solar radiation, for example, should produce warming everywhere. The stratospheric cooling was one component of the greenhouse effect "signature" that impressed the IPCC in 2001 and thereafter.
|The skeptics were not satisfied, for some discrepancies remained. In particular, the modelers
still could not reproduce some observations of temperature trends in the
upper troposphere in the tropics. Exhaustive reviews concluded that
there was room for the discrepancies to eventually be resolved, as
so often before. It might be the models that would be adjusted. More likely the observations, still full of uncertainties and spanning only a couple of decades, would
again turn out to be less reliable than the models. And so it proved. In 2008 a group reported, "there is no longer a serious discrepancy between modeled and observed trends."(105b)
|Critics kept focusing on such minor discrepancies and pointing them
out as publicly as possible. Usually this was an exercise in "cherry-picking,"
pouncing on the few items among many hundreds that supported a preconceived
viewpoint. Yet modelers readily admitted that many uncertain assumptions
lurked in their equations. And nobody denied the uncertainties in
the basic physical data that the models relied on, plus further uncertainties
in the way the data were manipulated to fit things together.
|Modelers were particularly worried by a persistent failure to work
up a reasonable simulation of the climate of the mid-Pliocene epoch,
a few million years ago, when global temperatures had reached levels
as high as those predicted for the end of the 21st century. Paleontologists
claimed that the Pliocene had seen only a modest difference in temperature
between the poles (much hotter than now) and the equator (not much
hotter). The modelers could not figure out how the oceans or atmosphere
could have moved so much heat from the tropics to the poles. The same
problem showed up in the Cretaceous epoch — a super-greenhouse
period a hundred million years ago when the Earth had a CO2
level several times higher than the present. Paleontologists reported
dinosaurs flourishing in Siberia, basking in warmth not much cooler
than the tropics. No model was able to reproduce that. The problem showed up again in attempts to model the Paleocene-Eocene Thermal Maximum 55 million years ago, when temperatures had suddenly soared globally... and by 10°C more near the North Pole than modelers could account for.
|If our greenhouse
emissions heated Earth that far, there would apparently be conditions
(super hurricanes? radical changes in cloudiness?) stranger than anything
the models were designed to calculate. Modelers worried, as one of
them remarked, that "the field is missing fundamental feedback
processes that amplify warming." These uncertainties
persisted through the first decades of the 21st century.(106*)
|For a climate pretty much like the present, however, all the significant
mechanisms must have gotten incorporated somehow into the parameters.
For the models did produce reasonable weather patterns for such different
conditions as summer and winter, the effects of volcanic
eruptions and so forth. At worst, the models were somehow all getting
right results for wrong reasons — flaws that would only show
up after greenhouse gases pushed the climate beyond any conditions
that the models were designed to reproduce. If there were such deep-set
flaws, that did not mean, as some critics implied, that there was
no need to worry about global warming. If the models were faulty,
the future climate changes could be worse than they predicted.
|Those who still denied there was a serious
risk of climate change could not reasonably dismiss computer modeling
in general. That would throw away much of the past few decades’
work in many fields of science and engineering, and even key business
practices. The challenge to them was to produce a simulation that
did not show global warming. Now that personal computers were far
more powerful than the most expensive computers of earlier decades,
it was possible to explore thousands of combinations of parameters.
But no matter how people fiddled with climate models, whether simple
one- or two-dimensional models or full-scale GCMs, the answer was
the same. If your model could simulate something at all resembling the present
climate, and then you added some greenhouse gases, the model would show significant global warming.(107)
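|The robustness is easy to illustrate with the simplest kind of model. The sketch below (my own illustration in Python, with assumed values for planetary albedo and effective emissivity, not anyone's published model) balances absorbed sunlight against emitted infrared; however the parameters are varied, adding a greenhouse forcing of a few watts per square meter always raises the equilibrium temperature.

```python
# A zero-dimensional energy-balance sketch (illustrative only, not any
# group's GCM). Equilibrium: absorbed sunlight = emitted infrared,
#   S0*(1 - albedo)/4 + forcing = emissivity * SIGMA * T**4.
# The albedo and emissivity values swept below are assumed, plausible numbers.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def equilibrium_temp(albedo, emissivity, forcing=0.0):
    absorbed = S0 * (1.0 - albedo) / 4.0 + forcing
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# Sweep the parameters; every combination warms when ~3.7 W/m^2
# (the canonical forcing for doubled CO2) is added.
for albedo in (0.28, 0.30, 0.32):
    for emissivity in (0.60, 0.62, 0.64):
        base = equilibrium_temp(albedo, emissivity)
        doubled = equilibrium_temp(albedo, emissivity, forcing=3.7)
        print(f"albedo={albedo:.2f} emissivity={emissivity:.2f} "
              f"T={base:.1f} K  warming={doubled - base:.2f} K")
```

Every combination warms, though only by about a degree, for a bare radiation balance omits the amplifying feedbacks (water vapor, ice, clouds) that give the full models their larger and more uncertain numbers.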
|Your personal computer
can run a climate model in its idle minutes. To join this important
experiment, visit climateprediction.net
|The modelers had reached
a point where they could confidently declare what was reasonably
likely to happen. They did not claim they would ever be able
to say what would certainly happen. Different model runs
continued to offer a range of possible future temperatures, from mildly bad to disastrous.
Worse, the various GCMs stubbornly continued to give a wide range
of predictions for particular regions. Some things looked quite certain,
like higher temperatures in the Arctic (hardly a prediction now, for
such warming was becoming blatantly visible in the weather data).
Most models projected crippling heat and dryness in the American Southwest and Southern Europe. But for many of the Earth's populated places, the models could not
reliably tell the local governments whether to brace themselves for
more droughts, more floods, or neither or both.
|By the dawn of the 21st century, climate models had become a crucial source
of information for policy-makers and the public. Where once the modelers
had expected only to give talks at small meetings of their peers followed
by formal publication in obscure scientific journals, their attention
now focussed on working up results to be incorporated in the reports
that the IPCC issued to the world's governments. Struggling to provide
a better picture of the coming climate changes, the community of modelers
grew larger and better organized. Projects to compare the models devised
by different groups became a major ongoing activity. This cooperative
framework forced the groups to agree on schemes for representing features
of climate and formats for reporting their data.
|That was not as simple
as it might seem. Just to make sure "that the words used by each
group and for each model have the same meaning," a French team
leader remarked, "requires a great number of meetings."
But once all the numbers were given a well-defined meaning, the computer
outputs could serve as raw material for groups that had nothing to
do with the originators. That opened new paths for criticism and experimentation.
A joint archive was established, which by 2007 contained more than
30 terabytes of data utilized by more than 1000 scientists. Groups
were exchanging so much data that it would have taken years to transfer
it on the internet, and they took to shipping it on terabyte hard drives.(108)
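|The arithmetic behind the hard drives is simple enough to sketch (the sustained transfer rates below are illustrative assumptions, not figures from the project):

```python
# Rough arithmetic behind "years to transfer" (the sustained link speeds
# below are assumptions for illustration, not figures from the archive).
archive_terabytes = 30
total_bits = archive_terabytes * 1e12 * 8
for mbit_per_s in (2, 10, 100):
    days = total_bits / (mbit_per_s * 1e6) / 86400
    print(f"{mbit_per_s:>3} Mbit/s -> {days:,.0f} days")
# At a few Mbit/s sustained, moving the whole archive would tie up a
# link for years; hence the hard drives.
```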
|There were about a dozen major teams now
and a dozen more that could make significant contributions. The
decades of work by teams of specialists, backed up by immense improvements
in computers and data, had gradually built up confidence in the
prediction of global warming. It was largely thanks to their work
that, as the editor of Science magazine announced in 2001,
a "consensus as strong as the one that has developed around this
topic is rare in the history of science."(109*)
|Each computer modeling group normally worked in a cycle. When their
model began to look outdated, and still more if they managed to acquire
a new supercomputer, they would go back to basics and spend a few
years developing a new model. It was no simple process, for introducing
a new wrinkle (for example, a new way to calculate convection in the
tropics) often introduced unexpected feedbacks that caused the entire
model to crash. Once a team had persuaded their model to produce stable
results that looked like the real world, they would spend the next
year or two using it to analyze climate processes, gathering ideas
for the next cycle.
|After finishing their part of the IPCC's 2001 report, the modeling
community worked to synchronize the teams' separate cycles. By early
2004, nearly all the major models simultaneously reached the analysis
stage. That made it possible for the teams to share and compare data
in time to produce results for the next IPCC report, scheduled for
2007. In the end 17 groups contributed, using funds provided by their
own national authorities or simply putting in personal time alongside
other projects (successful scientists work far beyond a 40-hour week).(110)
|The IPCC pressed the modelers to work out a consensus on a specific
range of possibilities to be published in the 2007 report. The work
was grueling. After a group had invested so much of their time, energy,
and careers in their model, they could become reluctant to admit its
shortcomings to outsiders and perhaps even to themselves. A frequent
result was "prolonged and acrimonious fights in which model developers
defended their models and engaged in serious conflicts with colleagues"
over whose approach was best.(111)
Yet in the end they found common ground, working out a few numbers
that all agreed were plausible.
|The most likely number for climate "sensitivity" had scarcely
changed since the pioneering computer estimates of the 1970s. Doubling
the level of CO2, which was expected to come
well before the end of the 21st century, would most likely bring a
rise of about 3°C in the average global temperature (much more
in some places and seasons, much less in others). The uncertainty also remained as before: the number might be as low as two degrees, or as high as five or six. The next half-dozen years of work did little to advance this. Cloud feedbacks continued to be the largest source of uncertainty. "We're just fine-tuning things," remarked a leading modeler in 2012. "I don't think much has changed over the last decade."(112)
|The modelers' sensitivity estimate got an entirely independent confirmation from the geologists' latest estimates of how the mean global temperature had varied with the level of CO2 in past eras. Roughly speaking,
there was scarcely one chance in twenty that a doubling of CO2 would warm the planet less than 1.5°C. The upper limit was harder
to fix, since doubled CO2 would push the atmosphere
into a state not seen for tens of millions of years. The models could
not reliably calculate such a foreign condition, and the geological evidence
for temperatures and gas levels was hard to interpret. In the end
the geologists and the computer modelers independently concluded that
doubling CO2 was scarcely likely to bring a rise
greater than 6°C averaged over the entire planet. That was scant comfort: a rise of that magnitude would bring global changes unprecedented in human experience.(113)
Projected temperatures for 2080-2099 (rise above the 1980-1999 level, mean of multiple GCMs) for the "A2" scenario, in which the world continues to increase its greenhouse gas emissions without effective restrictions.
Source: IPCC report (2007b), p. 766
|What if the scientists were too optimistic about their level
of certainty? A minority of experts were beginning to worry that the
IPCC reports did not give humanity proper warning. It was all very
well to hammer out a conservative consensus on what climate changes
were most likely. But shouldn't we consider not just what was most
likely, but also the worst things that might in fact happen? What
if aerosol and cloud processes were a bit different from what the
models assumed, although still within the range of what physics allowed?
Confirming such worries, a group reported in 2008 that smoky "black
carbon" emissions had a much stronger effect than the models
had guessed, making for worse warming. And what if any of the many
amplifying feedbacks turned out to be stronger than the models estimated,
once regions warmed into a condition for which we had no data? Several
new studies pointed in that direction. The probability that the IPCC
had seriously under-estimated the danger seemed easily as great as
one in ten — far above many risks that sensible people normally
took precautions against.
|A comprehensive study that ran models with 400 different combinations
of likely parameters announced in 2009 that the IPCC had cautiously
underestimated a great deal. In the worst case — where the world
continued with "business as usual" — it was even odds
that the world would see a 5°C rise by the end of the century.
If the average global temperature did soar that high, it would launch
the planet into a state utterly unlike anything in the history of
the human race (even a 2°C rise would go above anything known
since the spread of agriculture). And still higher temperatures were possible.(114)
|For the IPCC's fifth report, issued in 2013, computer modeling teams launched an even more massive cooperative multi-year effort. The results were scarcely different from earlier attempts. "The drive to complexity has not reduced key uncertainties," two of the experts reported. "Rather than reducing biases stemming from an inadequate representation of basic processes, additional complexity has multiplied the ways in which these biases introduce uncertainties in climate simulations." The panel concluded that equilibrium sensitivity for doubled CO2 was "likely" to be in the range 1.5 to 4.5°C — exactly the same, albeit with much higher confidence and on a much sounder basis of evidence, as the conclusion reached by the Charney panel 34 years earlier.(115)
|For all the millions of hours the modelers
had devoted to their computations, in the end they could not say exactly
how serious future global warming would be. They could only say that
it was almost certain to be bad, and unless strong steps were taken soon, it might well be an appalling catastrophe.
|What do the current models predict global warming will mean for
humanity in practical terms? See the summary of expected Impacts of Climate Change.
Simple Models of Climate
Ocean Currents and Climate
Basic Radiation Calculations
Arakawa's Computation Device
Chaos in the Atmosphere
Reflections on the Scientific Process
1. The first version of this essay was partly based, by permission, on
Edwards (2000b). For a complete history of climate models and more, see
Paul Edwards, A
Vast Machine: Computer Models, Climate Data, and the Politics of Global
Warming (Cambridge, MA: MIT Press, 2010).
2. Simpson (1929), p. 74.
3. For the history of work on the general circulation, see Lorenz (1967), 59ff.
4. Nebeker (1995); for Bjerknes
and scientific meteorology, see also Friedman (1989).
5. Richardson (1922),
forecast-factory p. 219, "dream" p. ix; see Nebeker
(1995), ch. 6, esp. pp. 81-82; Lynch (2006).
6. Bolin (1952), p. 107.
7. Here and below: Aspray
(1990); Nebeker (1995), ch. 10. For a comprehensive study published after the bulk of this essay was written, see Harper (2008).
8. Charney (1949); for a comprehensive
discussion, Charney and Eliassen (1949); the
first experiment (raising up the Himalayas) in a GCM was Mintz
9. Charney (1949), pp. 371-72;
for general discussion of heuristic modeling (including Charney's filtering), see Dalmedico (2001).
10. An example of important mathematical work is Phillips (1951); for all this history, see Nebeker (1995), pp. 87, 141-51, 183; Smagorinsky (1983); also Smagorinsky
(1972); Kutzbach (1996), pp. 362-68.
11. Charney et al. (1950),
quote p. 245; Platzman (1979). See Archer and Pierrehumbert (2011), pp. 78-80.
12. Nebeker (1995).
13. Bergthorsson et al. (1955).
14. For operational forecasting, see also Cressman (1996).
15. Phillips (1956); on
dishpans, see also Norman Phillips, interview by T. Hollingsworth, W. Washington, J. Tribbia
and A. Kasahara, Oct. 1989, p. 32, copies at National Center for Atmospheric Research, Boulder,
CO, and AIP. See also quote by Phillips in Lewis (2000), p.
104, and see ibid. passim for a detailed discussion of this work; "Classic": Smagorinsky (1963), p. 100; already in 1958 Mintz called it a
"landmark," see Arakawa (2000), pp. 7-8.
16. Smagorinsky (1983).
17. Manabe et al. (1965); it
was "the first model bearing a strong resemblance to today's atmospheric models" according to
Mahlman (1998), p. 89; see also Smagorinsky (1963), quote p. 151; Smagorinsky et al. (1965). See also Manabe, interview by P.
Edwards, March 14, 1998, AIP.
18. Arakawa et al. (1994).
19. Johnson and Arakawa
(1996), pp. 3216-18.
20. The oceans were given an infinite heat capacity (fixed
temperature), while land and ice had zero capacity. Mintz
(1965) (done with Arakawa); this is reprinted in Bates et al.
(1993); see Lorenz (1967), p. 133; Arakawa (1970); Edwards (2010), p. 158.
21. Arakawa and Schubert
(1974) was a major step, and briefly reviews the history. Blinking: Edwards (2010), p. 340.
22. Norman Phillips, interview by T. Hollingsworth, W.
Washington, J. Tribbia, and A. Kasahara, Oct. 1989, p. 23, copies at National Center for
Atmospheric Research, Boulder, CO, and AIP.
23. Kasahara and Washington (1967);
Edwards (2000b).
24. National Academy of Sciences
(1966), vol. 2, pp. 65-67.
25. Lorenz (1967), pp. 26, 33,
90-91, ch. 5 passim.
26. Smagorinsky (1970), p. 33
(speaking at a 1969 conference); similarly, see Smagorinsky
(1972), p. 21; "future computer needs will be tempered by the degree to which... we can
satisfy the requirements for global data." National Academy of
Sciences (1966), vol. 2, p. 68; Wilson and Matthews
(1971), p. 112-13.
27. Lorenz (1967), quote p. 10;
"As for a satisfactory explanation of the general circulation... none exists": Rumney (1968), p. 63.
28. Lorenz (1967), p. 8, see pp.
134-35, 145, 151.
29. Kellogg and Schneider
(1974), p. 1166.
30. See the "pyramid" typology of models developed in Shine and Henderson-Sellers (1983); McGuffie and Henderson-Sellers (1997), pp. 44, 55 and passim;
Adem (1965); Green (1970).
31. A.R. Robinson (about ocean modeling) in Reid et al. (1975), p. 356.
32. "The use of mathematical computer models of the
atmosphere is indispensable in achieving a satisfactory understanding..." Matthews et al. (1971), p. 49; a followup study the next year,
gathering together the world's leading climate experts, likewise endorsed research with GCMs.
Wilson and Matthews (1971). The section on GCMs was
drafted by Manabe.
33. Nebeker (1995), p. 179.
34. Edwards (2000); Edwards (2010); Nebeker (1995), p. 176.
35. A more fundamental problem of detail was parameters for
the absorption and scattering of solar radiation by clouds, aerosol particles, etc. Lacis and Hansen (1974); Hansen et
al. (1983); Hansen, interview by Weart, Oct. 2000, AIP, and Hansen et al. (2000), pp. 128-29.
36. Edwards (2000), p. 80
gives as references (which I have not seen) the following: Silberman
(1954); Platzman (1960); Robert (1968); Orszag (1970); Bourke (1974); Eliasen et al.
37. For more see Edwards (2000); Edwards (2010).
38. Mintz (1965), p. 153.
39. He recalled that the committee meetings prompted him to
ask Manabe to add CO2 to his radiation model. Smagorinsky,
interview by Weart, March
1989, AIP. National Academy of Sciences (1966).
40. Möller (1963).
41. Manabe and Strickler
(1964). For all this see Manabe, interview by Paul Edwards, March
15, 1998, AIP.
42. E.g., Arrhenius (1896) (who
also calculated for increases by factors of 1.5, 2.5, and 3 as well as lowered levels); Plass (1956); Möller (1963).
43. Manabe, interview by P. Edwards, March 14, 1998, AIP. Manabe and Wetherald (1967).
44. Broecker, interview by Weart, Nov. 1997, AIP.
45. Manabe and Wetherald
(1975); preliminary results were reported in Wilson and
Matthews (1971). Their planet had "land" surface at high latitudes and was confined to less
than one-third of the globe.
46. Manabe and Wetherald
(1975), p. 13.
47. Hart and Victor (1993), p.
48. Nebeker (1989), p. 311.
(1959); Wark and Hilleary (1969);
Vonder Haar and Suomi (1971), p. 312, emphasis
in original; for atmospheric measurements in general see Conway (2008), chap. 2.
49. Mintz et al. (1972); as cited
by GARP (1975), p. 200; importance of the seasonal cycle to
check climate models was noted e.g. in Wilson and Matthews
(1971), p. 145; Manabe had a rough seasonal simulation by 1970 and published a full
seasonal variation in 1974. Manabe, interview by P. Edwards, March 14, 1998, AIP. Manabe et al. (1974); an example of a later test is Warren and Schneider (1979).
50. The "neglect of zenith angle dependence" had led to
overestimates of ice-albedo feedback in some models. Lian and Cess
(1977), p. 1059.
51. Smagorinsky (1972), pp.
52. Specifically, "a few scientists can be found who privately
suggest that because of complex feedback phenomena the net effect of increased CO2 might be global cooling." Abelson
53. "Our confidence in our conclusion... is based
on the fact that the results of the radiative-convective and heat-balance
model studies can be understood in purely physical terms and are verified
by the more complex GCM's. The last... agree reasonably well with the
simpler models..." National Academy of Sciences
(1979), p. 12.
54. National Academy of Sciences
(1979), pp. 2, 3; see Stevens (1999), pp.
148-49. For Manabe’s account see also Stokstad
(2004). Hansen's model was not published until 1983.
55. Hansen et al. (1981); for
details of the model, see Hansen et al. (1983). I heard "march
with both feet in the air" from physicist Jim Faller, my thesis adviser.
56. Doubling: e.g., Manabe and
Stouffer (1980); additional landmarks: Washington and Meehl
(1984); Hansen et al. (1984); Wilson and Mitchell (1987). All three used a "slab" ocean 50m or
so deep to store heat seasonally, and all got 3-5°C warming for doubled CO2.
57. National Academy of Sciences
(1979), p. 2.
58. Manabe et al. (1979), p.
59. Manabe, interview by P. Edwards, March 14, 1998. The
time steps were explained in a communication to me by Manabe, 2001. The short paper is Manabe and Bryan (1969); details are in Manabe (1969); Bryan (1969).
60. Bryan (1969), p. 822.
61. Manabe et al. (1975); Bryan et al. (1975); all this is reviewed in Manabe (1997).
62. Manabe et al. (1979).
63. Washington et al. (1980),
quote p. 1887.
64. Hoffert et al. (1980); Schlesinger et al. (1985) ; Harvey
and Schneider (1985); "yet to be realized warming calls into question a policy of 'wait and
see'," Hansen et al. (1985); ocean delay also figured in Hansen et al. (1981); see discussion in Hansen et al. (2000), pp. 139-40.
65. [note omitted]
66. Bryan and Spelman (1985);
Manabe and Stouffer (1988).
67. Broecker (1987),
p. 123. For example, the GFDL group, Manabe et
al. (1991), found that increasing CO2 by 1%
a year, compounded so that it doubled in 70 years, produced a 2.4°C
global temperature increase, whereas the equilibrium response was about
4°C. See Manabe and Stouffer (2007), pp.
388-92. Hansen et al. (1988); "cannot now be made": Kerr (1989a), p. 1043.
68. Schlesinger and Mitchell
(1987), p. 795.
69. Mitchell (1968), p. iii.
70. Gates (1976); another attempt (citing the motivation as seeking an
understanding of ice ages, not checking model validity): Manabe
and Hahn (1977).
71. The pioneering indicator of variable tropical seas was coral
studies by Fairbanks, starting with Fairbanks and Matthews
(1978); snowlines: e.g., Webster and Streten (1978); Porter (1979); for more bibliography, see Broecker (1995), pp. 276-77; inability of models to fit: noted e.g.,
in Hansen et al. (1984), p. 145 who blame it on bad CLIMAP
data; see discussion in Rind and Peteet (1985); Manabe did feel
that ice age models came close enough overall to give "some additional confidence" that the
prediction of future global warming "may not be too far from reality." Manabe and Broccoli (1985), p. 2650. There were also
disagreements about the extent of continental ice sheets and sea ice.
72. COHMAP (1988)
(Cooperative Holocene Mapping Project); also quite successful was Kutzbach
and Guetter (1984).
73. MacCracken and Luther
(1985), p. xxiv.
74. "enigma:" Broecker and
Denton (1989), p. 2468.
75. Manabe and Wetherald
(1980), p. 99.
76. MacCracken and Luther
(1985), see pp. 266-67; Mitchell et al. (1987); Grotch (1988). A pioneer climate change model for one region: Dickinson et al. (1989).
77. Idso (1986); Idso (1987).
78. E.g., "discouraging... deficiencies" are noted and
improvements suggested by Ramanathan et al. (1983), see p.
606; one review of complexities and data deficiencies is Kondratyev
(1988), pp. 52-62, see p. 60; Mahlman
(1998), p. 84.
79. Manabe, interview by Weart, Dec. 1989.
80. Oreskes et al. (1994); Norton and Suppe (2001).
81. Zonally averaged cloud climatology: London
(1957). Schlesinger and Mitchell
(1987); McGuffie and Henderson-Sellers
(1997), p. 55. My thanks to Dr. McGuffie for personal communications.
82. Dickinson (1989), p.
83. The 1990 Intergovernmental Panel on Climate Change
report drew especially on the Goddard Institute model, Hansen et al. (1988).
84. A brief history is in Kiehl
et al. (1996), pp. 1-2, available here; see also Anthes (1986), p. 194. Bader et
al. (2005) summarize the interagency politics of the project.
85. Cess et al. (1989); Cess et al. (1990) (signed by 32 authors).
86. Boer et al. (1992), quote
87. Albrecht (1989), p. 1230.
88. Kalkstein (1991); as cited
in Rosenzweig and Hillel (1998).
89. Purdom and Menzel
(1996), pp. 124-25; cloudiness and radiation budget: Ramanathan et al. (1989).
90. Hansen et al. (1992), p.
218. The paper was submitted in Oct. 1991.
91. Carson (1999), p. 10; ex. of
later work: Soden et al. (2002).
92. Mitchell et
al. (1995); similarity increasing in recent decades: Santer
et al. (1996). For causes of modern variations see Hegerl
et al. (2007). During the war most measurements were by US ships which
measured the temperature of water piped from the sea into the engine room.
But after 1945 a good share of data came from UK ships, which dipped a
bucket in the ocean; the water in the bucket cooled as it was hauled aboard; see Thompson et al. (2008). Note that in IPCC (2007b), p. 11, the 1940s-1950s is the only element of the 20th century temperature record that the models failed to match.
93. The 1990 report drew especially on the Goddard
Institute model, viz., Hansen et al.
(1988); the Hadley model with its correction for aerosols was particularly
influential in the 1995 report according to Kerr (1995); Carson
(1999); "The probability is very low that these correspondences could
occur by chance as a result of natural internal variability only." IPCC
(1996a), p. 22, see ch. 8; on problems of detecting regional variations,
see Schneider (1994). The "signature"
or "fingerprint" method was pioneered by Klaus Hasselmann's
group at the Max Planck Institute, Cubasch et al. (1992); see also, e.g., Hasselmann (1993). See also Santer
et al. (1996).
93a. Hoffert and
Covey (1992). For Cretaceous measures etc. see also below, note
94. Corals: Guilderson et al. (1994) (the group leader was Richard Fairbanks). Thompson et al. (1995),
quote p. 50. Prediction was Rind and Peteet (1985). Another temperature measurement that shook paleoclimatology came from the fraction of noble gases in ancient groundwater: Stute et al. (1995). Farrera et al. (1999) reviewed data that "support the inference that tropical sea-surface temperatures (SSTs) were lower than the CLIMAP estimates." See also Crowley (2000), Krajick (2002) and Bowen
95. A similar issue was a mismatch between GCMs
and geological reconstructions of tropical ocean temperatures during warm
periods in the distant past, which was likewise resolved (at least in
part) in favor of the models, see Pearson et al. (2001); the sensitivity of tropical
climate was adumbrated in 1985 by a Peruvian ice core that showed shifts
in the past thousand years, Thompson et al. (1985); new data: especially
Mg in forams, Hastings et al. (1998); see Bard
(1999); Lee and Slowey (1999);
for the debate, Bradley (1999), pp. 223-26; see also discussion
in IPCC (2001a), pp. 495-96. On later work see
Jansen et al. (2007); Kutzbach
(2007); Webb (2007).
95a. Manabe, interview by Paul Edwards,
March 15, 1998, AIP; Manabe and Stouffer (1988).
96. Shackley et
al. (1999); Dalmedico (2007), p. 142; J.
Fleming, essay online
here re Cess et al. (1989).
96a. These results helped convince
me personally that there was unfortunately little chance that global warming
was a mirage. To be sure, the models still had a long way to go before
they would be fully consistent with one another or with the (still uncertain)
data. "Landmark": Rahmstorf (2002), p. 209, with refs.
97. Kerr (1997) (for NCAR model of W.M.
Washington and G.A. Meehl). Boville and Gent (1998) reported "The fully coupled model has been run for 300 yr with no surface flux corrections in momentum, heat, or freshwater." Also Carson (1999),
pp. 13-17 (for Hadley Centre model of J.M. Gregory and J.F.B. Mitchell).
98. E.g., Ganopolski and
99. Manabe and Stouffer
100. Ice ages without flux adjustments, e.g., Khodri et al. (2001).
101. Levitus et al. (2001);
Barnett et al. (2001) (with no flux adjustments);
Barnett et al. (2005) with two high-end models
and much better data (from Levitus's group), concluding there is "little
doubt that there is a human-induced signal" (p. 287). Hansen
et al. (2005) found that "Earth is now absorbing 0.85 +/- 0.15
Watts per square meter more energy from the Sun than it is emitting to
space," an imbalance bound to produce severe effects. BACK
101a. Gramelsberger (2010), p. 237. BACK
102. Edwards (2010), p. 419. BACK
102a. "fraught:" Li et al.
(1995); for background and further references on "anomalous absorption," see Ramanathan and Vogelman (1997); IPCC (2001a), pp. 432-33.
103. Lindzen et al.
104. A classic experiment on cloud parameterization
was Senior and Mitchell (1993). Le
Treut et al. (2007), p. 114; IPCC (2001a), pp. 427-31; Randall et al. (2007), pp. 636-38.
104a. Spencer and Christy (1990); Manabe and Wetherald (1975).
et al. (2005); Mears and Wentz (2005);
Karl et al. (2006) (online here);
IPCC (2007a), p. 701. Allen and Sherwood (2008) used a different method to derive temperatures. Conceded: Christy and Spencer (2005).
105a. A why-didn't-I-think-of-that analysis by Fu et al. (2004) showed that the microwave wavelengths supposed to measure the mid-level troposphere had been contaminated by a contribution from the higher stratosphere, which was rapidly cooling (as predicted by models). See Schiermeier (2004b); Kerr (2004b). The coup de grace: Mears and Wentz (2005) found that the Alabama group had used the wrong sign in correcting for the drift of the satellite’s orbit. For fuller discussion and references see Lloyd (2012) and Edwards (2010), pp. 413-18.
105b. Manabe and Wetherald (1967). Criticism by Douglass et al. (2008) (other authors included long-time critics Christy, Pearson, Singer) was answered by Santer et al. (2008), quote p. 1703. For technicalities see http://www.skepticalscience.com/tropospheric-hot-spot.html. For a thorough history of the entire tropospheric hot spot question, see Thorne et al. (2011).
106. Pliocene: e.g., Heywood
and Valdes (2004); data doubts: Huber (2009);
“field is missing”: David Beerling, "Journal Club,"
Nature 453 (2008): 827. Paleocene/Eocene ("PETM"): Sluijs et al. (2006). And for the warm Paleocene era, "Climate models seem to lack key components or feedbacks that may enhance climatic sensitivity to radiative forcing at high greenhouse gas concentrations," Hollis (2009).
107. Varying parameters (from climateprediction.net
cooperative experiment): Stainforth
et al. (2005). See remarks in Jones
and Mann (2004), p. 28; Piani et al. (2005). N.b. A tiny fraction of the thousands of combinations of parameters can give a result with no warming; a slightly larger fraction give a horrendous warming of 10°C or even more. Neither extreme is consistent with evidence about ancient climates.
108. Pioneer Model Intercomparison Projects ("MIPs") were Cess et al. (1989) (see above) and Gates et al. (1999). Same meaning: Jean-Philippe Laforre, quoted in Dalmedico (2007), p. 146. See Le Treut et al. (2007), p. 118; Randall et al. (2007), p. 594; Lawrence Livermore National Laboratory, "About the WCRP CMIP3 Multi-Model Dataset Archive at PCMDI," on the Livermore Lab site.
109. Kennedy (2001). Presumably he meant recent and complex topics, not simple scientific facts nor long-accepted theories such as relativity.
110. Gavin Schmidt, "The IPCC model
simulation archive," realclimate.org (posted Feb. 4, 2008), online
here. Instability: e.g., Dalmedico (2007), pp. 137-38.
111. Lahsen (2005a),
p. 916, see p. 906.
112. Andrews et al. (2012). Tom Wigley, quoted in Bill McKibben, "Global Warming’s Terrifying New Math," Rolling Stone, Aug. 2, 2012.
113. Paleoclimate sensitivity: Hegerl
et al. (2006) and an even lower upper limit according to Annan
and Hargreaves (2006), see also Kerr
(2006). A more recent landmark study by a multitude of groups, PALAEOSENS (2012), again converged on a range of 2-5°C. Earlier literature is reviewed in Royer
et al. (2001); some key studies were Berner
(1991) (chemical and other measures of high Cretaceous CO2)
and McElwain and Chaloner (1995) (using characteristics
of fossil leaves). Another, rougher, way to measure sensitivity, using the amount of cooling after major recent volcanic eruptions, again gave results within this range: Wigley et al. (2005). IPCC (2007b), p. 13 gives
a set of "likely" ranges depending on emission scenarios, with
the lowest "likely" (5% probability) global mean temperature
1.1°C and the highest 6.4°C. These are for the decade 2090-2099,
but the decade that would see doubled CO2 depends
on the economic scenario.
114. Black carbon: Ramanathan
and Carmichael (2008). Pearce (2007c),
ch. 18; Stainforth et al. (2005); Meinrat
et al. (2005); Schwartz et al. (2007);
Roe and Baker (2007). Multi-model study: Sokolov
et al. (2009). See also Fasullo and Trenberth (2012). Another important study using a combination of computer model and observational results reported that climate sensitivity was probably more than 3°C: Sherwood et al. (2014).
115. Uncertainties: Stevens and Bony (2013). IPCC (2014a), p. 16.
© 2003-2016 Spencer Weart & American Institute of Physics