This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Joseph Smagorinsky by Spencer Weart on 1989 March 1, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/5056
For multiple citations, "AIP" is the preferred abbreviation for the location.
Brief overview of the history of numerical general circulation models from about 1950-1970, centering on Smagorinsky's group, including Syukuro Manabe, at the U.S. Weather Bureau, the Institute for Advanced Study, Princeton, and the Geophysical Fluid Dynamics Laboratory, Princeton. Notes the interactions with weather prediction and climate change concerns, especially carbon dioxide warming (greenhouse effect).
I wanted to ask you who the guys are, what the main developments were. The main institutions that you knew of personally in this whole broad area of climatology, applying computers to the weather, all this kind of thing.
OK, well, to get the proper perspective, you have to drop back before computers. If you, say 30 or 40 years ago, were to ask me who's a climatologist, and I mentioned some names, then you would find that they were all descriptive geographers, who compiled statistics and made maps and did things of this sort. That was the nature of climatology.
I hadn't thought of them in terms of geographers.
They were for the most part geographers. Some people were in transition, like Helmut Landsberg, an interesting character. But the point was, that was climatology. Climatology was descriptive, and it was essentially statistical in nature. It was hardly physical. And what a climatologist in those days was mainly interested in was, what was happening between the ground and the top of my head. That was climatology, because of the application—first of all, that was where the human experience was. Second of all, most of the applications had to do with agriculture, with human health and human activities in general.
So there wasn't any explanation, it was oriented towards practical —
That's right. Now, the field of understanding the general circulation was a separate field for the most part. That was more physical. But the two were not often connected. Whereas what's happening at ground level is very much determined by things that happen in the free atmosphere.
In a way, this is the story of the penetration of geophysics attitudes into more and more areas.
Which is something we find all through. There are some common themes in geophysics, and one is the tendency to go from the descriptive to the explanatory in many fields.
Well, the transition didn't occur directly. The transition was a little indirect for maybe a decade. That is, people who understood the general circulation of the atmosphere, as it was understood in the forties and fifties, Rossby and Harold Jeffreys.
Those are people we would consider real geophysicists even back then.
Yes, well, Rossby was a meteorologist, he considered himself a meteorologist, but he also made tremendous inroads in the field of oceanography, and meteorology has very close connections with oceanography and hydrology. When the first general circulation model was made — that was Norman Phillips in the fifties — he was not thinking in terms of climate. He was thinking in terms of simply the general circulation. This was a direct outgrowth of the technology that had developed in making short range forecasts; he essentially built the model — and short range forecasts assumed that the atmosphere was inertial, that means the sun wasn't shining, there's no friction, nothing.
This was the first model in the sense of being the first computational model?
The first computational model. There had been some earlier work, things like Milankovitch's theory, hand-waving. But Phillips was interested in seeing whether he could make a model which was energetically self-sufficient so that once you started it, it could run on its own. Now, a normal short range forecasting model couldn't do this, because it left out both the heat sources and the heat sinks; you had to put them both in together.
The energy didn’t balance.
That's right, you had to put them both in together. That's the only way you could account for the longer term behavior of the atmosphere and the resulting structure. He did this in the middle fifties.
Where was he located?
Well, he was at the Institute for Advanced Study in Princeton. But he had also been at the University of Oslo. That really started the field of general circulation modeling. It was a very simple model, but it had some remarkable similarities to the atmosphere.
By the way, don't be modest, be sure you put your own work in.
Well, we haven't come to that yet. What I did immediately followed. First of all, my laboratory was established as a result of Phillips’ work and with von Neumann’s urging. There's a whole history of that but there's no sense speaking about that for the moment.
I hope we'll get a chance to.
The main change that I made in our first effort was to deal with a more general set of equations, which would admit important things that happened in the equatorial zone, that Phillips's equations wouldn't allow — he really couldn't deal with global phenomena. And that was an innovation in its own right. But in the first instance, it looked something like Phillips's model. It was also a two-level model, but was able to describe and account for phenomena which were near equatorial.
The primitive equations. These were equations that would allow non-geostrophic modes to operate. For example, his equations were invalid at the equator, actually invalid. So he had to keep away from the equator, naturally. But he got a lot of the beautiful essence of the real atmosphere. Because of World War II, there was a very large fund of aerological data, mainly in the hands of two scientists, Victor Starr at MIT and Jac Bjerknes at UCLA, who did diagnostic studies from this aerological data to determine what the nature of the structure of the atmosphere was — the thermodynamic structure as well as the mechanical kinematic structure. Then what we did after that, 1955, immediately, as we were already working with the two-level model, was to ask ourselves, what more would we need in order to describe not just the mean structure, which is deducible from two levels, but a much more detailed structure, which had never been done before: to go to a nine-level model which extended up into the stratosphere and dealt with a real stratosphere, a very simple one but a real one, unlike a two-level model, and one which permitted us to go down into the boundary layer of the atmosphere.
Was this done on MANIAC?
No, MANIAC did Phillips's calculations. Really not the MANIAC, it's the Institute for Advanced Study computer. There were several copies of it made. One of them was called MANIAC. This one was sometimes called the JOHNNIAC.
Oh, the JOHNNIAC, yes, I know that.
Its official name was the Institute for Advanced Study computer, which is the granddaddy of all modern day computers.
Right. And this model that you were doing —
Phillips did that model. What we did with the primitive equation model was on the 701, which was a commercial copy of the IAS computer, except that it had very sophisticated input/output equipment, which is how IBM got into the business; they had the card equipment. They didn't know a hell of a lot about computers, but they were the ones who were able to make the IAS computer a commercial success. As a matter of fact, I made two...well...
Yes, we're trying to be very brief. I’m just trying to get my framework lined up. I hope later on...
So that was the beginning of not only reconstructing the mean general circulation of the atmosphere, but it went into the second generation models, which were run in the early sixties. I'm trying to remember now, that went on to Stretch, the IBM 7030, which was in parallel in some respects, one of the first doing four-pipe calculations. That started to deal with things that were more like climate.
What were the other groups? Were there any other groups?
Well, there were two in the United States, two others. One was one that just got started at UCLA, Yale Mintz, who was a student of Jac Bjerknes's. They got started about five years after we did. And about that time, Chuck Leith at Livermore.
And abroad, were there any groups?
Well, none at that time. It was a little later, in the middle or late sixties. That's the way it started up. And so this business of general circulation models wasn't climate to begin with, but started to become climate when we started looking in greater detail as to what came out of the multi-level models.
You were starting out just trying to get a steady state.
Well, that was the first thing. It was a statistical steady state study; of course, it wasn't a real steady state, or else it would be dead. After it comes into some sort of statistical equilibrium, you then sample the statistics, the fluxes and wind system, thermal system and water vapor system — in the middle sixties. Slowly we started adding complexities to the model. First vertical structure and precipitation processes, essentially water processes. We got detail in the boundary layer as a result of this, things that a conventional climatologist would look at. I remember, I gave a lecture in the middle or late sixties to the Climatology Commission of the WMO, a whole bunch of old-fashioned climatologists. I came and I was asked to give a talk. It was interesting that they were very interested. I essentially said the same thing that Kennedy said in his Berlin address: "Ich bin ein climatologist." They looked and said, "You a climatologist? You're a modeler." That was really the beginning of the very rapid transition. [Interruption: telephone rings.]
We were talking about your address on climatology. I think where we really were was, where things were going in the fifties.
OK, so GCMs, that term was not coined by me, it was coined by, I think, people at NCAR.
Now people are using it for what they call the Global Climate Model, as well as General Circulation Model.
Yes, that's right. An all purpose acronym — it's whatever you want it to be. It's like a Shmoo. You know what a Shmoo is?
Yes. Al Capp (in "Li'l Abner" comic strip).
Yes. Anyhow, these started to look like climate models, as well as general circulation models. Of course they were still general circulation models. We were beginning to understand how the lower stratosphere interacts with the troposphere, in very important ways energetically. You can't account for what's happening in the troposphere without taking into account what's happening in the lower stratosphere. This was one of the results. For all practical purposes, all of the guys like me were effectively dynamic climatologists.
At what point did the GCMs become a climate model? What's the distinction?
It starts giving structure in the lower atmosphere. It gives information besides the circulation.
It's not that you're tracking climate changes over time or anything like that.
No, no, no. As a matter of fact, at first we weren't talking really so much about climate changes. We were trying to understand the contemporary global weather. The first departures, not departures really, but adding on things we'd like to look at, had been started effectively in the middle sixties. There was the CO2 question. Now, the CO2 question turns out to be a very old one. It goes back much before our work, it goes back to Tyndall, to about the time of the Industrial Revolution. People then already began to suspect — there was Tyndall, there's Chamberlin, Arrhenius and so on. This is a question that came and went from time to time. There was Callendar in the thirties.
Before we get into that, I want to ask you, as you got into climate, if new groups with particular interests started coming in.
Oh yes, definitely. The British started being interested. NCAR was interested in the early sixties.
They had somewhat different interests from the original ones?
Well, in the beginning, everybody figured, gee, the first thing they want to do is see if they can reproduce everybody else's results. So there was a period of self-education. Sometimes self-education went on for a hell of a long time before you really did anything new.
Another question about this period, still going over it superficially, what were the limiting factors? Was it computer power?
That was one of the limiting factors, but a lot was done even without it. See, you're not trying to do anything in real time. You can always afford to take five times longer than necessary, if you're willing to be patient. It's not like the requirements of a forecast operation, where if you take too long to make the forecast, there comes a point at which the event is over.
Was it limited by — ?
It was limited because it was a learning process. It wasn't obvious how to build these models.
It was not theory per se but the actual model building.
Yes, it was a model building process of a lot of trial and error, how to deal with water vapor, how to build a radiation model.
Well, parameterization is the sum total of all of it. It's the sum total of everything other than the large-scale hydrodynamics. Radiation is a parameterization. Boundary layer structure is a parameterization, condensation processes.
That's just a word for all the problems.
That's right. I mean, you have equations that look like the classical hydrodynamic equations. Then you have terms which are a single letter, usually because you don't know how to describe it. You have R for radiation, but it's very very complicated, and you may have P for precipitation, but it's very very complicated, and how you relate this to, essentially, processes that happen on a resolution much less than the resolution of the grid, in some cases down to molecular size —
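[Editor's illustration: the kind of equation described here, with its single-letter parameterized terms, might be written schematically as below. The symbols are illustrative, not Smagorinsky's own notation.]

```latex
% Prognostic temperature equation: resolved large-scale dynamics
% plus single-letter terms for parameterized sub-grid processes
\frac{\partial T}{\partial t}
  = \underbrace{-\mathbf{v}\cdot\nabla T}_{\text{resolved advection}}
  + \underbrace{R}_{\text{radiation}}
  + \underbrace{P}_{\text{precipitation, condensation}}
```

[Each single-letter term stands for processes occurring below the grid resolution, which must be expressed in terms of the resolved variables.]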
So this is where your real difficulty comes in.
That is what the real problem is, with climate and model building.
Yes, still is.
It's essentially the name of the game, literally. Let me back off a little. We recognized the whole class of problems we had to deal with, in the GFDL, and one of them was how to build a radiative model, a sub-model essentially. Radiation takes a good portion of the time. Radiative transfer has two main components: the incoming solar radiation, which is broad spectrum and very complicated, and the outgoing long wave radiation. It's specialized, depends on the properties in the atmosphere, what happens to this radiation. So one of the first things I did was to get somebody working on radiation to do this, Syukuro Manabe. I got him before he had his PhD, brought him over about 1958 or '59. He built a radiative model. In the middle sixties, one of the hottest subjects was weather modification. And an august committee of the National Research Council asked what was hocus pocus and what was real. This is after about 15, 18 years of speculation, largely irresponsible claims. A lot of hopes were built about weather modification. A lot of Soviet people were interested in it. Some American groups were interested in it. [Edward] Teller was interested in it. In fact, Teller was on that committee, and I was on the committee. So the whole question of natural versus anthropogenic change came up. Let's see, I had been involved with Revelle a little earlier on this, I was just a young guy and I was involved in writing a report, but this committee was headed by Gordon Dunn(?) in the middle sixties, and a guy by the name of Ed Tobb(?) who was from the [National] Science Foundation, a bureaucrat from the Science Foundation, was on this committee, a very very smart guy. And he said, "What about all this business of CO2? Do we really know how CO2 changes and affects the atmosphere?" I said, "Well, there's a whole history on this, a lot of hand waving and a lot of back of the envelope calculations." I cited him chapter and verse, from 1960-something.
I said, "But there's something that can be done now that could never be done before, namely, one can make a very accurate radiative calculation in a way that it's never been done, and it's mainly because such sub-models are being constructed for the first time."
Radiative sub-models, in connection with climate models. And so I went back to the lab, and asked Syukuro [Manabe] if he would do a very simple radiative calculation. He said, "Why just make it radiative? Let's make it radiative convective," which brings in a little hydrodynamics through the back door, and it turns out to be very essential, because it provides a very powerful negative feedback to the system. Very important. [Interruption]
Well, the heating is taking place at a very particular place, so convection matters.
Well, especially because the heating is below. The heating makes the atmospheric column convectively unstable and transports heat away from the ground, and redistributes it into the upper troposphere.
Where it radiates, yes.
And cools off the ground. So it's a negative feedback. So he had constructed a model with a colleague (although he was the primary guy), fellow by the name of Strickler who had a radiative convective model, and he did the simple experiment of doubling CO2 uniformly in the column.
That's where the habit of doubling CO2 came from?
Leaving everything the same. And that was the first really definitive one dimensional calculation. It's a one dimensional calculation, in the vertical only.
This is before one had a real clear signal from the Hawaii measurements, that the CO2 was increasing?
No, no, that was already known — I'm pretty sure it was.
Yes, I guess it would have been.
That's the Keeling stuff, yes. I'm pretty sure that it was already known, a much shorter record, of course. But the doubling was just a convenient thing. And then the idiot number that comes out of this is the temperature. You make a temperature calculation for the entire vertical column as a result of radiation and convection. You know what I mean by an idiot number, like an idiot light? You look for a one-parameter description of a very complicated process.
In this case it would be ground level temperature.
Ground level temperature. And then what you do is, you contemplate this one number, which is done for a mid-latitude calculation, a column, and translate this to be the global mean. Now, this is one of the big mistakes that were made: to talk about a global mean. It made sense at the time, because that's all we could do. But this practice has been carried on, and we still talk about global mean temperature, and it's a fictitious number. Unfortunately there's no way of measuring it. And it misleads people. Anyhow, he did that calculation and it was published. I have the publication, the report of that committee.
This was not an internal but a committee report.
A committee report. So that was the first time when people started looking at climate change, but not climate change in the secular sense, but climate change due to a discontinuous change in some of the key parameters which are normally specified. The thing about CO2 is, it is so thoroughly mixed in the atmosphere that there's virtually one number that describes it.
Unlike most of the things.
Most things. OK, it does vary some in the vertical, and it will vary some on the horizontal especially on a seasonal time scale.
Yes, but over the long term —
— long periods. There are very few such numbers that are well described by a single parameter. So what you're looking at are two equilibrium states. They're both equilibrium states, in statistical equilibrium. And a lot of bad experiments have been done by others, where they started off and didn't have enough computer time, and came to equilibrium too quickly, said they had a new answer and they didn't. That calculation was not only more accurate but a discovery of the negative feedback in convection. See, the convection in the ocean is a positive feedback, because heating occurs at the top, and it's a stabilizing property, which means that it's a positive feedback because it stratifies the upper layer of the ocean, and therefore a small amount of heat change is a very large temperature change. It's just the opposite in the ocean from the atmosphere. So this was still a one dimensional calculation. Some years would follow before one made a three dimensional calculation where you not only got back the same number (you could only get it back by taking a global mean) but you also got back some of the structural changes, some regional changes, vertical structure. Oh, and that first calculation also gave the very simple, straightforward result that just the opposite happened in the stratosphere: because the CO2 in the stratosphere is emitting, when you increase it in the column you increase the emission, and you cool the stratosphere.
Cool the stratosphere.
You know all this. And that was the first time that calculation was actually made, and the cooling incidentally is much greater, well, significantly greater than the warming; of course the density is much less. So these [people] started to look like climatologists, and although the objective of… (off tape) …originally the first purpose everybody agreed to was to extend the range of experiment as a practical science, to whatever its limits might be. As for the second purpose: if you say, what was the second objective, was it to understand climate change, climatic change? No, it was to understand contemporary climatic structure and essentially the general circulation.
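[Editor's illustration: the flavor of the one-dimensional "idiot number" calculation described above can be sketched with a textbook grey-atmosphere formula. This is not the Manabe-Strickler radiative-convective model; the grey approximation and the optical-depth values below are illustrative assumptions only.]

```python
# Illustrative toy only: a textbook grey-atmosphere greenhouse formula,
# standing in for the kind of one-dimensional, one-number calculation
# described in the interview. It is NOT the Manabe-Strickler model,
# and the optical depths are made-up illustrative values.

T_EFFECTIVE = 255.0  # Earth's effective emission temperature, in kelvin

def surface_temperature(tau):
    """Ground temperature of a grey atmosphere in pure radiative
    equilibrium with total infrared optical depth tau, using the
    standard two-stream result T_g = T_e * (1 + 3*tau/4)**(1/4)."""
    return T_EFFECTIVE * (1.0 + 0.75 * tau) ** 0.25

# Treat "doubling CO2" as a modest bump in column optical depth.
t_control = surface_temperature(2.0)
t_perturbed = surface_temperature(2.2)
print(f"control: {t_control:.1f} K")
print(f"perturbed: {t_perturbed:.1f} K")
print(f"warming: {t_perturbed - t_control:.1f} K")
```

[Note that pure radiative equilibrium overheats the ground relative to the observed mean near 288 K; removing that excess is precisely the job of the convective adjustment, the negative feedback discussed above.]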
People weren't really thinking that much about climate changes.
No, this was a convenient afterthought when everybody realized it was an important question to ask. Certainly the third objective, but nobody will admit it.
I have to be careful, because obviously you can tell me a lot about this and I really just want to try to get a quick overview now. Let me just ask you certain questions.
OK, go ahead.
Tell me what some of the main, just hit on three or four of the main developments and the main groups of people involved. Let's not take it past 1970. Certainly let's not talk about recent things.
Well, one of the important perceptions that was validated by these complex comprehensive models was the snow-albedo feedback, a positive feedback, which was enunciated by Budyko in a hand-waving argument. I don’t remember when it was. It could have been ‘50s or ‘60s. And this was validated by building in snow cover as one of the properties of the model. Namely, if you have precipitation where the temperature was cold enough, we called it snow and said there was snow cover.
Changed the albedo.
That's right. And that's where that feedback came, so that was important to verify it. That's another correction, and not only a correction in terms of the global effect, but also even more with important regional consequences. Then two more groups started in the United States. I'm not sure the order of it.
Yes, I can find these out, I just need some clues.
One was a group at Oregon State University developing the ?? ...And another is a group headed by Jastrow, now headed by Hansen. And that was a different group from the other. It was made up completely of non-meteorologists. This was both a strength and a weakness. It was a strength because they brought in great knowledge about fundamental physical processes, because of where they came from, but they were very naive certainly in the way they used them, how to construct an experiment, what’s reasonable, the intuitive things. So they did some silly and in some cases misleading things.
You mentioned the British meteorological service.
No, there was a meteorological service, but they had a long tradition, going way back to the fifties, of numerical modeling. They had some starts and stops on this, but they were building significantly, approaching the problem a little bit differently. The problem in all of this is that there’s a strong tendency to imitate.
Mainly, if somebody is known to be successful, they do it his way, rather than doing it a different way that's better.
Was there a lot of borrowing of codes?
No, not much. There is some of that, but not — as a matter of fact, yes, if I remember correctly, both the Argonne and the NASA group borrowed the basic framework of the [?] model.
Any other groups abroad that were —?
Well, only recently in the last twelve years or so, there's the Max Planck Institute.
No, I'm thinking of the older period.
Not in general circulation. Numerical forecasting, yes, but not general circulation. One of the side consequences of all of this was that the general circulation model, rather than the short-range numerical forecasting model, led us to the path of medium range forecasting.
Uh huh, of course.
Mainly because, beyond a few days, many of the processes that are important for very long-term variability of the atmosphere also operate to determine atmospheric variability.
Was this anticipated at all?
Not really. Von Neumann at one time sort of divided the problem into three parts, intuitively. This was done in 1952 or '53. But it happened by accident. And it happened here — no, it happened in Washington. We built this nine-level model. It was the first time anybody had built a model beyond two or three levels, with radiation. An energetically self-sufficient model. And we were looking for ways to test it. What could we test it against? We were testing it against the general circulation characteristics, and decided that — the lore in those days, in the early sixties, was that you couldn't predict beyond two days, for the atmosphere.
Yes, that is very erratic.
But it's an essential part of this. This is where it gets into play. This is the early sixties, no, about 1959. So we decided, what the hell, if it's going to work for long periods, it should also work for short periods, and the best thing the model can tell us is that a lot of the things that are built into the model don't have much of an effect. The model should be able to tell us this, and at the same time —
— by varying things —
— whatever the predictions are that we should also then verify the variability of the model. So I know I talked about these results in 1960 in Tokyo. We made forecasts for a couple of days, and it told us things that we hadn't seen before, in precipitation forecasts that came out.
For how far ahead was it?
This was for a day or two.
For a day or two, using your GCM.
That's right. We got structure in the lower part of the atmosphere as a consequence of the equations. And that was really quite encouraging. But the purpose for doing this was purely to test the general circulation model, the climate model. And so within the next few years after we were able to do this on Stretch [computer] — when the model started running, I went on vacation. They didn't turn it off, and it ran on for four days, and instead of completely falling apart, there was a lot of predictability[?], there was a lot that was wrong, but there was predictability. So there must be inherent predictability or we wouldn't be getting this much. Eventually we made changes in the model, trying to get its prediction better, to four days. It's a famous paper. Those results were given in Moscow in '65. We improved the model, ran it on other computers, and ran it out to ten or fourteen days, where in some cases we found some predictability still. What we did in the middle sixties already came to the notice of such people as Charney and Eliassen and some of the others. The idea of course had already been talked about earlier, in response to a political motivation in the early sixties; it was being talked of in the middle sixties. They interested me in this; that's when Charney tried to get various groups, my group, Harakarwa [?], Jeffries[?], to do full experiments with the model. But we didn't use our most sophisticated model. We couldn't run it long enough. We had to get an equilibrium...[?]. But it was then obvious that there was some inherent predictability provided by the general circulation model, up to ten, fourteen days. This was the basis on which the European center [for medium-range weather forecasts] was organized, which was around 1967, '68. We started talking it up. It became a reality in '70, I guess, something like that.
Sort of serendipitous?
Yes, well, that's the way a lot of things go. I don't know, I can get you the dates on all of this. I may be off by a few years. But our parameterizations were in fact the ones that were first used by the European center when it started. So you see how a lot of these things come in through the back door. It is serendipitous, in the sense that skill comes from seeing an accident, recognizing it as something there. It's not that we're so smart that we knew where we were going. The skill comes from recognizing it.
Yes, well, there's a lot of that in the history of science. “Chance favors the prepared mind.”
I hadn't heard that.
Hadn't you? “Chance favors the prepared mind.” It was a physiologist who said that. Well, tell me one other important development in the time left.
One other? Well, in a way the CO2 question has been a driving force, as an application. It's turned out —
Well, since the first convective radiative calculation in the middle sixties, it's been a driving force. You can see what Manabe has done; a good part of what happened at the GFDL in terms of climate modeling had to do with that. You're talking about the geophysics history. I'll give you an example of an interaction. We got started in '55, and it became evident within a couple of years that the techniques that we were developing should work in the ocean too. Except that the ocean was much less understood, much less observed than the atmosphere. The atmosphere was only fortuitously well observed because operational systems required observations — not very high quality observations, but lots of them. And the other thing, it became obvious that we couldn't talk very long about the atmosphere without worrying about the ocean, because that's the long term memory of the system. So fairly early on, '57, late '58, I tried to recruit a well-known oceanographer. I worked on this Japanese guy — what's his name? He recently died. He said he was interested, and he said, "How long?" I said, "A minimum of two years. It will take you one year to understand the problem and a second year to do it." He said, "I can't do it, my wife is ill, I can't stay away that long." He couldn't do it. Then I tried to work on a German. He said he was coming. It turns out that his game was, he was using my offer to get a promotion. So I finally ended up going after a meteorologist who had been working in an oceanographic institution, Kirk Bryan, and I told him what I wanted. I wanted him to apply the methods to well-known oceanographic problems such as the Gulf Stream problem and things like this, and also to start thinking in terms of coupling it to an atmospheric model. He did that, and he of course did the barotropic problem, got latitude gyres, and then he dealt with a wind-driven ocean. He started working cooperatively with Manabe.
Before we left Washington, which was '66, he already had some preliminary coupled model work dealing with the program, which has very slowly evolved. A lot of his students, and now a lot of his competitors around the world, have continued. Ocean coupling is essential.
Yes, that's a big one.
You can't compute the general circulation of the atmosphere otherwise. In the case of CO2, the ocean is important in still additional ways because —
— as a reservoir.
As a reservoir, and an important reservoir, one which is possibly capable of a positive feedback having to do with —
— capable of doing all kinds of things.
— yes, well, I mean, the solubility of CO2 is altered by temperature and it’s possible to think of positive feedback. As a matter of fact, this was first enunciated in 1899.
Oh, is that right?
By Chamberlin, when he was trying to understand ice ages.
He was a geologist.
Yes, I know.
He was president of the University of Wisconsin. Anyhow, the oceans are very very important to the CO2 problem. But the point I'm trying to make is that the CO2 problem, amongst other applications of these models, has been one of the driving influences.
I hadn't realized that. I have to stop now, because otherwise I'll keep you talking all day, and I think I've taken up enough of your time for now. Let me ask you, though, this is all very interesting, do you mind if I keep this and make it available eventually to other people?
Yeah, it's OK. I didn't say anything incriminating.
OK, I just wanted to do that.
See also: J. Smagorinsky, "The Beginning of Numerical Weather Prediction and General Circulation Modeling: Early Recollections," Advances in Geophysics 25 (1983): 3-37.