This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Warren Washington by Paul Edwards on 1998 October 28 and 29, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/33098
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview, Warren M. Washington discusses his career in climate research. Topics discussed include: his childhood and family history; race relations; William E. Milne; meteorology; Fred W. Decker; Stanford Research Institute; Hans Panofsky; Oregon State University; computer climate modeling; numerical forecast modeling; United States Air Force; Akira Kasahara; National Center for Atmospheric Research (NCAR); Chuck Leith; Courant Institute; Aksel Wiin-Nielsen; Phil Thompson; Walter Orr Roberts; Geophysical Fluid Dynamics Laboratory (GFDL); Yale Mintz; Akio Arakawa; American Association for the Advancement of Science (AAAS); Francis Bretherton; University Corporation for Atmospheric Research (UCAR); Department of Energy (DOE); Burt Semtner; University of Michigan; Suki Manabe; climate change; global warming; Joseph Smagorinsky; National Oceanic and Atmospheric Administration (NOAA); National Advisory Committee on Oceans and Atmosphere (NACOA); Fred Singer; Anne Gorsuch Burford; Larry Goulder; Intergovernmental Panel on Climate Change (IPCC); NSFNET; National Science Foundation; Congressman Obey; affirmative action.
We’re here in Warren Washington’s office at the National Center for Atmospheric Research in Boulder, Colorado. It’s the 28th of October, 1998. Diane Rabson, the archivist at NCAR, is sitting in on this interview. This is Paul Edwards. So we’re here at the beginning of an oral history interview that’s going to cover your entire career. I note that there are several other long interviews with you, notably an oral history conducted by Earl Droessler about eight or nine years ago that is available through the NCAR Archives. So I’m going to go over ground that has already been covered, but to focus more on other things. Nevertheless, we’ll just do this in a kind of chronological sequence and we’re going to get some other things that you’ve talked about with other people, of course. Let’s start at the very beginning with your early life. This is something that Droessler spoke with you about at considerable length. I gather you grew up in Portland, Oregon. Did you live there all your life?
Yes, I did until I left for college. I was born there and I grew up there. I went to Jefferson High School, and then went on to Oregon State College. It was a college at that time — Oregon State College. Now it’s a major Oregon university.
You told Droessler some interesting stories about how you got interested in science. I’m especially curious about how you got interested in meteorology and weather, that sort of thing. I gather the first thing was a chemistry teacher.
I didn’t have any great interest in meteorology at first. I really got interested in chemistry because I had a very good teacher in my junior year of high school. She was an excellent teacher. She stimulated students to do stuff on their own and not just be passive in the class. And I think that’s where I really started thinking about a career in science. In my senior year, I took physics and I enjoyed that even more. So I basically went on to college with the desire to major in physics.
I don’t know. I just thought that it was always intriguing to me to kind of understand nature. I thought physics was a very logical science because it really teaches you how things work in the world.
Before we get too far into that part of your life, let me ask you about something — I’m raising this because it’s something you’ve done a lot with later in your life, the promotion of — helping minority students and scientists develop their careers, something you’ve done a lot with in the last two decades. What were race relations like in Portland when you were growing up and especially around this issue of black people in science?
Let me go back a little further. My mother and father went to college. My father went to a Black college in Alabama called Talladega, which is one of the Black Colleges that he’d talk about. He graduated in 1928, but like most educated people, they really couldn’t get jobs in the South at that time. I think he had hoped to go to medical school, but he didn’t really do that either. But he was very fair-skinned and he kind of felt caught in a bind living in Alabama. He got a job as a guard in a private school in Seattle for a while and then he decided to move to Oregon.
Do you know why...Oregon?
I think the idea was he thought things would be a lot better for him in Oregon. I am not sure why he moved to Oregon. He worked various jobs, and in the early 1930s he got a job working for the Union Pacific Railroad. One interesting thing that most people don’t realize is that most of the waiters — the Redcaps — were college graduates. They could live a pretty good middle-class lifestyle — they could buy a home, they could make lots of money off tips and so on. On my mother’s side, my grandfather came from Richmond, Virginia, and it appears from the census records that they were not slaves but had their own businesses. The family moved to California. His brothers went earlier than he — they were in the Gold Rush in the late 1800s. He and his mother moved to San Francisco and were in the famous earthquake of 1906. My grandfather Morton was employed [?] as a Pullman car conductor on the run from Oakland to Portland, Oregon — that’s how he met my grandmother, Bessie Morton. There were many who had gone to college at the turn of the century. Bessie Morton had 15 brothers and sisters, and many of them went to a Black college in Texas. But they weren’t ever able to use their college education in the normal sense of being able to get a job that required that sort of thing. This may sound confusing, but in spite of the fact that they couldn’t ever benefit from it, they always encouraged their kids to go to college. My mother went to the University of Oregon and her sisters went to Berkeley, and this was in the early 1930s. So there was a strong sense to push young middle-class Black people into going to college. What happened in my case was kind of accidental. I was working during high school as a dishwasher in a hospital in Portland, the Good Samaritan Hospital, and I asked the dietician where I could go to college. And she said that there is a Good Samaritan Hospital in Corvallis and that she could try to get me a job there.
So she called and that’s how I ended up at Oregon State. So it wasn’t any great thought about which school I went to. But what I like to tell my kids and grandkids is that you could go to college fairly easily on the West Coast, because the tuition at that time was $47.00 a quarter. So there was no reason not to — if you worked for $1.00 an hour, you could easily afford to go to college, work your way through college. That was very typical.
What did your parents study when they went to college?
My father studied history, and my mother, I’m not sure what she studied. I think she wanted to be a teacher. But she never did become a teacher. Because Black students at the University of Oregon could not stay in college housing, she lived with a family. The family became hostile toward her, so she did not finish her education. At that time, I think that most women stayed home and took care of the kids. I had four brothers, so there were five boys, and I don’t think she really entertained working much until we grew up.
What about race relations in Portland?
I didn’t get to that. It wasn’t pleasant. In fact, it wasn’t terribly unpleasant either. I did later visit the South, and I found the treatment of Blacks much worse and more overt.
You saw how it could be worse —
— it could be worse. In Portland I was called the typical epithets, and my father bought a home in a part of Portland where there weren’t very many Black families. I remember my mother telling me after I was grown that my father had to sit out on the front porch with a gun so that people wouldn’t try to force us to move. He wanted to establish his presence in the neighborhood when we first moved in. I think, though, that I basically grew up in a mixed neighborhood; as years went on, more and more Afro-Americans moved into the neighborhood. But I never considered racial issues to be an overwhelming problem. I think I got enough support at home emotionally as well as from friends. In fact, what was interesting is that the middle-class Afro-Americans — probably not that unusual for places like Seattle and Portland or San Francisco or L.A. in those years — were all kind of working-class families. It seemed in that generation all kids were expected to go to college. Not to say all, but many were expected to go to college. One of the disappointments for me was a pattern seen in many good high schools, and Jefferson was a very good high school: students did not take advantage of educational opportunities. As I took my physics, math, and chemistry courses, there were other minority students who did reasonably well in their coursework but didn’t feel confident enough to go on to college. And that was a disappointment to me, because if I was able to succeed, I’m sure that they could have. And they avoided going off to a place like Oregon State. Oregon State at that time (as I said in my earlier interviews) couldn’t have had more than eight or ten Black students or minorities. Most of them were on the football team. There was only one Black family living in Corvallis at the time.
Wow! Was that uncomfortable?
No. As I said, I grew up in kind of a mixed neighborhood, so I had white friends and Black friends and so forth. I think I learned how to cope in that world. I think the programs like the NSF-supported programs to attract underrepresented students, which we have here at NCAR in order to bring minority kids into the atmospheric sciences, can be successful. I think one of the things is that minority students sometimes need to learn how to live in a multicultural world, a world that isn’t all Afro-American or Hispanic or whatever. I think they need that kind of experience. I’m not saying it’s bad to go to an all-Black college or mostly Hispanic college; I think you just need to learn how to deal with the real world.
So: you’re in undergraduate college at Oregon State and you’re studying physics, and you have this roommate who’s on the football team. I seem to remember that from your other interview, is that right?

Yes, in my senior year I lived with two football players. It is true; they both did understand why I wanted to study physics.
Yes. If I can go back to the earlier topic: one of the things — I don’t think of it this way now, but you don’t know who your roommate’s going to be when you get to college, your first day of college. I walked into my room, I guess later, and the other student came in, from Eastern Oregon. I was the first Black person he had ever actually seen. He had seen movies and so forth. He and I got to be good roommates; he’s an osteopathic doctor [now]. It was an interesting experience.
Are you still in touch?
No. But that was kind of interesting. He looked at me and I looked at him... Where do you want to go at this point? How I got into atmospheric science?
Yes. And also anything significant about your undergraduate years that led you in that direction or —
I don’t think anything special. I did specialize a lot in mathematics as well as in physics. I took a lot of courses in mathematics. I could almost have had a degree in mathematics as well as physics. I think that really kind of made it easier for me to get into the modeling and theoretical side of atmospheric science, because I felt very comfortable with that part of it.
Did you have any exposure to computers as an undergrad?
Yes. In fact, Oregon State had a very famous mathematician there, at least he was famous — his name was Milne. In fact, if you look up the solution of ordinary differential equations, there’s something called the “Milne Method.” And he established a very strong applied mathematics department at Oregon State. He was able to attract a bunch of applied mathematicians, world-class mathematicians, and they in turn were able to attract one of the earliest mainframe computers. And I remember a programming course, either in 1955 or 1956, on something called an “ALWAC,” which was similar to the ENIAC — it was a later generation of one of those early electronic computers.
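For readers curious about the method he names, Milne's predictor-corrector scheme for ordinary differential equations can be sketched as follows. This is a minimal illustration, assuming an RK4 bootstrap for the three starting values (a common choice, not something specified in the interview):

```python
import numpy as np

def milne(f, t0, y0, h, n):
    """Integrate y' = f(t, y) over n steps of size h with Milne's
    predictor-corrector method (a sketch, not production code)."""
    t = t0 + h * np.arange(n + 1)
    y = np.zeros(n + 1)
    y[0] = y0
    # Bootstrap y[1..3] with classical fourth-order Runge-Kutta.
    for i in range(3):
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h * k1 / 2)
        k3 = f(t[i] + h / 2, y[i] + h * k2 / 2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    for i in range(3, n):
        # Milne predictor (explicit, uses four back values).
        yp = y[i - 3] + 4 * h / 3 * (2 * f(t[i], y[i]) - f(t[i - 1], y[i - 1])
                                     + 2 * f(t[i - 2], y[i - 2]))
        # Milne-Simpson corrector (one corrector pass per step).
        y[i + 1] = y[i - 1] + h / 3 * (f(t[i + 1], yp) + 4 * f(t[i], y[i])
                                       + f(t[i - 1], y[i - 1]))
    return t, y
```

The explicit predictor supplies a guess that the Simpson-rule corrector refines once per step, a structure cheap enough for the hand and early-machine computation of Milne's era.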
I just came across that name somewhere recently.
It was a typical vacuum-tube computer with paper tape and you programmed in hexadecimal machine language.
Was this something you did as part of a course?
Yes, we took a course on how to program for this. You know, when something like FORTRAN came along, which I only found out about after I got to Penn State, FORTRAN was a blessing. You could write out the equations in a form close to the original equations.
FORTRAN didn’t exist in the mid-fifties.
No. I don’t know when it was —
1957 was when FORTRAN was written.
Is that right? That was a wonderful invention for many of us who toiled under more primitive programming systems.
What year did you graduate?
1958, with a Bachelor’s. I stayed on for a Master’s.
And the Master’s was in Meteorology. The B.S. was in physics.
That’s right. Well, they didn’t have a formal meteorology program. In those days, meteorology was usually taught as part of a physics curriculum. So they had one physics professor there and his name was Fred Decker. That’s probably how I kind of got sidetracked a little bit because at the time I finished up my Bachelor’s, I had taken a few courses in meteorology. They were kind of advanced level courses, but they only had a few courses. You could get a Master’s in something called “General Science” with an emphasis on meteorology and that’s what I did.
What pulled you in that direction in physics? There are many ways to go in physics...
I don’t know. I think part of it [was that] I got a job. You know, up until that time I was washing dishes all the way through undergraduate and my first year of graduate school. And I got a job for $1.00 an hour working for this Fred Decker. But it was an unusual job which kind of got me into meteorology. He had this project with the Army to do weather radar scans over the Coast. They put this radar at the top of the highest mountain in the Coast Range, which is kind of a shallow mountain, 3,000-4,000 feet. And the only way you could get there in the winter was to hike up. Because they couldn’t get equipment up there in the winter, the fuel for operating the radar and heating the shed was taken up by truck in the summer. It was mostly snowed in during the winter. There is a funny little story. I shouldn’t branch off the question on these little things. The weather radar was right at the freezing line; it would often snow as much as three or four feet overnight — heavy, wet snow at the mountain peak. I was in a little room about this size [indicating his office], one of those Army trucks that housed the radar console. There was a gas-powered generator outside and the rotating radar dish and a little hut that you stayed in. Right next to it was a huge air defense dish that was used for looking for low-flying enemy aircraft. The military wanted to make sure there weren’t any airplanes flying at a low altitude to attack the U.S. One night it snowed hard, I’m sure it snowed maybe six feet that night, and I heard this crunch outside; it just missed my little hut. This huge, like 150-foot antenna fell, fell off the pedestal. What happened was that it iced up and the wind was really blowing hard and it came down. So I called up the Air Force with my radio and told them, “Your antenna just fell down.” I couldn’t believe the words I heard back. He said, “Oh, OK, we’ll do something about that.
Let me call you back.” So he called back and he says, “We’re coming up.” And I said, “You’re what?” Nobody would drive up to this place; there was no driving up to the peak. And sure enough, three or four hours later, I heard the sounds of these huge snowplows, which were shooting the snow off the road, one shooting it off in one direction and the other shooting it off in the other direction; a huge crane was also coming up there. The defense radar was completely unmanned. Some general in Washington must have said, “Get that thing back up there!” They came up there with about 50 troops, and they got that thing up, put it back, took the snow and ice off the antenna and got the radar going — they must have gone through some parts of the road that were under 20 or 30 feet of snow... But I was relieved not to be hit by this huge air defense radar. It was quite a day on top of the mountain by myself, watching the Army do its thing for national defense.
I can see why you ended up with an office job.
That did have some effect on me. I would go up on Fridays and come back on Sunday. You know, there wasn’t any television or anything else. It was a good place to study, though. All I had to do was make sure that the generator that powered the lights and the radar kept going so I could take pictures of the radar images. Every once in a while, the generator would poop out and boy, it was dark up there until I could restart the generator.
So that got you interested in meteorology, at least to some extent?
At the end of the following spring, Fred Decker, my thesis advisor, heard that a team of modelers was looking for a summer student at Stanford Research Institute.
Yes, Fred Decker. They had heard about me from Dr. Decker; apparently he told them that my theoretical thesis had been on mountain waves in clouds. I don’t know if you’ve ever seen a cap cloud over a solitary mountain. I was studying a cap cloud that was in a precipitating cloud.
Over a mountain peak?
Over a mountain peak. I was trying to understand the waves, because the physics is a little different: it’s actually a moist adiabatic lapse rate inside the cloud, whereas the conventional lee or mountain wave theory applied only to a dry adiabatic process. So I wrote a Master’s thesis on that subject and used the few observations available to verify the model. So Decker got me the job in the summer of 1959 at Stanford Research Institute (SRI) working for Manfred Holl. Holl got his Ph.D. at UCLA under Professor Holmboe, who worked with Bjerknes. He was a theoretician. It was a very interesting job at SRI; I was working on various theoretical things, getting some experience. Also, I learned something about how to do objective analysis, i.e., how to put station data on a horizontal mesh. At that time, objective analysis was a big issue.
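For reference, the dry and saturated adiabatic lapse rates contrasted here take the standard textbook forms (not from the interview itself):

```latex
\Gamma_d = \frac{g}{c_p} \approx 9.8\ \mathrm{K\,km^{-1}},
\qquad
\Gamma_s = \Gamma_d \,
  \frac{1 + L_v r_s / (R_d T)}{1 + L_v^2 r_s / (c_p R_v T^2)}
  \approx 4\text{--}7\ \mathrm{K\,km^{-1}},
```

where $L_v$ is the latent heat of vaporization, $r_s$ the saturation mixing ratio, and $R_d$, $R_v$ the gas constants for dry air and water vapor. Because $\Gamma_s < \Gamma_d$, rising air inside the saturated cap cloud cools more slowly than the dry theory assumes, so the wave solution inside the cloud differs from the dry solution outside it.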
Talk about that a little more. What did that mean at the time, and what were you involved in?
At that time, there was a lot of discussion about how can you get irregularly spaced station data onto a grid?
Why? This is something I’m very interested in understanding. Because of numerical forecast models?
That’s right. In the early years of modeling, people started to worry about it. In fact, my thesis adviser at Penn State was Hans Panofsky. He was actually the first one who ever did it. He worked with the famous Jule Charney, and if you look in the Compendium of Meteorology, you’ll see a reference to Hans Panofsky in Charney’s chapter. Basically what he did was a rather simple thing: you have a two-dimensional array in the “x” and “y” directions and you have irregularly spaced data. You fit a double Fourier series, made up of sines and cosines, to the data, and then you can use the fitted function to get the station data onto a regular two-dimensional grid. So it’s just a curve-fitting approach. And that is actually the first approach that was used, if you want to call it objective analysis, at Princeton University, working with the famous scientist John von Neumann and Jule Charney.
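The curve-fitting approach described above can be sketched in a few lines of modern Python/NumPy. This is only an illustration of the idea; the function name, the truncation, and the use of a least-squares fit are my assumptions, not details from the original work:

```python
import numpy as np

def objective_analysis(x, y, z, nwave, grid_x, grid_y):
    """Fit a truncated double Fourier series (sines and cosines in x and y)
    to irregularly spaced station values z, then evaluate the fitted
    function on a regular grid. Coordinates are assumed scaled to [0, 2*pi)."""
    def basis(px, py):
        cols = []
        for m in range(nwave + 1):
            for n in range(nwave + 1):
                cols.append(np.cos(m * px) * np.cos(n * py))
                cols.append(np.sin(m * px) * np.cos(n * py))
                cols.append(np.cos(m * px) * np.sin(n * py))
                cols.append(np.sin(m * px) * np.sin(n * py))
        return np.column_stack(cols)

    # Least-squares fit of the series coefficients to the station data.
    coeffs, *_ = np.linalg.lstsq(basis(x, y), z, rcond=None)
    # Evaluate the fitted series on the regular grid.
    gx, gy = np.meshgrid(grid_x, grid_y)
    return (basis(gx.ravel(), gy.ravel()) @ coeffs).reshape(gx.shape)
```

A least-squares fit keeps the sketch well behaved when stations outnumber the retained waves; the original computations on 1950s machines would of course have differed in detail.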
In the first experiments in 1950.
That’s right. This was probably like 1946-47-48.
Before the first computer experiments in 1950, the ENIAC computer.
But the group started with another computer that was part of the military.
When did you first become aware of numerical forecast models, computer models in general? Right away?
Yes. Let me just finish up. I finished up at Oregon State, I applied to a bunch of graduate schools, and Penn State offered me a job at $200 a month, which we accepted. I was married to LaRae at the time and we had a six-week-old baby, Teri. The rent was $90 a month on the apartment, so it wasn’t a lot of money. In fact, it seemed that all the graduate students were getting surplus food, you know, the peanut butter, spam, lard, dried rice and beans. One of the interesting things was I wanted to get in-state tuition status, and the county was strictly Republican. And the registrar of the school was the chairman of the Republican Party. So it was widely known that if you wanted to get an exception for in-state tuition, you could register as a Republican. So for a brief period in my career, I sacrificed principle and registered as a Republican so I wouldn’t have to pay out-of-state tuition.
I was asking about when you first became aware of numerical forecast models?
So what Hans Panofsky’s research — I should mention that he’s the “Dumb” Panofsky —
The “Dumb” Panofsky — the one who graduated early from college?
No, the reason wasn’t because of that. He got one “B” and his brother got straight “A’s” at Princeton, and both brothers received their Ph.D.s at very young ages. The project that he and Professor Duquet asked me to come there for as a graduate research assistant was to do objective analysis for the stratosphere. It was funded by the Air Force. The stratosphere was even more difficult than the lower part of the atmosphere because you had much less data. So we spent a lot of time experimenting with different methods of objective analysis. Mine had some built-in dynamical assumptions, such as the geostrophic wind; the methods nowadays tend to be more statistical. The statistical method works fine when you’ve got lots of data, but if you don’t have much data you probably want to use something more to constrain the features that you’re trying to analyze, such as assuming something about either height gradients or wind fields or whatever. And so after I had worked on that for a while, we needed to test it — in other words, how do you test the objective analysis and use it in a computer forecast model? So I began to put together an atmospheric model of the stratosphere for several-day forecasting. Hans Panofsky was very much of the old school in working with students, which means, “Let them do whatever they want to do and don’t give too much guidance.” Certainly, he did not want to generate clones of himself. Some researchers try to generate clones of themselves. So this attitude gave me a chance to kind of move off in different areas. I’d go see him maybe once a month or every two months or something to chat; he was very friendly, a warm type of person, easy to approach. He would never say, “You’re stupid. You didn’t do this right,” or “Stop doing what you’re doing.” He’d always be encouraging in whatever advice he gave. So that was a great environment for me.
As I got further into my second year or so, working on this project and finishing up my coursework and so forth, I needed to use a computer larger than what was available at Penn State.
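The geostrophic wind constraint mentioned above is the standard balance between the Coriolis force and the horizontal pressure gradient (textbook form, not taken from the interview):

```latex
f\,\hat{\mathbf{k}} \times \mathbf{v}_g = -\frac{1}{\rho}\nabla p
\quad\Longrightarrow\quad
u_g = -\frac{1}{f\rho}\frac{\partial p}{\partial y},
\qquad
v_g = \frac{1}{f\rho}\frac{\partial p}{\partial x},
```

where $f$ is the Coriolis parameter and $\rho$ the air density. Building this balance into an analysis ties the height and wind fields together, which helps constrain the solution where observations are sparse, as in the stratosphere.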
You were doing programming for this work yourself?
By then programming in FORTRAN?
Yes, FORTRAN, right. At that time, all they had at Penn State University was an IBM 650, a pretty small computer, but it was much better than what I had at Oregon State, by far. So I arranged with the Air Force, and I think maybe Phil Thompson was involved in this because he was the one who kind of doled out the money in the Air Force to researchers — he had several people working for him, but he was the key person. In fact, the Air Force funded most of the people at that time. You know Ed Lorenz, Jule Charney, Norm Phillips — they were all funded through Air Force grants. The Air Force was much like NSF is today in the field of meteorology. So they arranged for me to get some computer time on a new computer at the Courant Institute at NYU. I took the train from Penn State to Manhattan to work on the computer at that time. In fact, I’m pretty sure that I briefly met Akira Kasahara at the Courant Institute in the early 1960s.
I was going to say that I noticed in his file that he was at the Courant Institute in 1963 or so, around the same time.
That was a very exciting time because there were people there — I’m not sure if Richtmyer was still there at the time... he was a very famous applied mathematician who worked on fluid flow problems. Actually, Richtmyer came here [to NCAR]; he was here for a couple of years. But he actually wrote the first good textbook on numerical methods for solving fluid dynamics flows. And that book was widely quoted; it was kind of the premier textbook on numerical methods for fluid dynamics. But that whole school — Courant, whom we had actually met, too — had a very interesting custom, which I think any institution would really benefit from. At four o’clock in the afternoon, they would go up to the top floor and look over Manhattan, and they had blackboards on all the walls and chalk. They had coffee and tea and cookies, and a place for the graduate students and faculty to just talk science. It wasn’t a seminar, but just a place to informally chat. You had the opportunity to meet some of the great minds of applied mathematics there.
Where is this place located?
At that time, it was right across the street from Washington Square. I don’t know where it is now — it may have changed. Keep in mind that in those days applied mathematics and physics were considered closely connected. Applied mathematics was, probably more than even now, into solving physical problems. A lot of those people at the Courant Institute, including Richtmyer, worked on the Manhattan Project and other kinds of World War II work in support of trying to solve real scientific problems of great importance.
A little sideline here. During this period [of getting your] Master’s and Ph.D., you said the Air Force was throwing money at this area. Did you ever meet Air Force officers; was there any contact with the Air Force as an institution, people show up to seminars, things like that?
I don’t think you ever actually saw anybody in an officer’s uniform. I did notice that there is one nice picture of Phil Thompson [from a UCAR Staff Notes Monthly article, summer 1998, I believe], which is the only one I’d ever seen of him in military garb. Every time I saw Phil, he was just in a regular suit or in casual garb. And you ran into these people in New York because the American Meteorological Society (AMS) used to meet almost every year in New York City for its annual meeting. So that’s where I met Chuck Leith and Ed Lorenz for the first time, in 1962. All the people who were doing this kind of research were going to the annual meeting. The annual meeting was much smaller than it is now, of course.
I’m glad you mentioned Leith’s name because he was working on a GCM by that point, actually before...did he talk about that at all?
Yes, we talked. In fact, after I had gotten my Ph.D. — if I could skip ahead just a little bit — I came here to NCAR in September, 1963. Akira and I started working on the model, I think, in May of 1964, just thinking about it. Then Chuck invited me out to Livermore in the summers of 1965 and 1966 because we hadn’t gotten our new computer at the time, which was a CDC 3600. Chuck invited me out to Lawrence Livermore National Laboratory to get our model running on their computer... Chuck was another kind of father figure for me around here. So you could get everyone who was doing atmospheric modeling from NCAR and LLNL into a single room.
What about the others who were working at that time? Mintz and Arakawa, Smagorinsky and Manabe were already at work on GCM’s —
In fact, they were way ahead of most other groups. But they had taken a rather conservative approach. Smagorinsky’s group stayed with hemispheric models for quite a while, until the late 60’s, I think. The approach that UCLA had taken was to start with a global model but with only two levels. By sticking with only two levels when they went global, they were able to actually get it to fit inside an IBM 7090-type computer of that time. Memory storage was a real big problem in those days. We had to make compromises one way or another, and different researchers made different compromises.
Tell me about your first experiences with these people. Leith, for example. You said you met him in New York. Did he talk about his model then?
Yes. I don’t think we were able to say much. As I said, I think I first met Chuck at the 1962 or 1963 AMS meeting, where he described his work. I probably learned more in the bar from him than I learned elsewhere.
He has this film he made of his model —
Yes, Chuck actually made the first animation of general circulation flow. Our group made the second. The NCAR archivist has copies of these early animations. They have great historical significance.
I’ve got one myself. When do you remember seeing that?
I interviewed him a while ago and he said he had gone on this kind of “grand tour” with that film, showing it everywhere. People were just fascinated with it, the moving images and the model output.
Chuck Leith was the first.
And you saw it in 1964-65?
Yes, something like that.
At an AMS meeting?
I don’t know where I saw it. I don’t recall. He may have just come and shown it to us here. In fact, it stimulated us to get this cathode-ray tube equipment called the DD-80 because I’m pretty sure that once we saw that film, which was one of the major reasons for us to get some of our own equipment. I’m not sure if he used the DD-80 system, but they had something very equivalent at LLNL... Now Chuck’s style — and you probably picked this up from talking to him — was a little different. Chuck is a very inventive kind of guy and he figured out how to do all this stuff on computer in — I won’t say it was in languages other than FORTRAN and the reason that he had to do that was that in the early days, the cores were so small on computers, much less than we have in our PC’s these days, that you had to figure out how can you store everything that you need to store. Chuck had used a rotating drum memory on the LARK computer which was one of kind; his model wasn’t written in a general program language such as FORTRAN. It was written in machine language I believe. I think you had to know exactly the rotation speed of the drum memory and how the data gets put on there in some fashion. This may seem strange for today’s programmers , but the way we had to do it here at NCAR was similar , I came up with a scheme where we had two tape drives, and we started doing the calculation at the South Pole, and marched to the Equator, and then switched tape drives and then went from the Equator to the North Pole, while the first tape drive was re-winding. And then when we finished that, we would start the re-wind on the second tape drive. And trying to figure out the timing of all of this, I can remember that people would come in and say, “How do you do that so crudely, how can you work out the timing?” Trial and error. Well, we wore the equipment out. It was never intended to be used in that way. 
It was intended that you would write a tape occasionally, but not that it would be used in a scheme like that 24 hours a day. I remember that later we switched over to disk. We had these big disks, you know, the big rotating disks, and we did the same thing, and we would break the disks down after a few months of constant banging on them. We used the same principle: writing on certain tracks of the disk.
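The tape-drive alternation described above is essentially what is now called double buffering: one drive rewinds while the other accepts the next hemisphere's output. A minimal Python sketch of the idea follows; the class, the placeholder rows, and the latitude bookkeeping are modern illustrations, not the original machine-level code:

```python
# Illustrative sketch (not the original NCAR code) of the alternating
# tape-drive scheme: write one hemisphere's latitude rows to one drive
# while the other drive rewinds, so rewinding overlaps computation.

class TapeDrive:
    def __init__(self, name):
        self.name = name
        self.records = []   # rows written since the last rewind
        self.rewound = True

    def write(self, record):
        self.rewound = False
        self.records.append(record)

    def rewind(self):
        # On the real hardware this ran while the other drive was writing.
        self.rewound = True

def sweep(latitudes, drive):
    """March through a band of latitudes, writing each row of model
    state (represented here by a placeholder string) to the tape."""
    for lat in latitudes:
        drive.write(f"row@{lat}")

drive_a, drive_b = TapeDrive("A"), TapeDrive("B")

sweep(range(-90, 0), drive_a)   # South Pole -> Equator on drive A
drive_a.rewind()                # A rewinds while B takes over...
sweep(range(0, 91), drive_b)    # ...Equator -> North Pole on drive B
drive_b.rewind()
```

The point of the design is simply that the slow mechanical rewind is hidden behind useful computation on the other drive, the same overlap idea later reused with the rotating disks.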
So you almost had a physical map of the Earth on the tape drive.
That’s right. You could look at the computer lights (note: in the early days the computers had lights on the console) and you would kind of know where it was in the computation by looking at the flashing lights. One of the people who was involved with that, if you want to talk about the technical stuff, is Bernie O’Lear. He’s still here and he’s the Assistant Director of the Computing Center. (Note: Bernie has now retired from NCAR.)
Why don’t you tell me a little more about graduate school? You went to the Courant Institute at New York University. How long were you there?
I only went there to use the computer.
— and sort of go up for AMS meetings —
Yes, I also went to New York for the annual American Meteorological Society (AMS) meeting. So I left Penn State University in August 1963. I finished up my thesis before coming to NCAR, but I didn’t get my formal degree until the spring graduation of 1964.
So you finished up your thesis in 1963?
But you graduated in 1964.
How did you end up at NCAR?
That was a strange circumstance because I applied for a number of jobs. It came down to two jobs: one was in Monterey, CA at the Naval Postgraduate School. This job paid $12,000 a year, and NCAR paid $9,000 a year. My wife at that time was from Palo Alto, California, so you can see she had an interest in returning to California. We made a decision to go to Boulder, Colorado.
The choice was made partly due to a 1962 conference in Stanstead, Canada, and I can remember one night I went to have a beer with a number of very prominent meteorologists, and I asked them what I should do about the best choice for a job. The bar was kind of an interesting place because on one side of the table you were in Canada and on the other side you were in the US... Those at the table were Hans Lettau, Aksel Wiin-Nielsen, Julie London and Bernhard Haurwitz. I had gotten to know these gentlemen over the years as a student and so forth. Aksel said I should come to Boulder. In fact, he later hired me. However, he and Phil Thompson had a falling out between the time that I was hired and the time I arrived at NCAR. So Aksel had left and joined the University of Michigan as head of the Department of Meteorology and Oceanography.
He had a falling-out with Thompson? Do you know what it was about?
I don’t really know much about it except I think Aksel’s more of a kind of take-charge type person, with a strong sense of direction. Phil was more easygoing, wanting to say, for example, “Hire the best people and let them figure out what the scientific problems ought to be.” Whereas Aksel was more of a director-type personality. It could have been something more than that. I suppose if you dig in the tapes you could find that out. But it was a bit of a disappointment for me because Aksel was a very good person and I was disappointed he wasn’t here when I arrived. But I wasn’t terribly disappointed because there were a lot of other good people — Akira was an inspiration, and Chester Newton and Harry van Loon. Doug Lilly was here at that time. So there were a lot of good people around.
Were you hired at NCAR to do anything specific? What was your job title?
NCAR was pretty informal in those days, but as Phil said in his description of me, I was really hired to work on the modeling. But he never actually said that to me himself. I understand that from kind of later remarks. Maybe he did say something, but Phil was kind of an indirect person. He wouldn’t come out with, “I want you to do such-and-such.” It just was not his style. I think I got more direction of that sort from Walter Orr Roberts. Walter gave me some advice; he said, “I really want you to work about half the time on helping us get started on modeling. The other half, work on what you think is important.” It turned out that for me both of those guiding principles actually amounted to the same thing. But for some people, those could be two separate things.
What was your Ph.D. thesis on?
It was on objective analysis, and finding ways to do objective analysis. One of the interesting things is that I kind of discovered something in my thesis which still gets quoted in the literature — it will be forty years in a few years. It’s that I asked a fundamental question: typically, the meteorological data that you have is either winds or temperatures. And from temperatures, you get the height fields. And which of those quantities is most important? Now people kind of had some theoretical basis for that, and as part of my thesis, I was able to show by looking at something called the potential vorticity that if you’re looking at features in the tropics, it’s better to look at the wind field. If you’re looking at features in the higher latitudes at fairly large scales, it is better to look at the height field, or the temperature field, and then you can get the winds indirectly from the height field. So I came up with a very simple theoretical equation using something called the Rossby radius of deformation, which can tell you which observations you ought to be using and under what circumstances. So I published that in Tellus in 1964, around the time that my thesis came out. What’s interesting about this paper is I think a lot of people didn’t realize the significance of it, because it was always, “We need to measure both.” But in a sense you probably don’t need to measure both; you can really get as much of the flow patterns, especially at higher latitudes, from looking at the temperature field alone. In the tropics, you definitely have to use the wind field and not depend on the height or temperature field. So it comes out of a very simple little equation I had in that article, a two-page article. In fact, the head of the Forecast Center in Washington came up to me a couple of months ago and said, “Gosh, I recommend that everybody read your old paper, every young person, till they can figure [it] out.” It’s just nice that it was cited a lot.
That kind of durability is —
But the paper is actually a little bit peripheral to my thesis, except in the fundamental sense of what you should measure and use in forecast models.
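The selection rule Washington describes can be written down compactly. A hedged sketch using a standard shallow-water form of the Rossby radius of deformation (not necessarily the exact expression in the 1964 Tellus paper):

```latex
% Rossby radius of deformation, shallow-water form:
%   g : gravitational acceleration
%   H : equivalent depth of the layer
%   f : Coriolis parameter (vanishes toward the Equator)
\[
  L_R = \frac{\sqrt{gH}}{f}
\]
% For horizontal scales L >> L_R (large scales, high latitudes), the
% height/temperature field controls the flow and the winds can be
% inferred from it; for L << L_R (the tropics, where f -> 0 and L_R
% becomes large), the wind field must be observed directly.
```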
So you’re hired at NCAR, and the advice is that you’re there to help do some modeling. Kasahara was here, too? Did he come at the same time?
He came a few months before I did. I think he came in the summer of 1963. A lot of people came in that summer.
I’m going to be interviewing Kasahara on Monday and Tuesday next week, so I’ll go over some of the same things with him. But I’m very interested in your collaboration with him. So I’d like to hear how that got started and why you decided to do what you did and what the key choices were, and the kinds of models that you decided to build.
First of all, we made a decision that we wanted a global rather than a hemispheric model, right from the start.
Did you spend some time looking at other modeling efforts before you —
— so you’d already talked about going out to UCLA once.
Did you have much contact with GFDL?
Yes, in fact I spent a little time at GFDL, on a visit with Joe Smagorinsky, and I gave some seminars there when I was a student. So I knew what they were working on. And certainly Akira did, too. Now we were influenced a little bit by several factors. First of all, let me just kind of explain. At that time, most models were hemispheric models. Now Chuck Leith’s model was in pressure coordinates, a pressure-coordinate model, and he didn’t include mountains, so it was a flat earth. The way his model was constructed, you really couldn’t put in mountains because of his pressure coordinates, unless you put them in in some kind of simplified way. Now we already knew that GFDL had put mountains in, but they were having lots of difficulty with how you evaluate the pressure gradient term in the momentum equations.
So they were using height for a vertical coordinate system?
No, it was a sigma coordinate system, but as you may know, [the] sigma coordinate system has sloping surfaces. It turns out that if you take the pressure gradient along the sloping surface, you make very large gradient errors, because most of the pressure gradient that drives horizontal motions is horizontal; it’s not along a slope. So if you made a small error in the sloped pressure gradient, you made a large error in the horizontal winds. So we thought we would avoid that by trying a different technique. And we didn’t completely avoid that. We used a Z-coordinate system. We were both influenced by the work of the Courant Institute, especially by the world-class applied mathematician Peter Lax; he devised a numerical method for solving fluid flow equations called the “Lax-Wendroff” method. And at that time, in the early 60’s, there was a lot of effort put into how to make these equations stable, so they don’t blow up on you if you run 1,000 time steps or 2,000 time steps. That scheme, which was actually used also by Norman Phillips later on, always damped out the grid-scale noise. It didn’t allow the grid-scale noise to affect the solution very much. We also looked a little bit at the new scheme that Arakawa had come up with. Arakawa had a different scheme that conserved the global integral of energy and things like that. So if you summed up the finite difference equations, energy would be conserved. And it was kind of a novel trick (as I discussed in the textbook on climate modeling that I wrote with Claire Parkinson) where any scale smaller than the grid scale gets folded back into the solution. So it acts like a negative diffusion term, so that energy just can’t keep cascading down. If energy goes down to the grid scale, all that energy gets folded back into the larger scales.
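The cancellation problem with sloping sigma surfaces can be made explicit. In sigma coordinates the horizontal pressure-gradient force splits into two terms (a standard textbook form, offered here as an illustration rather than a quotation from the GFDL model):

```latex
% Pressure-gradient force on sigma surfaces (sigma = p / p_s):
%   Phi : geopotential, R : gas constant, T : temperature,
%   p_s : surface pressure
\[
  -\nabla_{p}\,\Phi \;=\; -\nabla_{\sigma}\,\Phi \;-\; R\,T\,\nabla_{\sigma}\ln p_s
\]
% Over steep terrain the two right-hand terms are individually large
% and nearly cancel, so a small relative error in either one produces
% a large error in the small residual that actually drives the
% horizontal winds.
```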
Another big problem at that time — it still is a problem in climate models — [was] how do you prevent the model from becoming hydrostatically unstable. If it becomes unstable, in other words, the temperature gradient in the vertical is too steep. It’s as if you’ve got denser air over lighter air, and it tries to overturn. In fact, Akira (he’s too modest to admit it) figured out an easy way to do that, which he used in his hurricane model in the late fifties, and which was further developed by Manabe at GFDL. And the trick is that you superimpose a constraint. The constraint is basically that any time the model wants to become unstable, with or without moisture as part of it, you adjust the vertical temperature profile — the lapse rate — so it’s stable again. At the same time, you conserve the total energy in that process. Basically, that’s what’s called “convective adjustment,” which was widely used in the early 1960’s. The only exception to it was Chuck Leith. And Chuck had a novel way to solve this problem, too. Chuck basically said that when you got close to an unstable atmosphere, you just increase the vertical diffusion, so that you make it stable again. So if you had very warm air at the bottom and cool air at the top, if it got to the critical place, you would just mix those two air masses as if convection takes place. The upper part of the atmosphere warms up and the lower part of the atmosphere cools, as if some giant mixing had taken place... The only difference is the convective adjustment is instantaneous and the vertical mixing used by Leith is a slower process.
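The adjustment step can be illustrated for the simplest dry, two-equal-mass-layer case: if the lapse rate exceeds a critical value, reset the profile to the critical lapse rate while conserving the mean temperature (a stand-in for total energy). This is a sketch of the principle only, not the Kasahara hurricane-model or Manabe implementation:

```python
def convective_adjust(t_lower, t_upper, dz, gamma_crit=0.0098):
    """Dry convective adjustment for two equal-mass layers.

    t_lower, t_upper : layer temperatures (K), lower and upper
    dz               : height separation of the layers (m)
    gamma_crit       : critical lapse rate (K/m); ~9.8 K/km dry adiabatic
    Returns the adjusted (t_lower, t_upper).
    """
    lapse = (t_lower - t_upper) / dz
    if lapse <= gamma_crit:
        return t_lower, t_upper          # already stable: do nothing
    # Relax to the critical lapse rate while keeping the mean
    # temperature (total enthalpy for equal-mass layers) unchanged.
    mean = 0.5 * (t_lower + t_upper)
    return (mean + 0.5 * gamma_crit * dz,
            mean - 0.5 * gamma_crit * dz)

# Unstable column: 300 K under 280 K separated by 1 km (20 K/km lapse)
# gets instantaneously reset to the 9.8 K/km critical profile.
tl, tu = convective_adjust(300.0, 280.0, 1000.0)
```

The contrast with Leith's approach is then just the time scale: this reset is instantaneous, whereas enhanced vertical diffusion mixes the same two layers toward stability gradually.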
So, tell me about meeting Akira and how you started working with him?
Well, he was the senior person, no question about it. I was like a postdoc; NCAR didn’t have postdocs then. But we all had the same appointment; NCAR at that time had a one-year appointment policy. So they couldn’t fire you as a scientist without giving you a one-year notice. It wasn’t until the 1970s that we put together an appointment policy close to the academic one. I think that Akira and I just had a series of discussions, then we went to Phil Thompson and Phil endorsed what we were doing, and said to us, “This is what we hired you for” — [that] sort of thing, and we just launched into it. The first layout of the model was done by Akira, actually. We studied Richardson’s book more carefully because we thought that Richardson’s model didn’t have all the problems that the sigma-type models had — some of the problems that GFDL was having were substantial. I might add that Mintz and Arakawa didn’t have this problem that GFDL had. One of the reasons they didn’t have it was they only had two levels in the vertical. So their mountains never went up to very high levels. So their mountains were always kind of smooth, smooth-type mountains.
Appalachians, not the Rockies.
That’s right. I think that that helped to keep this problem from getting exaggerated, but it was clearly exaggerated in the early models of GFDL. In fact, the worst place was up near the stratosphere; even though the mountains were further down, their effect on motions was actually way above the mountains. So our approach was that if we had a bunch of layers and the air ran into the mountains, it would stop, as if there were a barrier. Only at the top, where you had a mountain that poked through one layer, would we have to do the sigma kinds of things. So our approach was to build it that way. I should point out, too, that one of the reasons GFDL didn’t go to a global model was that you could put the atmosphere of one hemisphere on what we call a “polar-stereographic grid,” so you had no singularity at the pole. You essentially transformed the Northern Hemisphere of the globe onto a flat plane, and then you would just solve the equations. You didn’t have any special problem there. But if you put it on a latitude-longitude grid, then as you approached the North or South Pole, the longitudinal grid distances became very small. And that shortened your time step. What we did, and Chuck Leith too, was kind of coarsen the grid as we got closer and closer to the poles — and it wasn’t ideal because it always led to some kind of discontinuity in the solution. But it was a pragmatic sort of solution.
Speaking of Leith, I think he came to NCAR in maybe 1965.
He did, I think... I’m not sure of the date.
I think it’s 1965 or 1966, somewhere close to that. When I interviewed him, his picture of what he did here was that he had gotten interested in other things for his own scientific work; he was in charge of the modeling stuff that you guys were doing, but he really wasn’t involved. Is that accurate?
True, he was not directly involved, but he was kind of like a consultant. Chuck is kind of a Renaissance-type person; he was interested in turbulence theory and he was interested in numerical methods. He worked on a number of things which weren’t global modeling. I think part of that probably came out of the fact that he didn’t have a meteorological background. He was a physicist who [had] worked with Teller in those early days. He had even gone to Sweden to study meteorology for a while, and he learned a lot, but he never quite felt comfortable, say, writing an article on the results from a general circulation model. He never did that.
There are only a couple of pieces about his model.
So he didn’t publish a lot in the formal literature. That just wasn’t Chuck’s style.
So in terms of the family tree I was showing you here earlier, he worked on this model starting around 1958, worked on it until about 1963 or 1964, and then just kind of dropped it, moved along. So in terms of influences on you or anybody else, it was really more just kind of seeing the movie and knowing that this could be done —
I think it was more than that. I think he was able to show, first of all, that you could get realistic solutions, which are actually shown in his movie. Also, he had explored this pressure-coordinate system. And he had shown that finite differences of the type that we used here could solve the equations in a reasonable fashion. So I think he influenced some other work, especially the NCAR work. But that model really didn’t go past that point. And it couldn’t, because his models weren’t written to be run on anything other than the very specialized computers they had at Livermore. And you know our driving force was to build a model that could be used by the university community, and we had to stress documentation, making it user-friendly, giving some help to the users, making it more or less easy to run on the NCAR computers.
Was there any concept on that particular issue that you would make a model that could be carried elsewhere, run elsewhere —?
Not really. It was really built for running on just the NCAR computers.
— people who had come from some other university and work up here.
I don’t know if it’s in the papers I gave you, but in the early history of NCAR there was a document, in the NCAR Quarterly, describing the direction of our overall modeling effort, making it kind of a laboratory that could be used by university researchers. So there was a strong sense of that when Akira and I started putting the model together. [Brief discussion about Thompson document.]
And how would you characterize that?
I would say just a blueprint. He talks about the need to understand clouds better, and the equations, and NCAR try[ing] to put together an effort that will provide the community a tool that could be used.
Tell me a bit about how your modeling effort here related to the ones at GFDL and UCLA. You did look at what they were doing, but my sense is that you then worked more or less independently and used a few tricks from the other modeling efforts, but didn’t really focus on duplicating their efforts.
I think there was a strong sense of trying something different. Not necessarily that what we were doing was greatly superior to what they were doing; we were just following different paths. So almost every aspect was different: the numerical schemes, the way the precipitation was handled; in the early models we didn’t even have precipitation. How we did the radiation, because in the early days, if you did the radiation right, it would take all the computer time. So we used kind of a radiation chart, approximations basically. So I think every element of the models was different at the different centers.
Did you guys write the code yourselves?
Oh, yes. I think I wrote the first version of the atmospheric model, although we quickly hired two or three programmers. Bernie O’Lear was, I think, the first programmer that we hired. The programmers were much more proficient in the software than I was. But I was always involved in writing code, up until about 1970 or so. At that point, I gave more guidance than actually writing the code. We had this little routine of using the computer time each night — I think we were given like twelve hours at the computer every night. And Gloria Williamson, Dave Williamson’s wife, would run it, and then in the morning, Akira and I would come down at a certain time, probably around nine or ten o’clock, and we would look at what ran overnight. This usually wasn’t more than three or four days. In those days, running a 30-day run or a 50-day run was a long run. I think in terms of publishing something, I ran the first experiment that was a July simulation, because I was able to show in 1970 that you get the monsoon and you get this low-level jet off the east coast of Africa, and you get a reasonable simulation that could be compared with observations. But most of the runs up until that time were constant January. And the reason that people wanted to look at January more than July was that that was the time of year the Northern Hemisphere had stronger storms, and we wanted to simulate those. Then, obviously, we went from constant Januaries to the seasonal cycle. In fact, if you look at Smagorinsky’s group’s early simulations, all of those were with annually-averaged solar forcing. So there was no diurnal cycle and there was no seasonal cycle. And that changed sometime in the early seventies. Everyone, I would say, by 1972 was starting to do that. Now the only pioneers who didn’t do that were Mintz and Arakawa. Even in 1964, they had seasonal cycles in their — in fact, I don’t think they ever ran a constant annually-averaged forcing simulation.
I think the reason for that was that Yale himself was such a meteorologist; he’s an observationist, basically. Arakawa was more of a modeler. And he just insisted that it was probably ridiculous in his view to run a model that doesn’t have seasonal cycles.
Now I want to talk a little bit about the climate change issue, which you may have been completely unaware of at the time. But there was some policy activity on that, even by then. I think there’s a Conservation Foundation report on carbon dioxide-induced climatic change in 1963. The President’s Science Advisory Committee wrote one in 1965, and of course there’s Keeling’s curve and things like that. I know why I wanted to ask this. Because you just said that one of the standard experiments during these years was to do the month of January and compare it. Much later, in the seventies, that standard experiment becomes carbon dioxide doubling... and it’s just interesting how these kinds of norms develop. So I’m curious about any awareness you might have had at that point in the sixties of the carbon dioxide question.
I think we had some, but it really wasn’t geared towards that. In fact, this talk that I gave last week at the Weinberg Auditorium was interesting from a historical standpoint. Alvin Weinberg was the head of Oak Ridge National Laboratory in the late sixties. In fact, he temporarily became the United States energy czar; under the Carter Administration, he came to Washington. What was interesting was that he invited me out there to Oak Ridge to give a talk, at a time when nuclear energy was something Oak Ridge National Laboratory was trying to push. He wanted to show that the use of energy can change the environment. Now at that time, this was 1970, we didn’t have a coupled model. All we had was an atmospheric model. In fact, I don’t think anybody had a fully-coupled general circulation model. Suki Manabe started with the simple mixed layers, and we were about a year behind him; we published the early work in 1980. So what he [Alvin Weinberg] asked me to do was, “Could you put in the thermal urban heat island effect?” I told him, “I don’t think it would make much difference.” So he gave us some estimates of what the population was going to be in — I forgot, I have to look it up in my article — but it was probably 50 years from now or something like that. We had some estimates of what the thermal heat island effect was for certain metropolitan areas. So we put that into our model. And we didn’t see much difference. But it was a fixed ocean temperature, so the only place you would see it is over the metropolitan areas. There was a little difference. So it actually led to something that your colleague Steve Schneider got onto, and Bob Chervin. After we looked at the two plots, you know, “Gosh, I can’t see much difference.” And so, I think it was Steve and I and Bob who said, “Well, maybe we need to get some measure of these differences that we’re seeing.” Because there were some small differences. Were they significant?
And we did the first work on statistical significance with our climate models in doing perturbation experiments. What we did wasn’t terribly sophisticated, but we essentially applied a Student’s t-test, with a small sample size. How big a sample do you need to prove significance? You know, it may show up as a signal change over the standard deviation, sort of thing. And I think Steve and Bob Chervin kind of expanded on that and came up with more sophisticated measures of statistical significance. But it all came out of this one experiment that we did on urban heat islands. And of course now, for the global warming and greenhouse effect problem, people have gone to very sophisticated schemes using empirical orthogonal functions and pattern recognition, fingerprint methods, to see if the signal’s coming out significant. But they have the same problem: you see a difference in the model and you have to divide by something like the standard deviation so that you can get signal-to-noise. They’re all based upon the same thing in principle.
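The procedure described here, a difference of means divided by a pooled noise estimate, is the two-sample Student's t-test. A minimal pure-Python sketch with toy numbers (the samples below are invented for illustration, not actual model output):

```python
import math

def t_statistic(sample_a, sample_b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances of each run's ensemble
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    # Signal (difference of means) over noise (pooled standard error)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Toy "control" vs "perturbed" monthly-mean temperatures (K): is the
# shift significant relative to the run-to-run noise?
control   = [288.1, 288.4, 287.9, 288.2, 288.0]
perturbed = [288.6, 288.9, 288.5, 288.7, 288.8]
t = t_statistic(perturbed, control)
```

With 8 degrees of freedom, a |t| above roughly 2.3 rejects "no difference" at the 5% level, which is exactly the signal-over-standard-deviation logic described in the interview.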
The carbon dioxide issue wasn’t really a force in your modeling effort in the sixties.
I think it was kind of in the back of our minds, and as soon as we had a simple coupled atmosphere-ocean model, I think we were able to explore that. And there were some real surprises there, by the way, Paul, in terms of how the community looked upon these experiments. NCAR at that time had very strong-willed oceanographers... still does. And they always said you couldn’t do it. Their point of view was, “Come back to us in twenty years to couple oceans to the atmosphere.” What Suki did was something which I thought was very novel; we did something very similar. He coupled a simple ocean mixed layer, like 50 meters of water, to the atmospheric model, and it turns out that you capture the first-order effects by doing that, because in terms of the way the climate works, storing heat in the ocean in summer and releasing that heat to the atmosphere in winter is the first-order effect of what the oceans really do. The second-order effects are the currents and vertical mixing. The first order is that you can capture the seasonal cycle.
— damping effect —
Yes, damping the overall changes in climate from summer to winter. For example, the reason that the monsoon shows up like it does over India and Asia is that the oceans don’t warm up as much as the land surface does, and that causes the monsoon. So you don’t really have to have a very sophisticated ocean model to capture that, as long as you can store the heat in the upper ocean in the summertime. And so I would say that we were able to capture 60-70% of what the oceans do with just a simple mixed layer. Oceanographers don’t want to hear me say that. But I think for someone who looks at the larger picture, it’s probably true. Of course, we’ve gone to much more realistic models that are able to capture the climate system in much greater detail.
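The mixed-layer ("slab") ocean amounts to one heat-storage equation per ocean column. A generic sketch of the idea (the actual Manabe and NCAR formulations differed in detail):

```latex
% Slab-ocean heat budget for a mixed layer of depth h:
%   rho_w c_w ~ 4e6 J m^-3 K^-1 (seawater), h ~ 50 m,
%   T_s = sea surface temperature, Q_net = net surface heat flux
\[
  \rho_w c_w h \,\frac{dT_s}{dt} = Q_{\mathrm{net}}
\]
% A 50 m slab stores about 2e8 J m^-2 K^-1, so a 100 W m^-2 seasonal
% flux imbalance warms the surface only ~0.04 K per day; that thermal
% inertia is what damps the seasonal cycle over ocean relative to land.
```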
Let’s go back to the first models you built with Kasahara. The first publication of that is I think 1967, is that right?
Were people other than you using it for anything before that?
No. I think we were the first. And things happened very rapidly in that time span, because the paper you made reference to was a dry model; it didn’t have moisture. We added moisture about that time. We added mountains. We published a big paper, Akira and I, in 1971. I think it was the biggest paper that had ever been published in the Journal of the Atmospheric Sciences: 60 or 70 pages. I remember that Will Kellogg complained about the page charges; I think it came to some huge amount of money. And we added moisture, mountains and a land surface, simple hydrology. By about 1971, we had a fairly sophisticated model, for those days.
Did you have any sense of being in competition with the scientists at UCLA?
Oh, yes, but friendly. Yes, sure. Smag was kind of a closed shop. He was kind of a tough guy. I’ve known Smag for a long time. I remember the first time I went there. I think it was almost a job interview, if I could just skip back a little bit. This was 1962. The Mona Lisa had just come to the National Gallery, and GFDL was right across the street from the National Gallery on Pennsylvania Avenue, in an old building. We were walking over to the Mona Lisa and standing in line. (Kind of disappointed after we saw it, because it was so small and dark.)
Was there bullet-proof glass like there is now...?
Not sure. Smag was definitely the boss in charge of all of that, and gave a lot of direct charge to that group. Our mandate was entirely different. And that was to kind of make this model available. We had many external early users of our models.
Tell me who were the most important of early users.
...It would be people like Kutzbach and Bob Gall, who’s here at NCAR (he’s a director). People doing baroclinic studies, kind of “what-if” studies in terms of temperature anomalies in the oceans. Jerry Meehl and I carried out some of the first experiments on, if you change the ocean temperature, like in the El Niño region, what would the atmospheric response be in the tropics and higher latitudes? But I’ve got all of those people listed in various places, if you want to [see them]... You see, NCAR during the [open?] had a different culture. We had review committees coming in. We had advisory committees to give us advice. We had to do all of these, I would say, “center”-type activities, whereas GFDL was more of a closed shop. You didn’t learn very much from talking to them until the publication came out. We had to work in a glass house, sort of thing, here at NCAR.
My sense of UCLA is that they were also very open, especially Arakawa, because he really took it over after the first ten years or so — but what he was interested in was research, not building something functional that can be used for ongoing climate studies and is well-maintained and well-documented.
And probably something a little less than state-of-the-art. Because we always had this philosophy, and it makes sense I think for anybody, just like Bill Gates and Windows: you come out with a new product, but you’re still working on new things as they come along. So there’s always the decision about when to freeze the model, clean it up and all of that, while at the same time there’s research going on trying to make improvements in certain aspects of the model, or adding new features.
That makes a good segue into another question. Tell me about the generations of this model of yours and Kasahara’s. I’ve sort of marked them down as 1, 2, and 3. If you look at the text, it says what I’m talking about by that. I’m not sure that’s how you would have divided that up.
One of the articles I gave you was the article which summarized the history of the model. And I think that you saw that.
It’s probably where I lifted this information from.
I think that’s probably the most accurate one...
Let’s come back to that one. Did you ever get copies of any other models to work on here in the sixties?
I think that we had copies of the UCLA model. I don’t think we had copies of any of the other groups’ models.
What did you use that for, if you did use it?
Well, I think we basically used it for those early experiments that I talked about with GARP, and we needed to understand enough about the model so that we could make changes in the initial conditions. I don’t think there’s any problem; I think both Mintz and Arakawa shared that model with us in terms of giving us the code. It was written in pretty good FORTRAN. I don’t think we ever made the model run here ourselves, but others may have run it here. Remember that NCAR was at that time starting to provide computer access to the larger community, so people could bring models here and put them on the computer. We wouldn’t do it for them, but they could do it on their own.
A very important question, in terms of the kinds of things I’m interested in is data. I’m interested in things like what data sources you may have used for parameterization and for comparison of model results with observations. And why you chose the ones you chose.
Can I go back one step on that? Because that became a real issue. We went to a global model — at that time, it was an atmospheric model, so we needed to have ocean temperatures over the entire globe. No global dataset had been put together. If you went to the library, you’d have these huge atlases put together by various people over the years. But they were all for regional oceans: Indian, North Atlantic, South Atlantic, Pacific and so forth. So this was very perplexing to us because we didn’t think we would have to get into this. So we hired an undergraduate student and what she did was draw all of these onto a big map — hand-draw — and then Harry van Loon would be the final judge as to where the contours ought to be, because there were discontinuities. You’d go from one map to another and the five-degree line would be in a different spot, and so we would say, “Harry, where should we put this?” And he would kind of put it in on the maps. Now what was interesting was that once we published it — it was never formally published; it came out as a [NCAR] Technical Note. We gridded that data, and the way it was gridded was that somebody would sit over that map and read off, “That’s 26.5,” and the other person would write down 26.5 for that latitude and longitude and put it on punch-cards. And that dataset propagated around the world. In fact, other people published it as their data although we put it together. And it lasted for many years as the observed global-ocean temperature dataset. I mean, nowadays we have much more sophisticated datasets coming from NOAA and so forth, but in the sixties, no such gridded dataset existed. So we had to put it together.
What about for the atmosphere?
For the atmosphere, we were able to get data from the National Meteorological Center and other places. And Harry van Loon and others had put together these hemispheric datasets. Oort, at GFDL; Harry van Loon for the Southern Hemisphere. Of course, Oort eventually covered the entire world’s ocean with some of his colleagues, but it was hard to get those datasets. People were willing to share them, but they weren’t easily put together. If you have a few minutes, you may want to go over and talk to Dave Baumhefner, because he put together the first global weather datasets of the observed weather for a particular day. Up until that time, they were mostly either hemispheric or regional datasets... It was pretty much the same way as we did the sea surface temperature. He was a student of Krishnamurti at UCLA, who was a synoptician, and so Dave had a great sense of how these data contours ought to be drawn and so forth, and how you can generate global datasets. But it was tough in those first days, because you didn’t have it. You just had to put together datasets from pieces of regional maps.
Did you ever have interactions with people who were data collectors? You’re talking about some of them here, but I’m thinking of instrument builders, the NASA people who were putting up satellites, say, telling them what kind of things you’d like to have.
I think we had dialogues with them. I don’t know if our research actually fed that much directly into what the data collectors were doing, but they certainly knew what we were doing. In the early days, when they used to come out and visit with us, there was a guy by the name of Jay Winston, who was at —
Actually, NOAA I believe
NOAA Satellite Program?
Yes. And I think Dave Baumhefner would be a good person to talk to about how in the early days we were trying to put satellite data into the models by first of all taking the radiances from the measurements and inferring temperatures. But I think in the early days Norm Phillips was one of the first people to work on this at the National Meteorological Center (NMC). The early seventies. He found that it was worse to put the data in than to leave it out, which kind of made the satellite people unhappy.
“We put an instrument up there and you don’t want this stuff —”?!
I think it was partly that the instruments were so crude, and also our methods for putting them in there. Because remember, this was striped data. You know, as the satellite came over the globe, it wasn’t like the radiosondes and the surface data we were used to getting. Remember that those are measured twice a day. Two balloons go up, so you have a synoptic picture that’s consistent worldwide. But now you’re dealing with the satellite data, which comes in thin pieces and doesn’t seem to fit with these twelve-hour schedules. But all that’s been kind of worked out over the years. Now it’s assimilated in the models fairly well. We did a lot of work on those early assimilations, mostly with Dave Baumhefner’s involvement.
It’s getting close to 5:00 PM... let me see if there are any other questions I want to ask before we stop. Maybe this would be a good question to close with: we’ve talked about a lot of the differences between your model and the GFDL and UCLA models. How would you characterize what the most important distinguishing features were —? Heat, modularity —?
You mean for the NCAR effort versus what other groups were doing?
I think a model that was modular and reasonably well-documented for the community; I think those are the two things.
What technical differences were there in the way it was put together?
I think that we probably had somewhat simpler physics in our model, but part of that was to keep the model computationally efficient. You know, there’s been a fundamental change in how models are put together now. In the sixties and seventies, the effort was to really make models economical so that you could run experiments with them. Which meant simple physics, maybe not including some physics, and using as coarse a resolution as you could get away with. I think those were real driving forces, so you could run an experiment in a reasonable length of time. The way it’s being done now, if you look at the CSM or our modeling effort (I don’t know if you’ve seen any of what we’ve been involved in lately)
I haven’t paid much attention in the last two or three years.
These modern models are basically very high resolution, with virtually everything done to, I would say, the “nth” degree. The radiation, for example, and the way the clouds are treated, and how the convection is treated; and we have 30 kinds of vegetation. And we have 20 or 30 different soil types. We have ocean models now which are able to capture the El Niños by having half a degree resolution right near the Equator with lots of vertical resolution in the ocean. And our sea-ice model is at 27-kilometer horizontal resolution.
A 27-kilometer grid?
Yes. So everything is in there, and we’re exercising these massively parallel computers, which is the direction the Department of Energy was going with respect to its computers.
So you’re still beating them to death.
Yes, but we’re running — like, some calculations we can run on computers capable of doing 20 gigaflops, which is more than all of NCAR has got here. In fact, I mostly don’t run at NCAR; I run elsewhere.
At the DOE labs. So, modeling has gone from what I would say is pragmatic and simplified to highly sophisticated, with extremely detailed interactions. And now our models include even eddies in the ocean. So it’s a much different modeling world from the early days.
This looks like a good place to stop for today. We’ll come back to all those differences tomorrow.
Later in my career Walt Roberts became President of the AAAS. The AAAS Board was made up of people who were very prominent scientists, but who were well into their 60s and 70s and 80s. The consummate “old boys’ club,” with very few women. He did something brave. He invited each one of the board members to bring a young scientist from their own institution to a board meeting and let them observe the board. Roberts invited me, and I came, and the next day the young scientists met and critiqued the Board. They used to meet on Saturday and Sunday because they had academic appointments and didn’t want to take time off from teaching. The Board immediately set up a committee to institute recommendations for Board changes. Interestingly enough, out of that came some institutional changes. Because at that time there were protests and bomb scares at the annual meeting because of the Vietnam War.
Where was this meeting being held?
It was in Boston.
What year was this?
1969. There were a lot of people questioning whether science was for military purposes, and a lot of protests and so forth. So the Board was scared, the Board of the AAAS — they weren’t keeping up with the times. What we recommended, and it was approved by the Board, was to have scientific sessions on things like the psychology of war, and environmental issues, and increasing the role of young scientists in the society, and nominations of minorities and women to the Board. So AAAS came out of a sleepy period into an activist period. And I’m very proud of that, because I was chairman of the Youth Council, and I was 33 years old. (That’s what they considered youth.) But I think it led to a more activist kind of organization; in terms of programs for women and minorities, things really changed after that. Even now, the AAAS has been, I think, one of the leading scientific societies in trying to tackle some of these very difficult societal issues.
What about prior to that meeting while you were here in Boulder? Were you involved with the NAACP or any other organizations?
Well, it turned out that — we all have these small bits of history. You jogged my memory, because in the late 60s, early 70s, the Black Panther Party was very active; here in Denver there was a very active chapter — in fact, the police raided and shot up the whole place where the Black Panther Party was active.
It was a violent time.
In fact, what’s kind of interesting, if I can jump forward a decade or so: I got divorced in 1975, and I re-married three years later. My second wife, Jo Washington, died of breast cancer in 1987. And, kind of going back to history, I met a lady a year or so after Jo died. She was the sister-in-law of Bobby Seale, and we dated for several years, but we didn’t get married. She moved to Denver from Oakland. I probably have been considered kind of a moderate, Paul. Not that I’m moderate about the zest for equality and proper treatment of people, but in terms of tactics. So, anyway, in the field I’ve probably just been seen as a quiet pusher of issues such as increasing the number of women and minorities in the sciences —
Were there any other African-Americans working here when you came to NCAR?
No. In fact, that’s probably one of my disappointments: we haven’t been able to recruit any others at the scientist level in all the 35 years I’ve been here. I can see that’s going to be even harder in the future because of our culture, with the scientists having complete power over appointments, much like a faculty. We are insulated from the pressures of the real world. In the academic world, you wouldn’t think it would be right not to have a few minority professors and female professors. Here at NCAR our scientific staff is very happy with that, and we don’t have that kind of pressure on us except from the National Science Foundation, because they keep asking why we don’t get more people here in that field. And one of the questions for a place like NCAR is whether the administration could be more encouraging about it. I think at the directors’ level and the president-of-UCAR level, they’re very sensitive about it, but on the other hand, I don’t think they actually want to push the staff about it too much. At some point, it might backfire. Because I can tell you as a National Science Board member that if a director comes in and explains their program to the National Science Board, and they say they don’t have any programs to try to bring more diverse people into the field and don’t do some educational things, we would actually say no to them on funding that center, or put them on notice that if they don’t do something about it — I think Anthes and Serafin understand that as President of UCAR and Director of NCAR, but I think down in the trenches, they don’t see it happening. And quite frankly, for the eight years I was Director [of CGD], I complained about it. I said [that] it’s just not right for an institution that’s a national center not to be more aggressive in trying to recruit diverse scientists.
But this, Paul, is going to be an even worse problem as time goes on, because the number of minorities going to graduate school is actually dropping. You already know what’s happening in the University of California system.
Yes. It’s a tragedy.
And we’re talking about small numbers, we’re not talking about huge numbers of people.
So the pipeline is drying up —
Well, drying up to a certain extent. I think the one real success story is with women, clearly. And as they migrate more into the ranks of the people making the decisions, faculties and so forth, I think that will help. I’m not saying all males are bad or anything. But I don’t see any radical changes around here. I’m 62, and I’ll be retiring in 7 years or so. I don’t know whether there will be another Warren Washington here or not.
A couple more questions along these lines. One of the things you talked about at some length with Earl Droessler was the “directorial style” of the various directors at NCAR. A very interesting discussion from this other oral history. I’d like to go over the same ground, partly because this question is motivated by what’s happening today. Many scientists I’ve talked to in climate science are jealous of some of the European centers, Max Planck and Hadley especially, because they seem to have a — they’re not jealous of the style, they’re jealous of the result. The style is much more top-down, directive: “you’re going to do this in the following way, we’ll divide the task like this, and you just work on what you’re assigned.” Here, apparently, it has for the most part been a much more open American academic atmosphere, each scientist in charge of their own projects, just kind of going where they want to go. But NCAR has in a certain sense fallen kind of in the middle there, because of the Community Climate Model projects. So I’d like to hear a little bit about the various directors of NCAR and how they’ve influenced the institutional style of this place.
I’m not sure I can remember all of the history of this. But clearly, as I said to you yesterday, Walt Roberts and Phil Thompson had a more laid-back style, expecting more of the initiative to come from the scientists themselves rather than being directed. Actually, I think that this led to this thing called the Joint Evaluation Committee (JEC).
What was that? It was mentioned in here, but it was never actually explained.
In one of the reviews — NCAR has to go through these periodic reviews like any other NSF program — it was felt that there wasn’t strong enough focus on certain projects. That we were a fine institution in terms of the quality of the staff and so forth, but we were not focused enough. So the — I don’t know whether it was the UCAR Board or the Foundation or some combination. I suspect it was a combination. The reason I say [that] is because it was joint; “joint” under a cooperative agreement usually means that both the UCAR Board and the NSF management set up a committee to come in and essentially do an audit. Not a financial one but a scientific one. And out of that, there was a lot of criticism of the style. Walt was asked to step down.
When was this?
1973. Phil Thompson was asked to step down. They were instructed to hire a new director of the institution and a president of the institution. Ironically, when they picked out Francis Bretherton, he took both jobs. And he even took on the job of director of this division [CGD].
Which was called what at that time?
“AAP” [Atmospheric Analysis and Prediction]. Actually, there was another director here: Will Kellogg was director at the time, too.
Director of NCAR?
Director of the division. The director of NCAR at that time was, I think, John Firor. So they both sort of stepped down. Francis took all three positions. He was very energetic. Have you ever met Francis Bretherton? He was a very loud-talking, very forceful, direct person. As a result of this process, they got the type of person they wanted and it turned into a much more focused organization. Also, as another part of this, Francis set up an appointment system. Up until that time, everybody had one-year appointments, and appointments were usually renewed.
A kind of tenure concept.
Actually, I came through that process fairly well, which is kind of surprising because it made me a senior scientist at a fairly young age. I think I was about 39 when I was made senior scientist. What they had was a scoring system based upon papers, importance of the research, university collaborations, and working in the wider arena. As I remember, you got a maximum of five points for each one of those categories. And so if you got a score of 17 or 18, that was really high. If you got a score of 5 or 6, you were generally asked to leave. There was actually a small committee, and Akira was a member of it. That’s something you can ask him about. Akira and Chester Newton and a few other senior people were asked to carry out this evaluation. I think it was kind of a dark period for NCAR.
What focus did he want to have?
I think on the modeling side, he wanted a much more directed effort. One of those documents I gave you a copy of, where I wrote the history of the early modeling up until 1975 (I think I wrote it with Akira) — it was on his orders that we were asked to put together the history so we would know where we were going in the future. He drew out a roadmap for our modeling. And I think — I can’t say it was bad, but it was a different style than we were used to. Because we used to have lots of meetings, and he was a very forceful person as chairman of the committee. I can remember that most people looked down at the meetings so they wouldn’t establish eye contact, because Francis would immediately ask you to go off and do something. I think Steve Schneider was around at that time. I don’t think Steve challenged him, but I think he was rather dubious of some of the things that Steve was working on in those days.
What was Bretherton’s background?
He came from either Oxford or Cambridge, and he was at Johns Hopkins. He was basically a fluid dynamicist who did some of the fundamental work on how gravity waves propagated and absorbed energy and so forth in the atmosphere. He was basically a large-scale fluid dynamicist, a theoretical-type person rather than a modeling-type person. He would tend to solve his problems by analytical methods. But he understood the fundamentals of modeling and so forth. I don’t think he had much appreciation for things like radiation and cloud physics, and those aspects. But for the dynamical parts, he certainly was well-versed in those areas.
So how long was he director and who succeeded him?
That I probably can’t remember exactly. I think he started in 1973 or 1974, and he was probably here maybe six or seven years. His successor was Bill Hess.
What was that like?
Another big man with a booming voice, but a different style. Actually his background was physics, high-energy physics. He ran the research part of NOAA… its Environmental Research Laboratories (ERL), including GFDL. He was the next level up. So he had some knowledge of atmospheric programs, of atmospheric chemistry and modeling. But nothing detailed. He was more of a director who was not involved in the science as much as Francis Bretherton was. Francis actually would talk with the scientists in detail about their research. Whereas I think Bill Hess was more of someone who interfaced with the outside world in Washington and so forth.
And after that?
After that, it was Rick Anthes, who’s the President of UCAR now. He appointed Bob Serafin, who is the Director now. They’ve been President and Director for over ten years. There’s probably one step in there I’ve forgotten. When Francis was director, he finally gave up being the director of this division. Anthes came in as director of this division and then went on to be director of NCAR for a few years, and then President of UCAR. I can say I was on all three search committees. So if the staff has any complaints, they can complain to me. I’ve been a benefactor.
You’ve been a director of this division.
For eight years.
I think when I first met you, you were still director.
I gave that up at the time I became a member of the National Science Board (which is the governing board for the National Science Foundation).
I’m going to come back to that later, but let’s go back to the seventies. I’d like to spend a fair amount of time on this if you’re able. The climate model you worked on with Kasahara went through three main phases; you talked a bit about that yesterday. Then NCAR began to work on the Community Climate Model. My sense is that your original model was more or less abandoned, and the other model came out of what is now the BMRC in Australia and the ECMWF.
Let me explain that. Even in the late seventies, I was approached by the DOE to do some greenhouse studies.
Was that part of that carbon dioxide assessment program?
Well, Fred Koomanoff was director at that time, and yes, that was about the time. So they gave us a grant. I forgot how much it was, but it allowed us to hire some programmers. Because at that time I wasn’t in the mainstream of leading the atmospheric and climate modeling. I kind of withdrew a little bit because of other duties. Most of that was taken over by the Climate Modeling Section, which is still there. In fact, I wrote a paper with Dave Williamson on probably the last of those early models. But then Dave kind of took it on. When we started doing the coupling, we really felt that maybe our models needed to take a look at using this new spectral method of solving the equations. But we didn’t have any expertise here in that. I had some money, so I invited some people to come and visit. Bob Malone came up from Los Alamos; he was a physicist who knew about spectral methods. And Eric Pitcher came (I think he was at the University of Miami at that time) to spend a year. And Ramanathan was here as a postdoc and a young scientist. The three of them came to see me one day and they said, “We’d like to put together a spectral model.” I was open-minded about it and felt like it was time for us to look at it seriously. So what they did was they started by using — we invited another scientist over here, Kamal Puri, from Australia. What he had put together was a model using Bill Bourke’s spectral method for the dynamics; it was an early Australian model, and they had the GFDL radiation in that model. I forgot the computer that we had at the time; it must have been a 7600 or a 6600 or whatever, and they got it up and going on the computer.
So the same one as the Australia model —
Right. It was brought here. And Ramanathan felt that the radiation wasn’t up to par, so he completely rewrote that whole radiation model, and we have something called the “Orange Book”... which describes this model. It’s a huge book; I put it together with all these articles. If you want me to, I could find it.
I’d actually like to see that.
It’s still referred to. But all it was was all kinds of stuff about this new model that we’d put together, pasted together. And I got various people to add little notes. It’s not like a polished document; it’s a working thing. It turned out that when they ran the simulations, it ran faster than our own model. The simulations looked pretty good. So I started using it with Jerry Meehl, and we used it up until like four or five years ago. It was an R-15, low-resolution model. I coupled Bert Semtner’s world ocean model to it and put in the simple sea-ice thermodynamics model. We had been using that for a long time. In fact, up until the CSM came along, and the PCM — which I gave you a copy of yesterday — that was the only coupled model around that was fully coupled. I don’t exactly know the time periods, but probably in 1984-85 or so, Dave Williamson became interested in this model, in spectral models. He and Jim Hack essentially re-wrote CCM model 0-A, which I think is in your —
Yes, it’s on that chart.
[They] cleaned it up. I don’t think they changed anything fundamentally, but they re-coded it and made it more efficient and all of that. I remember that there was a real struggle (they didn’t change any of the physics) to make sure that the new model gave the same simulation as the old model. I think they got pretty close. Then that became the first CCM.
What about CCM-0-B?
That’s the re-coding of the Australian model.
My understanding is that there was some influence of the ECMWF work, the early ECMWF work, on this... I’m citing an article from Ramanathan from 1983 here. He must have said something about it — I don’t remember at the moment where I got this, but the initial code for CCM-0-B came from an early version of the ECMWF model.
— the way the spectral dynamics were done was a little different between CCM-0-A and CCM-0-B. I think Dave had visited the European Centre and felt that was a better dynamical framework. He’s around, if you want to ask him about the differences.
You continued to use CCM-0-A all along, until the last few years.
Yes. And the reason was purely pragmatic, almost like Suki Manabe’s situation, when we had these long runs. The worst thing for us, when you’re doing coupled ones, is for someone to come along and say, “Here’s a new version of the model.” It’s just taken us a long time to make the runs, so there’s a tendency to stay with the old for maybe a period that’s too long. But that’s going to be less of a problem in the future. In the future, with our model, we can do a century run in three or four days if we have 500 processors, or with 2,000 processors we can do it in one day. I don’t think there will be as much reluctance to switch over. But when you’re running experiments where you get only a simulated year every couple of days, you kind of hate to make changes. Even though people come running in and say, “I’ve got a little bit better this, a little better that,” you say, “OK, but I’m in the midst of an experiment and I don’t want to change right now.”
I was just going to go back to what happened — so you just left behind the old finite difference model, is that right?
So it just really came to a dead end.
Did anyone else ever take that model? Obviously the people involved in UCAR were able to use it.
Did it have influences beyond NCAR?
It’s hard to say. I don’t think anyone actually picked it up in the same sense as we picked up the Australian model and used it. Since it was so tied into our computer system, there was a tendency not to want to yank the model out and take it someplace else. There wasn’t any real benefit to doing that. I think that we made a conscious decision, after a lot of soul searching, to phase that model out and go to the spectral model. Obviously there was some overlap, you know, for the people who had ongoing research, but we did make a conscious decision to stop developing that model. Then after a while, we even stopped supporting it, because it took people to keep it up.
Why is it called CCM-0, as opposed to CCM or CCM-1?
I don’t know. That’s probably a good question, but I don’t know. I don’t think it was conscious.
When did it start to be called the “Community Climate Model”? And did that have a relationship to this issue of directorial style? There’s an implication in the name that we’re now doing something that we’re all working on, we’re all focusing on.
I think so. I think it probably went back to that time when Francis was director, to try to unify efforts. I don’t think we had a lot of multiple efforts here. The only time we had multiple efforts was when we had the existing older model and when we went to spectral.
Of course, there was Starley’s model in the last few years, the Genesis Model.
I think that’s pretty close to CCM-1. He really picked up CCM-1 and added features to it: the radiation, all of the ecology, or land surface. To a certain extent, Starley wanted to be his own boss, so to speak; he didn’t really like working in the community mode. On the positive side, he interfaced really well with the paleo-climate scientists. He understood the models, and he worked closely with that small community to tailor a model that they could easily use. But keep in mind that most paleo-climate people are not from a dynamical background. They would want to use a model, but they probably didn’t want to put a model together on their own. So Starley would have yearly workshops with that small group of people, and it became a widely used model in the paleo-climate community.
Is it still in use?
I think it’s used in a few places, but internally we had our political battles with the new director of CGD, Maurice Blackmon. He felt that Starley’s effort was too separate from the CCM. See, I have mixed feelings about it, because I tolerated these two efforts when I was director, but it’s an area where, if you’ve got the user too separated from the builder of the models, I think you can get into problems. I think the present arrangement is one of “Here, I’ll give you a model, but don’t come and ask me a bunch of questions about it. I’m not going to do much to help you configure it for your purpose.” There’s a certain amount of that. What we’ve done is we hired Bette Otto-Bliesner, and she’s been our paleo-climate person with the CCM and the CSM. Her responsibility is to work with the paleo-climate community. But they probably have less involvement than they did with Starley. With Starley, they could go in and say, “Well, we’re going to need to be able to change the mountains around, or where the continents are, very easily.” And Starley, working with David Pollard, configured the model so that people could do that sort of thing easily. Whereas with the approach we have now, it’s probably a little more difficult if you want to make a change in the model, that sort of thing. If you think about it, for most of the purposes of a climate model, you’re not changing the continents around. Or you’re not changing the vegetation types back to some prehistoric period. So it’s a matter of how much user involvement you want. So I think we lost a little bit of that, although I think Bette’s doing a good job of trying to bring it back. Starley’s going to be leaving NCAR; his appointment’s ending. I don’t know where he’s going to go yet.
Tell me about the issue of — going back to the Kasahara-Washington model — the issue of oceans. You told me yesterday about your first attempt to find a sea surface dataset and put that together. What happened later in the seventies as you began to try to make this ocean more sophisticated?
We hired some very capable, high-level oceanographers. Two senior people were Bill Holland and Jim McWilliams. Peter Gent came a little bit later than that. Bill Arch is there now; he’s another just recently-appointed senior scientist.
Do you know when they were hired?
Were there any ocean people working here before that?
Yes, but they weren’t — I think he was hired about the same time, but — if you go way back, Jim O’Brien was hired to get things started. But Jim was a very contentious soul. Have you ever met him? He eventually went to Florida State, where he built an empire of students and so forth. We hired another fellow by the name of Elliot Schuman from Harvard, but he never quite worked out; he got sidetracked with some interesting problems, but never built a model. When we hired Bill Holland and Jim McWilliams, they were interested in various kinds of what we call balanced models of the ocean. These were like quasi-geostrophic models, where you make the geostrophic approximation in various ways. But they weren’t terribly interested in working on the climate problem at that time. They were more into isolated ocean dynamical-type problems. We also hired a young scientist by the name of Bert Semtner.
I’ve met him.
In fact, if you want to get Burt’s history of ocean modeling —
I have it.
He gives the history of how ocean modeling has progressed. Now Burt brought his model, which was really a UCLA re-coding of Kirk Bryan’s GFDL model. In fact, there’s a UCLA technical note that’s very popularly used that describes this model. Essentially what he did was just clean up Kirk Bryan’s model in terms of the code. I don’t think there was any fundamentally different physics in it. It was basically the same.
This was when?
I’m not sure. I think 1972-73. I think right after we hired Jim McWilliams and Bill Holland. So somewhere in that time frame. I took Burt’s model and made a five-degree version of it. In fact I did the coding myself. I didn’t change the code; I worked out the interfaces between the ocean and the CCM-0-A, which meant that you’re essentially passing wind stresses and heat fluxes between the two, and also the precipitation minus evaporation, which goes into the salt flux. So I coupled those together — I had a very simple flux coupler which would do it. And that’s what we used. And actually Jerry Meehl came on at that time and we wrote a paper, I think in 1982, describing that model. So that we used for a long time, up until just a few years ago.
So the one that you coupled it to was the Kasahara-Washington finite difference model?
Originally we did. But we quickly switched over to the CCM model A... In fact, on this let me make a correction. In 1980, we actually did an experiment, a general circulation experiment, with a coupled atmosphere, ocean and sea ice model. And I think that was used on the old finite difference model. You’ll see in 1982, at the end of my vitae, there’s that orange book I was talking about. Then, there were some papers in the eighties, with the version that had the spectral CCM-0-A. At that point, Jerry was going from a student assistant to a scientist. He got his Ph.D. and has grown a lot in the field.
So Holland and McWilliams got around to building an ocean GCM later, is that right?
No, they actually have never done it. Even with the CSM — the Climate System Model — what they essentially did was take the GFDL ocean model and add diffusion along isopycnal surfaces. And they’ve added this parameterization called “KPP” that Bill Large came up with, which is a vertical mixing scheme for the upper boundary layer. So under certain conditions, when the buoyancies are right, the water mixes down and mixes temperature and salinity properties. It’s always been a GFDL model. Now they didn’t pick up Burt’s model. What they did was they picked up another version of the GFDL model. But as Burt explains in his article, Kirk Bryan’s original model was re-coded by Burt and then it was re-coded at GFDL using some of Semtner’s changes, so changes circulated back and forth between the two. It becomes kind of hard to distinguish who did what, but it was mostly between GFDL and Burt’s group. NCAR never really took part in that. NCAR pretty much took the model off the shelf and made some changes to it.
What about Bob Chervin’s work? He worked with Semtner, just improving that?
Right. Bob’s substantial contribution was in being able to re-structure the code a little bit, to make it work on massively parallel computers. And the Cray X-MP and the Y-MP — he was able to re-structure the code so that everything stayed in core and used all the processors at the same time. Bob was very good at that sort of thing. But I think that model’s coming to a dead end. Even Burt himself is going more and more over to this parallel ocean program, “POP.”
Where did that originate?
Los Alamos. It’s interesting that that model also has its origins in Burt’s model and GFDL, but they took a look at that model, and they asked, “How can we get this model to run on a massively parallel-type computer?” They had some very bright, capable people there and they re-structured the entire code. So it could run efficiently on the Connection Machine and now on the Origin and T3E computers. It’s probably now the fastest-running model available.
What’s the resolution —?
Well, I just got a simulation they faxed me yesterday of 1/10th of a degree.
In fact, two weeks ago on Friday I was at the White House, arguing for the new supercomputers for DOE and for NSF with the examiners, the OMB examiners for NSF and DOE. And she said, “I would really like to have one of those figures showing those small-scale eddies in the ocean model to help convince my bosses higher up at OMB that we need a computer of this magnitude.” So when I got out of that meeting, I sent an e-mail off to the people at Los Alamos: “Get in overnight mail some real nice graphics of ocean eddies out of this 1/10th of a degree model.” It’s a model that we can’t afford; it runs 100 hours to do a year. But I like the fact that they’re trying to do some very pioneering things. It’s not a practical one for climate studies, but at least it shows what you can get and allows you to really make detailed comparisons with the observed ocean.
I want to hit a couple more things about the — just random things. You were at the University of Michigan briefly. This was in your C.V. Did you actually go there, or was it remote control teaching?
In the sixties and seventies, NCAR scientists were supposed to spend some time at universities. We had a program that was called the “affiliate scientist” or “affiliate professor” program, and Aksel Wiin-Nielsen asked me to come and be an affiliate professor. I accepted; Akira did the same thing at some other place — he went to Texas A&M, I think. Or maybe it was Utah. It was expected of us. It was a program where you would spend a semester there every three years and make several visits throughout the year, working with students and helping teach a course. Aksel was the chairman of the department and asked me to come back at that time. I taught his course; just like earlier, when I showed up, he had left. I thought it was good experience. The reason it kind of failed for me, I think — this NCAR program — was that my wife, LaRae Washington, was pregnant and we had kids in school. It was just a difficult thing to do when you have that kind of family situation, you know, kids in school and so forth. My impression about why it failed is that it was kind of impractical in some sense. I mean, I don’t think people minded going off to teach. But if you do it during the school year, you’re really separated from your family if you have kids in school. You’re not going to take your kids out of school to go somewhere for a semester.
Did you learn anything important by doing that?
I don’t think it was bad or anything; I thought it was an interesting, humorous part of my life. The way it was humorous was that Aksel put me into it and got me in as faculty resident in the dormitory, Markley Hall. 4,000 students in this one dormitory. I would get knocks on the door — I wasn’t in charge, they just wanted some faculty around because of some problem. So I remember getting knocks on the door — some girl would say that her roommate was going to commit suicide in the cemetery across the street, could I go look for her, get my flashlight and go — remember this was 1969, there was so much student unrest —
The University of Michigan was a big focus of that, too.
In fact, they had floors where they had nothing but Afro-American students. They were experimenting with having all the Afro-American students stay together. I remember — I was probably a little bit of an old fuddy-duddy at the time because I remember talking to the Afro-American students on a number of occasions on their floor and saying, “You guys have to study. It’s OK to protest over the problems of discrimination and exclusion or whatever, but you can’t make that a fulltime occupation. You’ve got to do your studies.” And they were talking about the slights they had gotten and all kinds of things, and I was kind of, as I said, an old fuddy-duddy, and I said, “Yeah, OK, I’ve heard all that, but study.” A lot of them didn’t study and they didn’t survive in the academic environment. I have mixed feelings about that era. It was certainly a very active one in terms of campus politics.
I’m going to be teaching at the University of Michigan after this —
That’s what I thought. Have you got a faculty job there?
I’m going to be at the School of Information, which used to be the Library School. It’s now called the School of Information and is a really interesting, very interdisciplinary place. [Short discussion about Paul’s new job.]
In fact, I was up there this spring. I gave the Martin Luther King Talk at Michigan; they had something like six or seven of them. I gave the one for the physics department.
This was in one of the things you gave me. You had written up a little bit of this, said it was a huge lecture, 700-800 students.
At my talk? It was a large group of people. I talked about climate change, which is what I talk a lot about these days.
Let’s go to that next. Looking at the history of your papers here, one of the things I see is that you can divide a career into phases: the early part of your career, in the sixties or early seventies, is model development, pretty straightforwardly, just working out the model, and then in the early seventies you start to do some application studies. A lot of those are about thermal effects of human energy use on the atmosphere. Then, much later on, not until the early eighties, but at that point quite a bit of it, CO2 doubling studies. Tell me some more about the thermal effects work in the seventies. You said yesterday how you got involved in that. There are quite a few papers on that particular issue and I’m curious about it because it’s not something I’ve seen anybody else doing work on —
I don’t know. I think it was accidental, what I talked with you about yesterday, about Alvin Weinberg getting me started in that direction. I really didn’t write that many papers on thermal effects. I just wrote that one paper that was in the Journal of Applied Meteorology. I think that’s the only paper I actually wrote on it unless I’ve forgotten something. 1970 or 1971, I think.
In 1975, you have a letter with Steve Schneider on energy and climate to Science, and in 1972 you have one in the Journal of Global Meteorology on the effect of production of thermal energy.
I’d forgotten about this. I guess I did write some papers in those days. But can I just make kind of a separation here? I think it was in 1978 that we got funding from the DOE, and it seems like all of my work after that in some way has been connected with coupled atmosphere-ocean models. And the purpose — OK, I was just going to say that in the late seventies, my work really turned towards coupled atmosphere-ocean models, because of the funding we got from the DOE, and the emphasis I was supposed to pursue was, “What is the effect of the ocean on climate change due to increasing greenhouse gases?”
Was this a grant proposal you wrote, or something you were asked to do?
Well, they called me back to Washington — the people in DOE — and asked if we would send in a proposal in this area. The reason is, they felt they needed to have more modeling groups involved in the CO2 problem, and they certainly were aware that Manabe and Kirk Bryan were doing some work in this area. But NCAR wasn’t involved. And there was some reluctance for NCAR to get involved, because we did not normally get money from other agencies.
Francis Bretherton had real worries about getting funds. In fact, he adopted a rule, which has been relaxed now, that the P.I. [principal investigator] had to be a senior scientist. So they weren’t motivated to turn into a “soft-money shop” sort of thing. And we couldn’t use any of the project funds to pay salary. So, basically, when we got the funds, we used [them] to hire programmers and students. And we did hire one student who worked out fine: that was Jerry Meehl. He went on and got his Ph.D. But then, the rules have relaxed at NCAR now, and roughly 50% of its funds come from outside.
And those are DOE, EPA, anybody else?
I would say some money from NOAA has come in for various people. Not me, but Starley got mostly EPA funds. But anyway, even in 1975, I started thinking about coupled models. And so one part of this was that we started working on a sea-ice model. Up until that time, the sea-ice models were very crude. In fact, Kirk Bryan had one as part of an early experiment that he did in 1969 with Suki Manabe, which was just kind of a model that treated the sea ice as just a simple layer that would kind of grow and decay in a very simple kind of fashion, and the ice would move around as a function of the ocean velocity. So we started working with Burt Semtner, because he had done some work on the thermodynamics of sea ice in trying to simplify it. Because ice models up until that point were like 500 layers in the ice, so he simplified it down to a smaller number of layers. In fact, Burt’s model is still being used in most of the ice models. Then I had a student who I worked with, Claire Parkinson, who came here from Ohio State, and she worked out in a very thorough way the thermodynamics of having new ice and old ice and what happens in detail when the leads open up, how the ice grows and that sort of thing. So she came up with an energy-accurate way to do this. The elements of this are still being used now elsewhere. The new ice dynamics now, as in the parallel climate model of which I gave you a copy, and other models, treats ice as a viscous plastic fluid: when the ice tries to converge, it crushes, up to a certain limit. But then when you’re in a sort of divergent wind field, the ice tends to break. So the ice doesn’t have any tensile strength; it only resists being crushed. I would say that those ice models now have gone from simple to pretty good simulators of true ice dynamics. But anyway, we started in 1975 and kept on going. Versions of that ice model are in the CSM also.
Were there other groups doing that at the same time that you were aware of?
Paul, this is kind of a funny thing to me. I don’t consider myself completely a “Renaissance”-type person. However, I’ve always found that when you go to the specialists, they get bogged down in what I would call “details,” and if you’re putting together a climate model you’ve got to be able to make reasonable choices about approximations and simplifications, so I’ve always been of that sort. I’ve seen cases where you’ve worked with specialists in various parts of climate modeling, and they’ll get bogged down in the details [and] you never get anywhere. They aren’t willing to make an approximation. And the psychology of that is based upon — if you’re a radiation person, for example, and you go back to your colleagues, or go to the meetings with your colleagues, and you say, “Well, we approximated the optical properties in this way or we put in the clouds in this way,” they would feel very sheepish about making those kinds of approximations. It always took somebody who was more outside of the field, like Manabe or some of the people here, [who would say], “Oh, well, we’ll just make clouds a function of relative humidity,” even though we know the formation of clouds involves a lot more factors than just relative humidity. So it’s much easier for us as climate modelers just to make the approximations. And our colleagues in the various sub-specialties can criticize us, but we have to make compromises to make it run on the computers. And I was able to do that very easily. In the ice field, and ocean, I probably don’t come up with any great new ideas, but at least I can carry it the next step, and not feel sheepish if I make too big of a compromise.
Your philosophy on that sounds like Manabe’s.
Yes, very simple.
— because the contrast that’s pretty obvious is with Arakawa — Arakawa’s obsessed with the convective adjustment, with how to get the clouds to work, very interested in getting a very detailed scheme for that. Manabe is like, “How do you know that works? Is that real?” You sound more in his direction.
One of the interesting things is that there’s probably a tendency for us and GFDL to put in something and run the model out and look at it and say, “Gosh, we didn’t do too bad a job.” Whereas probably Arakawa is more into: “even if it looks pretty good, when you look at the details of how the precipitation is occurring, it wasn’t doing it right, still.”
But see I like both approaches. I would never put down what Arakawa has done. He’s led to greater understanding of what happens when you’ve got a grid square and you’ve got a few big clouds somewhere in the grid square and lots of shallow clouds, all competing for the moisture and the heat. And how does that work out? How do you put in that kind of mixture of cloud types? But our approach has been to go from simple to a little bit more complex. Whereas, his approach has probably been to start with more complex and try to simplify.
We were talking about some phases in your career. You were telling me about working in coupled sea-ice models and ocean models in the seventies. Now, was it the involvement with the DOE program that got you started on the CO2-doubling experiments?
Was this done as part of that project?
This is a whole other subject that I’m about to spend some time on in our last session this afternoon, but I’d like to get started with it a little bit now. In the seventies, you began to do a lot of community service on various things, things like the NAS panels on climatic variation and so on.
That’s the period when the climate change issue starts to be a pretty big deal in science. You probably see it cropping up in all the different laboratories. I’d like to hear about your memories of the origins of that issue for you and how you became involved in working on it. Because the CO2-doubling studies are part of that, but before this, mostly what I see in your literature is that there are not applications to climate change except for this thing on thermal contributions.
I don’t know exactly the time, but there was an Academy committee put together. I forgot what it was called, but its chairman was Yale Mintz and we actually met out at GFDL. Smagorinsky was on it, Manabe, myself, Larry Gates was on it. And we were supposed to put together a plan. I think if you ever found that document — I think it came out in 1975 — it was actually written by Larry Gates. Yale was never one who would ever write anything. Maybe that’s too harsh, but he published very little in his career. It’s not a bad document, but I remember, in order for us to get convergence on the document so it could be put together for Academy review, Larry had to go around with manuscript in hand and stop in each place and sit down with the scientist for a few hours and get them to agree with whatever he wrote. What’s interesting is Larry is going to be here next week, I think. He’s been asked to do the same thing for the DOE on the supercomputers for the DOE; he’s been asked to put together a document for how he would use the supercomputers. And Larry’s very good at this. He’s very organized, he knows the science well enough, he has a logical mind, and he writes very well. He’s the ideal person. And he did that. I mean, that report came out for the Academy back in 1975, I think. It’s an excellent document. It actually laid out the climate part of GARP. Even though it was an Academy document, I think the elements of it were adopted for the World Meteorological Organization. I think it was kind of a blueprint for a long-term strategy.
Did you have a sense from participating on that panel that there was any urgency about the climate change issue, is this something that really needed some work? Or was it more off in the future —?
I think there was a lot of — as even now — a lot of caution with this subject, but it was perceived as being a very important part of climate research, and climate modeling had a heavy role to play in it. So I think the people at DOE were kind of cautious about it; they were asked to sort of look into this, no question about it. They had taken on the lead role in the federal government, but they didn’t want to be painted into a position of “we haven’t done the science.” We’ve got to do the science; there was a strong stress on doing the science. I think the modeling community up until that point really hadn’t thought much about how we could improve coupled climate models. Up until then, even from the early experiments that Suki did — Suki actually did the first real coupled runs and we came along a few years later, and some of the other groups, Hansen and some of the European groups, came along in the early eighties also. I don’t consider that we were working on the “Manhattan”-type time scale. It was more just “it seems like we have to get into this.” If I can go back just a little bit, Paul, I think that that’s why I felt, instead of going on to new atmospheric models that hadn’t been checked out and worked on, that I started doing coupled ocean runs with an older version of the model where we had established climatology and we had already published papers on it. So I didn’t have to kind of prove what the model was; I was just adding one new feature to it in a kind of systematic way, a simple mixed layer, for example. I think people tend to forget how important that is, because when you add too many effects all at once, you get so confused about what’s leading to the model doing this or that. I still think that scientists need to change one thing at a time in these complex models, in order to keep it understood. Did you ever see the quote at the end of my book that I took from Smagorinsky?
Smagorinsky in 1969 in a publication talked about the culture of a climate modeling center, and that people need to change only one thing at a time and need to work in an environment where the time to publication can be long; in most academic institutions, you couldn’t sit around for three or four years working on a new model and expect to get promoted. And even to a certain extent, that’s part of our culture here. For young scientists, for example, my advice to them is to be very cautious about getting too heavily into modeling. Even though our directors will all say, “Work on that.” But after the four or five years and they’re ready to move up through the appointment system — I sat in on the appointment systems for fifteen to twenty years. I know the questions that come up. “Well, I know he’s been working on that big problem, but — what scientific problem has he solved?”
— shows results.
I think there was a tendency for the people who did have senior scientist appointments to be biased against modeling, because many of the people who have senior scientist status were people who had singular discoveries in their careers that gained them their reputation. And they value more highly those singular discoveries than working on the pragmatic building of a tool. It’s not much different than people who work on telescopes or work on field programs. It’s a very similar sort of thing. What kind of happens in the history of an organization is that those people tend to get delayed in getting their appointments, but the people who make singular discoveries advance up the scheme and they’re in a position to pass judgment on the people who have been slowed down. So you get this kind of bias situation where the senior people are —
— saying, “Well, I don’t understand why it took them that long to get that model going, or that field experiment data.” I think that’s one of the risks in our appointment system. I see it happening all the time. I mean, people in the Climate Modeling System have struggled to move up the appointment ladder, although, in my opinion, they may have done a very valuable service to the organization by working on building these big models. But they get the feeling that they’re not getting rewarded for it.
You’re just reminding me of something Smagorinsky said to me in my interview with him. He started working on that GCM, I think, in 1958 or 1957; his publication was in 1963. He thinks now that that was a mistake because, of course, he feels like he hasn’t gotten enough credit for being a pioneer, which is hard to understand since he’s gotten plenty of credit. But still, he says that he wishes now that he had published some things sooner.
He was a bit of a perfectionist; he had done a lot of research, but they didn’t publish it because they thought the next set of experiments would be better. There were some flaws in what they did earlier. I think this is a good question to ask Akira. Because Akira was into the culture — in our partnership — where he pushed harder for us to publish in those early days.
I think it was the culture. He felt it was important that we get out some early results. Keep in mind that another cultural change is happening, Paul. You can’t write a paper anymore just on a climate simulation out of a climate model. In the early days, you could do that—in the sixties and seventies, maybe even in the early eighties. Now you have to write a paper on some scientific problem and you can throw in all the other stuff—like CO2 or something like that. You can document the model, but a pure model kind of documentation is very hard to do, because the reviewers will say, “There’s no science in that.” Even if it’s a better model, because you haven’t solved any scientific problem. You haven’t attacked any scientific problem. So there are ways to get around that. For example, Larry Gates has a journal called Climate Dynamics, and it’s got a lot of us modelers [in it]. Manabe’s on the editorial board, I’m on it. So we’re more tolerant of just interesting papers. But if you send it to the Journal of the Atmospheric Sciences or something like that — the Journal of Climate — there will be a tendency to say, “What’s new here?”
It’s interesting because of what it says about the culture is that the tool has been accepted as an experimental environment. You’re not interested anymore in describing the tool; you just want to do the experiments.
Right. In the early days, Smagorinsky published in the Monthly Weather Review, which was a NOAA journal in those days. One of the reasons was that the people who did the reviews of the paper understood that this would be a place that — it would be a friendly environment for general circulation modeling results. So people published finite difference schemes, and early simulations.
You, too, some of the early —
It was a friendly place. The editor knew not to send the paper to some theoretician who would look down upon it. The reviewers were knowledgeable people in the field and you understood it. Which points out, Paul, that the editor is crucial in these journals. I think when a journal gets to a certain size and covers an enormous number of fields, the editor doesn’t know the people who are sending in the papers, doesn’t know the reviewers, probably has this long list and says, “Oh, we have to send a paper today. This one hasn’t done a review for a while, let’s let him do the review.” I think that sometimes leads to a mismatch. In fact, one of the problems GFDL has and we have here: you get a review back and get the feeling that this person doesn’t know what you’re talking about, is not in the field, and is saying things that don’t make any sense. I think that Larry Gates and others who were managers for a long time of journals in this field really know who to send it to. So it gets a good, fair review from someone who’s knowledgeable. That’s harder and harder as the journals get larger and larger. The field used to be so small, Paul, that people kind of knew who did the review. I could pretty much tell Manabe’s reviews or someone else’s. You could tell by the style, the words that they use. Often, people signed their reviews. I did that a lot, signed them, so that if people had any questions —
Since they were going to guess anyway, might as well.
But I think all of us benefitted from the review process because it really helped us make our models better.
Maybe we should stop for now. [BRIEF INTERRUPTION]
So we’re back for a third session for this oral history interview. There are a number of things I’d like to talk about in this session, sort of leading up to the development of the community climate model into the kind of work you did in the 1980’s. Much of that work — your C.V. has a very long list of committees of various sorts that you served on. And I’d like to start by asking you just to talk about some of these. Not all of them, just the obvious ones. But the first one is: you were asked to serve on the President’s National Advisory Committee on Oceans and Atmospheres under the Carter Administration. You continued through the first term of the Reagan Administration. What was that, and what did you do?
It’s a committee that has had a long history, ever since NOAA was developed. It used to be that ocean research was spread out over a lot of agencies; it was illogical, and they put them all together and came up with, actually, ESSA. There was something called the Stratton Commission that outlined how to do this; this was probably way back, going back thirty years or so. I don’t know very much about that history, but the NACOA was an unusual advisory committee, because the people were appointed to it jointly from Congress and the President. It had to cover not only the science issues, but fishing, law of the sea, all of the issues that had to deal with oceans and so forth. And it had great power. It met twice a month.
Really? In Washington?
I think I took three or four trips to Washington.
I remember reading that in the other interview and thinking, “He must be making a mistake.” That’s impossible: six years twice a month?
Yes, well, maybe there were a few months we didn’t meet, but it was — it probably advised the administrator of NOAA more than anyone else, but whenever an ocean issue came up in Congress, they wanted to make sure they had input from NACOA. So I learned a great deal on that, outside of climate theory, about all kinds of complicated issues, from aquaculture to fishing to treaties and so forth. And I testified in Congress many times, on various issues.
Why were you chosen?
I think it was probably because I was an Afro-American — The people who were on NACOA were basically very senior people. I was probably 40 at the time, I guess. 39 or 40. But it was a great learning experience for me.
The beginning of that period, 1978, the Carter Administration, was very much into the synthetic fuels program. I know the DOE’s interest in carbon dioxide assessment partly came out of the issue of synthetic fuels and what that was going to do to the atmosphere. Was that something that community discussed much?
Not much. It was pretty much oceans emphasis, with some atmospheric emphasis, too. But they wanted to have someone who had kind of a scientific background with a computer modeling side, so I probably filled that niche.
Did you ever get into climate change issues?
There was a bill that was passed; I think it was called the “Climate” —
There was the National Climate Program Act, in 1978.
I think that was it. There were some sort of research agenda items. But we talked about it, and we actually saw legislation before it was enacted. You see, that was the Congressional side of it. The Congress would either ask us to come to hearings or to comment on pending legislation. We had input directly into Congress, which is a little different than most presidential advisory things. In most of them, you’re doing the bidding of the President.
The executive branch instead of the legislative.
Right. But when Reagan became President, he appointed a lot of what I would call naysayers to the committee, kind of right wing. Fred Singer was appointed at that time. He also appointed Anne Gorsuch, and she was fired from being head of the EPA.
She was on that committee?
That’s right. In fact, I have a letter somewhere in my files from the White House — because I had to resign because my wife came down with cancer. I was replaced by (I always make this joke) someone better qualified, Anne Gorsuch.
Better qualified to do what?
After she was appointed, it was clear that Congress was kind of fed up with the political appointees who weren’t really qualified to be on this committee. Up until that time, it had been pretty much non-partisan; the President, who actually made the nominations, always consulted with Congress to come up with names of credible people in the field. But when she was appointed, the Congress kind of got fed up, because it was a Democratic Congress at the time and they said, “This has gotten too political.” So they abolished it. I don’t know exactly when it was abolished, but around 1986.
Let me pursue that just a little bit further into some other things you may have been involved in. One of the ironies of the climate change issue in the U.S. is that it really became a mass politics issue during the Reagan Administration, which was the most anti-environmental administration since — who knows. You’re telling me that there was a deliberate effort to pack this committee with naysayers —
There were a number of them.
Others that you remember?
Tell me who they were if you can.
Well, I don’t remember the names. I could probably figure it out sometime.
Any of the famous ones, Pat Michaels or Richard Lindzen or —?
No, no. Fred Singer was — in fact, Fred’s had a long history of being a naysayer. But at that time, remember, air quality/air pollution was a big thing.
Not ozone, but the Clean Air Act was up for changes.
Yes. And we had big battles over that. He was claiming it was natural processes — so forth and so on. Much like he says in his book, Hot Talk, Cold Science. Have you seen his latest book?
No, I’ve just looked at it.
I have it if you want. Fred was kind of an interesting guy for me to deal with because he knew enough science. He had been the head of the satellite part of NOAA, and he knew the science and so forth. So he would always come back with what I call scientific questions and doubts that he raised to a much higher level than what the community felt comfortable with, which is what he’s doing also with the CO2 problem. But is there an aspect of that you want to know about? I think Fred was probably the worst offender he appointed. I remember that he appointed Charlie Black. Do you know who he is?
Shirley Temple’s husband.

I got to know him fairly well. In fact, we would walk to Georgetown after the meetings. Actually, he lived in Atherton, CA.
He was in the business of working on equipment, sort of ocean equipment, for smaller boats. Sounding data and sounding equipment, that sort of thing. His father was the president of the power company (PG&E) back in the twenties and thirties so I think he was quite wealthy. One interesting thing that I worked with him on was a subgroup —apparently there are a set of seals that are hunted up in Alaska. And under some 1870 treaty or something, we buy the seals from these Indians; it’s their only source of income. Then the seals are stored in South Carolina —
Are you talking about the pelt?
Yes. And the only buyer was the Russian Army, because they used [it] for the bills of their caps. The Indians were making a fortune off these things, because I guess when the treaty was signed, the government agreed to buy these seal pelts. Then people used them for hats at the turn of the century. But the market disappeared, [and] we still had this agreement to buy so many of these pelts. He went to look at these villages. Each one of them had a big boat and they had television and antennas, and they were all wealthy because they only worked a few days a year when the seals came in. So the government was trying to figure out how to get out of this treaty and get things back into the free market. Because they were filling up the warehouses in Carolina. I must have been on [that committee] for six years.
Yes, your C.V. says “1978-84.”
If you’re asking why I’ve served on all these committees, it’s probably more than just that I’m an Afro-American. I’ve learned a lot about how Washington works and I’ve been around in various forms, so people keep asking me to come back, I guess.
Actually, I’m more interested in the effects of these committees on the legislative process or how the climate change issue fits. In 1978, the National Climate Protection Act is mainly aimed at kind of climate monitoring for agriculture and insurance, legal issues, things like that. It sets up the National Climatic Data Center and so on and so forth. There is some climate change in that legislation too, but it’s not the main focus. Then, in 1986, there’s the Global Climate Protection Act, which is focused on climate change. And it doesn’t have a money authorization attached, but it does say, “The Congress has found that there may be a serious problem with human-induced climate change.” Now that’s 1986, and you were off the committee by 1984, so I’m curious about what you might remember about the legislative maneuvering during that period, because you appeared before Congress a number of times. In particular, did these contrarians have significant impacts on slowing this down, were they effective when Reagan appointed them, or not?
I don’t know. The paradox was — and this was true with the Bush Administration, too — that they certainly didn’t want to move towards mitigation. So their strategy was that the uncertainties were so large that we couldn’t do anything on the mitigation side. That left an opportunity for them to say, “Oh, but we’ll put more money into research.” So their strategy was to satisfy environmental concerns that they would put more money into research.
So in a way they shot themselves in the foot because by buying more and more research, they created more and more interest in the issue and more confirmation of the —
But I think that interest would have grown anyway. But it certainly delayed doing any mitigation just by saying the uncertainties are too large. If the uncertainties are too large, we need to do more research to get the uncertainties down. I mean, I give a lot of talks on climate change, I emphasize that we probably know enough already to start to take some reasonable steps on mitigation. Because if you wait until it’s worse, then the mitigation steps have to be more severe.
There are certainly a lot of things that could be done without a whole lot of social and economic impact.
That’s right. In fact, you’ve got an economist there at Stanford who will tell you that it’s not clear that there’s going to be a lot of bad, adverse —
You’re thinking of Larry Goulder?
Yes. So I think that’s an arguable thing, of course, but it’s not clear the kind of thing the GOP or the right wing kind of philosophy that the sky’s going to fall if you increase [inaudible]
Let’s go on in that direction a minute. One of the things that’s very interesting about the climate change issue is that it was until rather recently very much theory-driven, that is, models show that this is likely to happen, but the data record is so poor that it’s not really possible to even tell what’s happening, at least to the point of distinguishing signal from noise. At what point did you start to feel the way you just expressed yourself to me, that there is enough evidence that some mitigations could be —
I think early on. Actually, when I read some of Suki Manabe’s work, especially going back to 1967, his work on just the simple one-dimensional radiative-convective model. It essentially showed, I think, what I would call the first order of things. I’ve written, I don’t know if you saw this, this climate modeling book —
Your textbook with Claire Parkinson or the climate system modeling book?
The climate system modeling book. But there’s a section I wrote on CO2 experiments. And I think that that was really what I think convinced me that it was a serious effect. And by going to three-dimensional modeling, we can get a little bit better handle on seasonal and regional effects. I never thought after reading that paper that we would get anything that could counteract it. It was always a question of amplitude and timing.
So not “whether” but “when.”
And I think that’s true of a lot of scientists who’ve kind of looked at it. If you can prove the effect with what I’d call simple and straightforward models, you can’t expect it to be sort of negated in a more complicated model. Unless you’re extraordinarily lucky. For example, as the globe tries to warm up, you increase the amount of clouds in exactly the right amounts so that the earth balance is still close to zero. I just don’t think that that’s reasonable to expect. If you read carefully, most of the arguments of the naysayers about global warming speculate about some of the negative feedbacks in a way that sounds compelling, but when you scrutinize it by model studies or observation studies the system doesn’t hold up [?]. The other thing that has occurred is what Gore likes to show — one time we visited — showing this correlation between CO2 concentration, going back to prehistoric times, and temperature change. Even that ought to convince most people that there’s good correlation between CO2 concentration and temperature.
Do you remember any episodes in which you were testifying before Congress or talking to other policymakers when you felt like there was a turning point? Were you thinking about that?
No, I don’t think so. In fact, I think it was about a year ago that I was testifying under strange circumstances. The reason I say “strange circumstances” is that a number of us, about four or five of us, were invited back to Congress to testify on the IPCC, the results of the IPCC. Well, this Congress that has just finished wouldn’t hold hearings on this. They just have not held hearings on this process at all. In fact, they held relatively few hearings compared to previous Congresses.
Probably too busy with scandals.
No, I don’t think that was the real issue. Typically, the Congressmen or Senators who were sympathetic to an issue would hold a briefing. It would be held exactly the same way as a Congressional hearing. It would be in the same buildings — the Senate or House office buildings — and would be held the same way. Except it wouldn’t be convened by the chairman of the committee. That was the standard practice over the last few years, to hold these. They would invite the press and others to come. Typically, the people who would show up would be maybe a few members of the press.
Does that mean that it wouldn’t be written into the public record this way?
It could be put into the public record, but it wouldn’t be part of the formal Congressional hearing. I think this is a way for issues like this to not be formally discussed, and the only people that would show up wouldn’t — It wasn’t always Democrats. In some cases, it would be Republicans who were partisan, or had views concerning the environment. But it was kind of a strange circumstance, because usually when I testified on other occasions, it was a regular formal hearing called by the chairman of the committee.
Speaking of the IPCC, were you involved in the beginnings of the group?
Probably not. I took part in some of the chapters and went to some of the meetings, and I never wanted to take a leadership role, just because I was doing so many other things. And actually my colleague, Jerry Meehl, took on more and more responsibility as a lead author and things like that. He’s heavily involved in it right now. I just kind of felt I was already doing too many other things and I just would say no. Maybe I had a personal reason, too, and that was that one of the weaknesses of the IPCC process is that they try to reach consensus, and they go in for marathon sessions like Clinton had to do with the Israelis and the Palestinians.
Everybody’s locked into a room for a week.
That’s right. And the most talkative, argumentative people in the community tend to hold forth. There’s a weariness that sets in and people are willing then to give in.
“All right, already. Let’s go ahead, let’s leave. I’ll sign it!”
...So I think that’s the flaw of the IPCC process. On the other hand, even if you get this thing written up, what probably saves it from being dominated by unreasonable people is it gets sent out for review, and the overall authors of the chapters and the overall volume can easily detect in the reviews when they’ve gone too far, or things are off base, or things are modified at the very end by the key people. In fact, that’s one of the questions that Fred Singer had. The wording was changed, he claimed, between what he agreed to by a large group of people —
Oh Chapter 8 business.
I think if you look at the way any report is written — academy reports or whatever — a third of the people have already gone to the airport to catch their flight. People are tired, but what they really agree on at the end are the general conclusions and so forth. Then, weeks later, out comes the final wording. It’s probably sent out again; most people are tired of it at that point, argue tiny differences in the language, and it’s done. That’s the typical committee process. I don’t think there was any deep conspiracy as he tends to want to make it.
This may be something that was rather trivial for you, but you were a member of the San Diego Supercomputing Center Computer Allocation Committee in 1986. The reason that I’m interested in that is that’s the beginning of the NSFNet that became the backbone of the Internet. It looks to me, from an outsider’s perspective, like the weather and climate modeling communities had a considerable amount to do with the establishment of computer networks. Not again in the sense of the Internet itself, but in the sense of there’s a lot of data exchange, even way back to Faxing of materials long before FAX machines were in common use. So I’m curious if you have intuitions like that yourself, if you think that was an influence in what the work on the Supercomputer Allocation Committee had to do with the climate community.
I was asked to be part of that, and I’m not quite sure why I was asked. There are two aspects. One is that NCAR was kind of a leader in those days, in the sixties, in the seventies, in acquiring Cray computers — we purchased the first or second one — we were kind of on the leading edge. The other centers were established because NCAR was really supposed to deal only with the atmosphere and the oceans, so most of the users came here. But then the other fields felt they weren’t getting enough in the way of computer power, so NSF set up four centers and funded those. However, those centers had affiliations with specific universities — obviously San Diego, with UCLA and Scripps, and ironically the University of Wisconsin. The reason I mention that is the scientists at both those institutions looked upon doing their climate modeling, or part of it, at that computer center. The University of Wisconsin group was Kutzbach’s, getting computer time there as well as here. And the UCLA and Scripps people got computer time there. But normally I don’t think they could get large amounts, like a third of a machine or half, because I think there was a resistance to saying, “Well, hey, how come you’re running that stuff here? Why don’t you run it at NCAR?” I was on the committee, so I… But they gave away modest amounts to these other institutions. We had a two-tier formula: one where people just sent in proposals from anywhere in the nation, and one where people from these universities got special treatment. I never thought I was very effective on the committee, because we had one geoscientist like myself, another was a chemist, one was a biologist, and a physicist, and we’re all looking at proposals in all these disciplines.
Comparing apples and oranges.
We sat around the table and made some decisions, which probably weren’t too bad, but — it’s hard for people to judge things outside their own field. That was a curious — I should have probably turned it down. I got off when my wife got sick.
Starting in the late eighties, you became involved in a lot of committees on climate change stuff. One of the ones I note here is the NASA Greenhouse Detection Project Science. What was that? 1989-1991.
I don’t think it did much. I went to a couple of meetings, but I don’t think it did much. Apparently they got some money and they needed to have a working group kind of help guide direction, so they had a couple of meetings, but I don’t think it did much.
Let’s go back to the supercomputers for a minute and jump to the present, because I want to hear about [how in] the last two or three years, there’s been a lot of controversy around NCAR’s decision to purchase a Japanese supercomputer, which was then turned down. Sounds like you’re involved in another iteration of that process.
UCAR contracts with the National Science Foundation to run the Center. And it has more latitude than, say, any NSF Center. An NSF Center is really managed by NSF, or part of NSF. Whereas NCAR has this interface with the UCAR management structure. So we did make the decision to let the manufacturers freely compete. So we set up a benchmark code including the CCM-3, or CCM-2 maybe at that point. Ocean codes and other codes that we run here at NCAR. And we competed and the Japanese computer came out the best at that time.
The NEC. So we wanted to sign off on that computer.
How much was that going to cost?
I think roughly $30 million, that sort of thing.
A single computer?
...let me explain about that. In fact, I was on the National Science Board at the time all this started. I couldn’t directly deal with NCAR because of conflict of interest. Whenever NCAR came up, I would have to leave the room. But I did learn a lot in talking to colleagues on several occasions. The U.S. Global Change Program funded a computer at NCAR at roughly $7 or $8 million per year, so you would have to pay for four or more years for such a computer. And NEC came out the best. The Congress — after NCAR made the selection, there were some hearings, and they accused NEC of “dumping” — selling the computer at less than cost. And Cray got very aggressive about it. The district that Cray is in is Congressman Obey’s. Maybe you’ve heard about this.
I’ve heard versions of it, but not in this detail —
It’s Congressman Obey, who is the Minority Ranking Member of the Appropriations Committee.
“Minority” in this case, Democrat.
Yes. But a Democrat has a lot of power, especially the Ranking Member. Because in order for the budgets to get passed even through committee, there is some give-and-take between the majority party and the minority party about things.
Cray is in Minnesota, is that right?
Yes. And there’s a committee of the National Science Board, called the Committee on Programs and Plans, which I serve on, and which votes on the authorizing of anything more than $9 or $15 million; it depends on how it comes to the Board. So we have to vote. So we voted to go ahead on that, but we put some special wording in there that left it up to the discretion of the Director to go ahead with the authorization to spend the money or not. This is where we get into things that probably shouldn’t be quoted...but I’ll go ahead and say it… In private discussions between the Director and Congressman Obey, he threatened NSF with a cut that would be many, many times the cost of the computer. Now the Director felt he was in a moral bind. If he went ahead, the NSF’s overall budget could be cut substantially over one issue; or should he just go ahead and do this on principle so NCAR could benefit, but then maybe the “rest of science” could get hit? In the normal budget process (this is probably something that’s not widely known), the budget comes out and NSF doesn’t usually have a lot of earmarks. We get occasional earmarks, like Senator Stevens wouldn’t fund the Polar Cap Observatory because it was in Canada. He was mad at the Canadians because of some fishing issues. He wouldn’t fund that, so there was a line item in the budget: “No appropriation of funds for the Polar Cap Observatory.” So the problem for the Director is that there is a lot that is unspoken — even though it’s not an earmark — where you go in and it’s just the Congressman and the Director talking about what might happen if you do certain things. So the thing that kind of saved this moral dilemma for Neal Lane as the Director of the National Science Foundation was that this issue came up to the Fair Trade Commission — I’m not sure if that’s the right word, because maybe it’s the one that deals with foreign countries. And they ruled that it was dumping.
So the Director felt relieved and could say, “Purchase denied.” I think NCAR learned a lesson out of that, too, at least for the upper branches of the organization. It’s not worth the battle, because it held things up for several years. We went through this, and there’s no advantage to making Congressmen mad. It allows too much mischief. For example, NCAR’s a line item in the budget. And what really happens — I’m on the Executive Committee of the Board, so there are four of us who vote on the budget for NSF. We put in all these numbers and it goes to OMB and OSTP. But then it gets massaged on the Congressional side. And they have a hearing on the appropriations, but numbers are never talked about in great detail — they just talk about, “Is it the right balance?” Then, just like what happened a week or so ago, there’s a 4,000-page document which everybody opens up and looks in to see what’s in there. There’s no rationale for why NCAR should get $55 million; astronomy, $100 million; social sciences, you know, $20 million — those numbers are just there. And so it benefits everybody to be on good terms with the Congressmen and Senators. Unfortunately, a little bit of schmoozing is done, as long as you’re not sacrificing all of your principles. I mean, for a purist, it’s not a very pleasant picture. But coming back to the computer issue: here at NCAR, a lot of our scientists were really devastated by the decision.
I know there’s been a feeling here for a while that the European modeling centers are sort of pulling ahead. One of the things that’s holding us back is the lack of computing power here.
Right. And I think that’s kind of where my efforts differed a little bit from the Climate System Model, where we were always funded by DOE. Jerry Meehl and I were primarily funded to carry out CO2 experiments, greenhouse runs. What we’ve changed to a little bit is that the DOE has gone — not always, but in the last five years — towards massively parallel computer architectures. So, as part of our work, we combine the two. In other words, we take elements of the CSM, for example, and the CCM-3 and the newer ocean models and the sea ice models that can run on massively parallel computers, and adapt those to the type of computers DOE has gotten, and NCAR’s really gotten. Because NCAR’s got this Origin 2000, a 128-processor computer, and that came in under this special funding for the climate system; it’s a multi-agency computer, actually. It’s not an NCAR computer. It’s a computer run by the U.S. Global Change Committee. It’s the same kind of computer we were trying to get with the same funds that were used for the NEC. So, it’s allocated by a joint group that represents all the agencies. So it kind of benefited us greatly, Paul, because our models ran on that type of computer right from the start. From the first day it came in, we were running it 20 hours a day and we were able to take advantage of it. The CSM is starting to move a little more in that direction. So there’s going to be a little bit closer structure between the CSM and the CCM.
We’ll come back to the CSM in a minute, but tell me the end of the supercomputer purchase story. What’s happening now? Aren’t you now at a point of finally being able to buy one?
I think we’re kind of in a limbo right now for a little bit. The $7 or $9 million — I forget what it is — is kind of building up, so we do have some money.
Sort of the annual budget of the USGCRP for climate projects —
So it’s in the budget and it’s coming every year. We’ve used it for purchasing these parallel computers, but also Cray-type computers, the type of the older Crays; they’re something called a J90. (I’m not sure what the J stands for.) They’re vector-type computers, whereas massively parallel computers are multi-processor-type computers with lots of processors, so you can get different kinds of codes on them. I think our strategy has been to have these two parallel paths. But we have the money to purchase another computer. Now Bill Buzbee stepped down as director a few months ago, about a month and a half ago. We have a new director, so he will —
[Al Kellie]. He’s from Canada, he ran the Canadian Computer Center in the Climate Office, the Climate Program up there…I’ve never met him. I think people want to sit back and see how he wants to proceed. But we’re probably going to put out another bid for a computer, but I think they’ll just buy American. The NSF should have a lot more money, too, in the budget in future years for supercomputing as well as DOE.
Let’s come back to the CSM and actually maybe go back a little before that and talk about the various generations of the CCM. We talked already about the CCM-0-AB. There’s CCM-1. What were the directions that CCM moved in? Obviously, ocean-coupled is one. I’m thinking though of social orientation, that is, did the project work on the CCM change the NCAR community in any way?
I think there was. Even when I was director [of CGD], making the CCM was the highest priority for the division, and a lot of resources went into making sure that was reasonably well-funded, in terms of programmers and so forth. The culture changed around here, because up until that time, if you were a senior scientist — when I first came, back in the sixties, when Akira and I first came, we had lots of programmers and what we would call “associate scientists.” It was kind of like a ratio of one Ph.D. to one associate scientist and programmer. As the years went on, budgets didn’t grow but the salaries kept increasing from inflation or [?] the [Vietnam] War, because people went up the ladder. We started cutting back on support. When Francis [Bretherton] became director, we had to focus more on climate modeling as an organized activity. So the programming support left the individual scientists and went to the climate modeling, at least in this division. So if anybody wanted a programmer, he’d have to go out and get outside funds, which is the kind of thing I did, getting DOE funds. So we’ve kind of gotten into the mold now: you can’t get any help. You can come as a Ph.D., but that only gives you a fishing license. It doesn’t give you any guaranteed help. You can’t get any programmer help unless you’ve got outside funds. So we’ve gone closer to a University [?] mode — you hire a professor —
Does this mean that people end up doing more of their own coding and stuff like that?
Some. I would say probably that usually means they write grants to other agencies. You can’t write it to NSF, but you can write to other agencies. You can see how that on its own can cause some feeling: “Well, the institution is not helping me out that much, I’ve got to bring in my own support.” Not that much different from the academic —
It does sound very familiar.
I think the difference might be that being a professor, you’ve got to teach a course or two. Here you probably should do something that’s helping out the overall effort, and that’s where the director will say to a scientist, “It’s OK for you to do this other stuff, but you’ve got to help out on some of these larger questions.”
Well, what about technical directions on the CCM? Generations of it, what changed, what got better? What was integrated?
Are you talking about the atmospheric models or the coupled models?
Let’s talk about the atmospheric models first.

When we went to the later versions of the CCM, 3 or 2, what you really got there was much improved radiation and cloud parameterizations over previous [?] solutions. Boundary layer treatment was more substantial. The numerical schemes didn’t totally change that much, even in the later versions of the models, although Dave Williamson just tested something called the “semi-Lagrangian technique.” We don’t use spectral technique.
You don’t use spectral technique?
We use kind of a Lagrangian approach, but it hasn’t been proven to be superior. The other modeling groups at GFDL and the Hadley Centre haven’t gone over to it at this point. So I would say it’s yet to be proven. Of course, Dave [Williamson] is pushing it, because that’s where his area of research is, but it isn’t like the revolution that went in between finite-difference schemes and spectral, which most of them switched over to… So I would say that that’s really improved the atmospheric component very well. And I’ve already mentioned that the ice models have come along to be more realistic in later versions. The ocean model’s biggest innovation has been this parallel ocean model that Los Alamos has, which even the people at GFDL are starting to use. So what you’re seeing is two separate models, the same physics. One runs efficiently on massively parallel computers; the other one runs on a large vector computer. There are some other differences, too. For example, POP [?], the Los Alamos one, is not well-documented and is programmed for ultimate efficiency on such computers. The GFDL one, MOM, has been designed to work with many, many users on the outside. So they’ve tried to make that more of a community model where people can send in suggestions for changes to MOM’s options. And there’s a whole users’ page where people can grab pieces of code to change and do this and that. So you can imagine with that kind of code, it’s not going to be terribly efficient, because you kind of take into account ease of use and criteria, to give them some way to — whereas POP is entirely different. They’re constantly tweaking to get the maximum efficiency.
Tell me about the CSM. When did that project start, what does it integrate —?
The smaller project that I had for a number of years was pretty much “Mom-and-Pop” compared to the CSM. A group of scientists, led by a number of people around here, felt that we kind of needed to have a larger, more integrated approach. I’m trying to think about when that actually got off the ground. I think about six years ago… In fact, have you got a copy of the Journal of Climate that has all the articles that just came out?…It’s a big thick volume that has a little bit of the history of that model…It has the structure of the scientific steering committee and working groups for all these separate areas, and they have a retreat up in Breckenridge every year. There are 150 people. So it isn’t me and Jerry sitting around the table with a group of programmers figuring out what the next thing is to do. It reminds me of the difference between Scott and Amundsen, who went to the South Pole. Which approach is the most efficient [?]: organizing a huge infrastructure, or several people putting on backpacks and heading out on their own?
Does this imply you have a choice on that?
I think climate modeling outgrew the days of small groups. And in fact, even when I talk about our continuation of the parallel climate model, our paradigm is different. Our paradigm is working closely with the people at Los Alamos, here at NCAR on the CCM, and Bert Semtner’s group at the Naval Postgraduate School. So the paradigm is one of distributed interaction. Whereas the older paradigm has always been, “Let’s get a computer, let’s get a staff and staff it up in all these three different areas of atmosphere, ocean and sea ice. And land surface. And then we could put the model together.” I think the other paradigm is over the Internet these days. We can all work jointly on the same thing. You don’t necessarily have to be co-located. And that’s a shift in paradigm. To a certain extent, the Hadley Centre was the old paradigm. The European Centre (ECMWF) is like the newer paradigm, where scientists come and then go back to their home institution and keep working — logical.
Well, obviously another thing that’s happening and must be going on in the climate systems model project, too, is the move to integrate more and more kinds of processes in climate model —
— vegetation, agriculture — that just has to involve people in different disciplines. I mean, one of the things that has interested me as a historian looking at this field is that — because what it looks like to me is that the project of the climate model becomes a kind of magnet that draws people from different disciplines together and forces them in a way to talk the same language, because they all have to work with the same artifact… building this model. So ecologists have to start talking about grid boxes that are much larger than the little plots they’re used to working on in field studies. Chemists have to scale up, and so on and so forth. Do you have any experience with that kind of interaction?
Tell me some stories about that.
Something came to mind when you mentioned that. You’ve probably seen my book, “An Introduction to Three-Dimensional Climate Modeling.” The motivation for that book was that we were interacting with lots of different people in different disciplines, but they had no idea what goes into climate modeling. Maybe I’m exaggerating, but they just didn’t know what goes into the climate model. So our purpose for writing the book — where we start with and go almost step-by-step through the equations that were used in models of mid-1980s vintage — was to kind of open up the black box, so to speak. Now this book up here on climate system modeling that Kevin Trenberth edited is probably a bit too much. It also suffers from being like reading scientific papers, where it shows the equations. But for a novice —
Where did that equation come from?
Well, you can go back to the original journal article, but then it usually doesn’t have even enough in the journal article to explain where that equation comes from. I’ve been trying to update that in the book that Claire Parkinson and I wrote. It’s been a struggle because so much has happened in ten years in the field. But I still think there’s a need to explain what’s behind these climate models, and explain them in a way so people can understand. Not every nuance, because you can’t do that. But the main themes, like what goes into the hydrological cycle, what goes into the ocean model, and why you would make this approximation versus some other possible conception. I guess I think there’s a need to bring people into the field, and this gets back to what you asked. They need to understand a little bit of the whole and not just their little piece and how it plugs in. I can remember in the early days — have you ever heard of this guy “Hibler”? Bill Hibler, who was at the Cold Regions Research and Engineering Laboratory in New Hampshire, in the Army. He’s the world expert on ice dynamics. Actually we’re using his techniques in the latest versions of our models here — crunching of the ice together and how it interacts. When he first came out with a model, that model took more time than the atmosphere and the oceans. We sat down and talked to him. He would say he wanted us to use his model. We would say, “Bill, ice is important, but we really think the atmosphere and the ocean are more important, so that’s where we should put more of our computational time.” He was a purist. He felt he had solved the ice as a plastic, semi-solid material, and that’s why he had put all this computational effort into it. It was only after a long time that he realized that nobody was picking up his ideas because they were just too complex to use with climate models of that time.
I’ve run into that kind of philosophy with people who work in the hydrological area or in the area of vegetation, because they have been working very diligently on very detailed component models. But they were impractical for climate models because of their computational cost, and we just had to say — So I think there’s a need for people to understand how they fit in, and they shouldn’t have a lot of disciplinary bias that their part in the overall climate equation is the most important factor. There needs to be a sense of balance about how much each factor has to be put in in order to get reasonable simulations. And that’s a common kind of judgment call. I don’t think there’s something like a scaling argument or something like that that will automatically tell you what’s important and what’s not important.
Let me go back to the question I’ve asked a couple of times about earlier periods about datasets, especially in the last decade or so, there’s been a lot of satellite data available and much, much better global datasets from the surface and other re-analysis of previous datasets and so on. Has that had much of an impact on your work? Have you changed parameterizations or anything —?
I think so. I would have to say not only me, but there are a number of people here who’ve looked at the diagnostics from climate models [who] are much more adept at that than I am — Jerry Meehl, for example, and Jim Hurrell, who is in Kevin Trenberth’s section [of the NCAR Climate and Global Dynamics division]. [Jerry and Jim] have looked at climate models in much more objective kinds of ways, and they wouldn’t have been able to do that five, ten or twenty years ago, because the data had such large uncertainties that if the model was somewhat close — you wouldn’t really know if the models were right from the data. Now that we have much better global coverage, and the analyses are more global and more scientifically sound, and we have satellite data that’s part of the datasets, we can actually run the climate model — take like a five-year average or something, or a ten-year average — and subtract the model simulation from the observed data and see where the biases are. That allows us to be more quantitative in terms of what is coming out of the model. It used to be that if the winds were blowing in the right direction, the rainfall was about right, the pressure was right — that’s a pretty good simulation. But you’ll find us using similar words — I mean, in the same ways we were doing twenty years ago in our papers. “Looks close”, “very similar to”, “matches well with” — those kinds of words. I think that’s OK. But we can get more quantitative, and not only look at the meteorological variables, but look at the top-of-the-atmosphere fluxes and compare those to satellite data. We can look at charts, at the wind speeds over the ocean. We can look at the eddies in the ocean and compare those with some limited datasets. We can now do much more detailed and quantitative inter-comparisons.
Is there anything these much more detailed datasets have shown that was just completely wrong in previous versions of the climate model?
Let me give you one example that comes to mind. We had so few ocean observations in the past that we thought the oceans looked pretty much like the classical picture, with the major ocean currents flowing in the Gulf Stream, other currents in the tropics and counter-currents and all of that. We discovered through higher resolution that eddies in the ocean were ubiquitous. And we found eddies everywhere, of different sizes. And that the structure almost looked like a turbulent fluid. [TAPE STOPPED TEMPORARILY]. [Demonstrating]. You can see the eddies… eddies are everywhere. And we obviously can’t get that exact picture from the oceans. But we can go to certain locations and have buoys and other things that can give us an idea of the scale of these eddies.
— look like a bunch of rivers flowing in the middle of still water.
And I think that was the image we had. In retrospect, I’m sure people didn’t think it would be quite that simple. But we’re able to simulate that now. For example, on this Gulf Stream, let me just read this out to you. Previous models had the Gulf Stream separating at the wrong location and actually going too far east here, whereas the real Gulf Stream actually goes in this direction here. And we were only able to achieve that by going to very high resolution. Even the model I’m using now doesn’t have the Gulf Stream quite right, because we don’t have this high resolution. You get this breaking of eddies and shedding off of eddies as the Gulf Stream meanders, so it’s important that we get the higher resolution. Let me see if I can think of any other examples. By going to higher resolution, we can really resolve the land surface, both in terms of topography and in terms of vegetation types. And see the role of the vegetation types. Earlier models didn’t have the Great Lakes. They didn’t have the Mediterranean. We have it in there now, dumping salt into the Atlantic.
??? resolution model. What about data impact on models?
Well, I think it’s gotten easier for us because now we can compare our models with the data in much more detail and see where it’s going wrong, if it’s going wrong. In most cases, by increasing the resolution and the physical processes in the model, we’ve gotten answers closer to the observed. And it’s been almost an experimental kind of approach. You just increase the resolution and find out whether that solves the problems. So you look in more detail. For example, on the Mediterranean: our early models had very thick vertical layers, and so the most dense water comes out of the Mediterranean, through the Straits of Gibraltar. But what happens in nature is that salty water comes out from the Mediterranean and it drops down several hundred meters because it’s highly dense, and then it moves horizontally into the mid-Atlantic. But what we were doing was mixing that water over thick layers, so it wasn’t quite as dense. So the water actually then started going around the North Atlantic gyre, started circulating at a different level. So there’s an example: once we identified what the problem was, we were able to improve the North Atlantic circulation. Another example in the ocean (I should probably cite some other examples, too): have you heard of the “GIN Sea”? The GIN Sea is up in this region right here, and what is important for the GIN Sea is that there are some ridges up here which are very important, so that when cold water comes down out of the Arctic, it goes over a sill and sinks very rapidly and then comes down here, and eventually goes into this conveyor belt circulation that people talk about. If you don’t get that water mixing down properly — it isn’t actually mixing, it’s actually like a stream that’s dumping down. If you mix over too thick layers, you still don’t get that kind of feature, and the circulation just doesn’t work properly. In the early models that we ran with coupled models, we had maybe four levels.
Now our models, and the GFDL models and CSM models, have 30 or 40 levels in every …, so we can get the physics correctly. I won’t say correctly, but still [garbled] wouldn’t be quite right. If I can just kind of shift: even up to the 1995 IPCC report, most ice models didn’t have any transport.
So the ice would just form and sit there.
Right, right. And we now know, from the buoys they put on ice and from satellite observation of the limits of the ice, we can watch the ice move on a day-to-day basis now. And the data is stored at some ice centers that archive this data. Now we can do very detailed comparisons with the sea ice in our models and see if the ice is going in the right direction at the right velocities, and the right [???], which is something we couldn’t do without good data. So data has really made a big difference. You’ve heard of this thing called GPS —
Sure. Global Positioning System. Yes. But did you know we now have a satellite system that’s being experimented with that will give temperature over thousands of points every hour? Let me see if I can explain this. NCAR’s heavily involved in this — actually, mostly UCAR. The signal — you know, GPS sends signals back and forth. Those signals are a function of the density of the air, and if you have some independent data, you can infer the temperature structure. What frequencies are these signals on?
I don’t know.
Similar to the microwave sounding units?
I think something in that range. We’ve got these GPS satellites floating around all the time.
Which have nothing to do with weather. But you now understand you can use the signals for this purpose —
Right. So there’s a satellite going around that measures these signals. On the satellite side, as I understand it, the idea is like a [?] put up several of these small satellites and you would be able to get the temperature structure of the atmosphere every hour, in a very two-dimensional sense, at a very high resolution. (I forget what the resolution is.) And that’s going to revolutionize things, because with that kind of data, you can put that into the models. You have to take some care putting that in, because it doesn’t go in as a three-dimensional dataset; each satellite makes the measurements along an orbit, so you put it into the model kind of like the orbits of the satellites. And it will give us a better idea of the temperature structure. Remember what I said to you yesterday? If you know the temperature structure outside of the tropics, essentially you know what the wind structure is through the geostrophic relationship. And you ought to be able to update. That will give us a much better initial condition for our models for forecast purposes, but we can use it for climate too, because it will improve on the European Centre analyses, for example. There are a lot of things happening now on the technology side. The satellites of EOS will be up by the turn of the century, and they will be giving us new datasets, too. In fact, after lunch I was walking with my tray and I ran into Tom Karl. Do you know Tom Karl?
I saw him last week…
He’s worried, because he was asking me whether I could serve on Jim Baker’s advisory committee. After EOS is done, NOAA is expected to be the long-term archival place for EOS data, and there’s no provision for storing this huge amount of data in the NOAA archives, because there’s no budget for this massive amount of data. I think that they’ll get this worked out. If you spend something like a billion dollars a year on satellites — maybe it isn’t that much; I think $600 million a year over 15 years — why wouldn’t you spend any money on proper archival storage? And it isn’t just a matter of storing it — it’s making it accessible to the scientific community, and it shouldn’t be expensive or hard to use.
I had a very impressive visit to that place [?] [NCDC] last week.
Good. You know, Tom’s an old friend of mine.
I was at Aspen summer school with him for a week a couple of years ago that Steve Schneider organized. I got to know him, like him a lot. But they were showing me these Doppler radar tapes they have there; data from a single station for three days takes up a 200 megabyte tape. It takes five hours to read. So if you want a kind of big picture with all the 1600 stations, accessing it is a really big project.
There’s a company here in Boulder — have you ever heard of Storage Technology? They were started by a neighbor of mine, wish I had bought stock in it! I think their profits last year were $120 million. And now, they built a $10 million exit off Highway 36… the state wouldn’t do it, but they went ahead. Its claim to fame, Paul, is that it builds high-speed storage-capability equipment. So every computer center in the world that has a supercomputer has one of their silo units. Remember this is kind of a staged thing because these computers run so fast nowadays — data gets dumped off the computer and goes into what is called a silo. The silo looks like an old farm silo. Inside the silo there’s a thing that goes up and down and grabs these tapes.
The robot tape —
— and stores them in a fairly high-speed way. Then, since a lot of data doesn’t have to go to archival but is just used temporarily and goes back and forth — the steps usually are from the computer to a disk, a large disk, and then to the silo. And from the silo, it goes to archival storage systems. They build all this stuff. IBM does too, but IBM is more on the archival side. But the silo is crucial to these supercomputers because otherwise you’re dumping this data out at such a rate that the disks run out and you can’t keep up, so the silo is a fast interface with terabyte capability. We’re going to get to petabyte capability in a few years. Now, what’s important for what you just mentioned (“it takes five hours to read these tapes”): if they had a silo down there, it wouldn’t take that long. The silo is very high-speed for a tape drive. He’s talking about archival, you know; once you break it up, it ought to come up very quickly in a silo system. I can show you the one downstairs if you ever want to see it. Let me know.
OK. I’ll take a look. A couple of things I want to ask about in terms of projects you were involved in. One thing I’ve noticed is around the late eighties, you were involved in a whole bunch of modeling comparisons. That’s obviously something that matters a lot more for the climate change issue because it tells you whether the models are just going off in all different directions, or whether there’s actually something convergent. Tell me about those experiences.
I’m not sure I know what you’re asking.
Let me be more precise. [Refers to a Warren’s book?].
[indicating] That’s still ongoing. Obviously in the early days it was done with general circulation models that specified ocean temperature, so that you were looking at just the atmosphere itself. Jerry Meehl has actually taken part in more of these things now than I have. What Jerry heads up now is a coupled model inter-comparison study (called CMIP, the Coupled Model Intercomparison Program or Project), where they’re using coupled models rather than the atmosphere alone. There are now many different types of intercomparisons, which give a range of uncertainties in climate and Earth system modeling. It’s come a long way. I think it’s been very healthy for the field. It’s done in a way that does not ostracize the modeling groups whose models seem to be less good than other models. The reason for that is that often the modelers themselves find out that, “Gee, our model has this problem or that problem — too much rainfall, temperatures too warm or too cold, whatever.” And just by some relatively small changes, they can put it back to where it’s competitive with the other models.
How were these inter-comparisons done? Was this something that involved an actual meeting of all these authors? How did you do it?
Well, the DOE’s been the one that pushed this. In fact, they eventually set up this group at Livermore, you know, the PCMDI. I forgot what those initials stand for. (Program for Climate Model Diagnosis and Intercomparison)
It used to be called the Atmospheric Modelers’ Inter-comparison Project…I know exactly what you’re talking about.
The DOE’s funding that program, and they have a suite of capabilities there. They can either take the models and run them with the authors if they have enough computer time, or they can take the data from the modelers. (I believe now they are just obtaining the data.) There are agreed-upon protocols on which variables they’ve asked each group to submit. And they originally specified the ocean temperatures they wanted everybody to use, so there wouldn’t be a difference caused by ocean temperature differences. Most people have agreed to take part in that; I think they find it’s a useful exercise.
Did you learn anything in particular from any of these models?
Yes. I think that we learned — the ones that Jerry and I were using, the older generation models, had some very strong biases, and most of those biases have been improved in CCM-3. So the CCM-3 model is competitive with virtually all the other models — the European Centre models, the Max Planck — in fact, I think it’s better than Max Planck. Well, I’m not sure. It depends on which version, because the reason these inter-comparisons keep going — I’m not sure how often they’re done, every three or four years or something of that sort — is because there are new additions to the models; people have added new features. For example, a new feature that people are adding is liquid water concentrations in the cloud microphysics.
In the atmosphere?
Both in prediction and climate models, as a way to get a better handle on cloud processes.
One thing that one might mention about projects like that is the sense that the modelers — you know, if you think back to the early sixties, GFDL was a closed shop; it was a different atmosphere, in which modeling groups were developing models more or less independently. The model inter-comparison project invites sharing of things to the point that one could imagine everybody going off in a wrong direction because a kind of herd mentality develops. Somebody does something that looks good on — and everybody else does it, too —
That’s the danger: that everyone will migrate to the same physics, or the same way of doing things. I think that’s a danger. It’s true more so now because of the heavy investment. In other words, if you wanted to write a proposal to NSF to start a new model, they would say no. They wouldn’t even entertain it. What they’re trying to do is consolidate. I think the enlightened program managers of these various agencies need to look at it this way: if somebody sends in a proposal for another numerical scheme, or a better cloud treatment, or a different sea ice parameterization, or some other innovative thing, they need to make sure that’s given a fair hearing in the review process. Because if not, the field’s not going to be looking for new ideas. And I’m worried about that, especially with CSM. Because you know that when you need to have a bigger structure, there’s a tendency for the quiet scientist — who may be extremely good, who has a new idea — not to be heard. I can give you several examples, but the most notable one was when Bill Bork — Canada — worked with the Australians on the spectral model. Most people didn’t think spectral models were attractive in a finite-differences world, and if someone had said early on, “That’s a bad thing to do, let’s not put any support or money into it to test it out,” we would have lost a huge innovation. It wouldn’t have come naturally through some big committee thinking, “Well, should we work on spectral models, or should we put more investment into finite differences, or should we do this or that?” I don’t think innovation comes out of that kind of committee mentality. (I shouldn’t preach on things like that.)
Closing up for today, I have two more things I want to talk about. They take a fair bit of discussion. One is: we’ve talked already about some of the affirmative action issues minority scientists have, and things you’ve done. I’d like to hear about the last ten or fifteen years. You’ve received a number of awards for your work in the area, and served on committees to improve the situation. You were saying a few minutes ago that it’s not — I don’t have a very specific question here, but I would just like to hear more of your views on the present situation with respect to African-Americans in science, how you —
How I feel about it? Let me put it in the broader perspective — not only African-Americans but other under-represented groups. I think we’re in a double bind to a certain extent. Everyone realizes that the numbers are quite small, and at the same time there are more attacks on affirmative action. Affirmative action does work and is a way to help solve the problem of under-representation. If you go to a place like GFDL, for example, or one of the Washington laboratories, if you look at the statistics, they’ll say, “You know, you’ve got 25% Blacks or Hispanics.” But they’re mostly janitors and secretaries and so forth. And the people representing the higher end of the job spectrum just aren’t there. And we’re in a situation where the culture has changed to a certain extent. This gets back to something I said to you a couple of days ago, or yesterday I guess it was: I grew up in a kind of integrated neighborhood that had a relatively small number of African-Americans. And the high schools were extremely good. And they didn’t have all the disciplinary problems and so forth. So when students applied themselves, they could probably do reasonably well in science. We’ve gone to a system where there’s been more segregation to a certain extent. The inner cities now — places like Detroit, Los Angeles in some places, and other cities — don’t even teach physics in most high schools now. They do have chemistry — usually an elementary kind of chemistry — or biology. So when those students do come through those schools — especially in the inner cities — they don’t have the skills to go into science at the college level. Unless they take remedial preparation, and usually that’s extra-curricular — most of them don’t want to bother with it. So even though maybe the population of students is going up a little bit, the ones who are going into science are going down. That’s the problem.
And I consider that deplorable because — I’m talking about absolute numbers, not percentages — because the number of minority students is probably increasing over twenty or thirty years ago, so there ought to be a higher percentage that’s going on to college. I think that’s a little discouragement on my part. Not to the point where I’m not willing to put in effort — I am willing to put in effort, I do put in effort. As a matter of fact, I make sure that every minority student that comes and works here, they spend time talking with me and getting some encouragement, advice on what graduate schools to go to, and who to contact and that sort of thing. So I still do a lot of this kind of mentoring with graduate students, undergraduate students, and young professionals in the field. I consider that part of my obligation. As far as I know, I think I’m the third Black Ph.D. in the field.
In atmospheric sciences?
I think so. It’s interesting. One just recently died; his name was Charlie Anderson. He was probably the most senior person. I don’t know if you’ve heard of Charlie Anderson. It’s interesting; he was one of these famous Tuskegee Airmen. But he was a weather officer for the fighter wing. Once the war was over, he went to Chicago, and I think he got his Ph.D. at MIT in 1962 or around there. He was a little older than I was. He died about three or four years ago, but he was at Wisconsin for a long time teaching cloud physics and then went on to North Carolina. He was a big influence on me.
So you knew him pretty well?
Yes, I knew him very well. And the other one is an interesting case. Patrick Obasi. Do you know who he is? He’s the Secretary-General of the World Meteorological Organization. He signs all those reports. In fact, he became Secretary-General after Aksel [Wiin-Nielsen] was Secretary-General of the WMO. He was from Nigeria, and went to MIT, and got his Ph.D., I think, in 1963. Since he is really from Africa, I suppose he should not be counted as an African American; thus that would make me the second African American with a Ph.D.
Did he go back to Nigeria, or did he stay in the States?
He went back for a while. In fact, I met him — he was on the wrong side of a coup. This must have been sometime in the early seventies. So he left Nigeria and went to Kenya. And there was an interesting experiment there — there was something called the “East Africa Meteorological Organization.” What they did was all those countries — Zimbabwe, Kenya, Uganda, and two or three others — joined forces into one weather service that was joint between all the countries. They also had a department of meteorology at the University of Nairobi, and that was probably the best department on the whole continent of Africa. Many of the Africans who are over here went to that department. It’s been disbanded now because of political problems and so forth. There was that department and another in Nigeria, which actually taught meteorology to most of the Africans. And the good students went on to Europe and England and the U.S. to get their Ph.D.’s. I was pleased to see later that there are now international programs to educate and train meteorologists between Africa and the Western countries. I have no idea what the total population is — we don’t keep track of each other — but I don’t think we probably graduate more than one Ph.D. every two or three years in the country now.
I think it sort of begs the question, and I think I know what you’re going to say, but is there something about atmospheric science that makes it particularly difficult to —
I think we’re looking at the 1% rule. The number of African-American Ph.D.’s in the physical sciences is only about 1%. So when you’re talking about — we’re probably generating only 100–200 Ph.D.’s, so it’s probably less than maybe one or two a year that are coming out. So that kind of gets back to what you asked me about: do I expect to see another Warren Washington here at NCAR? I think our chances are pretty slim. We have about 130 Ph.D.’s; I think one or two have graduated per year. The chances of NCAR getting one of those is probably pretty slim. I think the academic people will grab them a lot faster, because in the academic world — places like Michigan — they are under a lot more pressure to make sure they bring in capable Ph.D.’s.
I heard about a big lawsuit last year against the University of Michigan over affirmative action, to try to get rid of the affirmative action policy, and they are probably going to lose in the courts and still be [?].
Yes, the University of Michigan case has been a cause of concern. At the National Science Board, where I served for 12 years, we had a case that we just had to settle. It was a case which was going to devastate our affirmative action programs at the National Science Foundation. On the advisement of the Department of Justice, we had to settle a case in which we had a summer camp for Black girls to teach them science for two weeks. It was for juniors in high school. And a white male applied to that program, and was turned down, and was paid a handsome settlement. Now, it was very distressing to the Board, but the Justice Department felt that, under the circumstances, it was not the kind of case we wanted to take to the Supreme Court. But apparently this person was put up to it by right-wing organizations, because they are the ones who can pay the lawyer fees. And when you look at it, I don’t think it was a discriminatory program — in the normal sense of the word — because they had picked out these bright young girls who were interested in science, and thought that they could get some advantage from interacting for a few weeks in the summer, reinforcing each other that they were doing the right thing in going into science. It was supposed to be a fun experience for teaching science. I don’t know what school this was at, but there are all kinds of these things going on all over the country now. I just feel that we need to have all kinds of programs to encourage students to go into science and technology areas… it is the future. No one program is going to fit all circumstances. It is a shame that right-wing organizations want to put roadblocks in place.
These play a very important role. That’s terrible.
So we lost that case and we had settled. And I think the lawyers got all kinds of money out of it. Now we’re trying to fine-tune the language for these kinds of workshops, so they pass muster with the legal requirements. We don’t want to stop doing them, but we need to make sure that they’re written up in a way that satisfies the legal requirements.
Let’s talk about the Sununu affair. You’ve written this up very nicely in these pieces that I guess were made from your notes on the plane coming back from —
So you’ve made copies?
Yes, I really enjoyed that. I had heard about this from a number of people…We don’t need to go through the whole story since you’ve already written it up elsewhere. Are these things available for people to see, are they published? I didn’t think so, but…
I don’t know. I don’t think there’s any problem with making any of them available. If you want to use them in any way that makes sense to you as part of your story…that’s fine. I’ve never published them because at the time I was told, and I knew, that if I had revealed it to the press, it would have caused a lot of embarrassment. And I would have eliminated an inroad with these people. People in the Global Change community thought it was great that I had such ready access to the White House. Even Gore asked me to tell him about…
Let me say on the tape as we’re talking about it, that you were asked by John Sununu to give him a sort of stripped-down climate model. He ran it in his office; you did that —
I’ve got that, if you want to put it on your computer.
If it’s easy to get, I would love to [have it].
I can download that file onto a floppy disk. I’ll bring it in.
These documents I was just talking about will be available through the NCAR Archives; I’ll just say this for the tape. Will the transcriber know where to get them?
Or through you, if you’re still here.
I’ve opted to be as open as possible. In fact, I also have a book at home that Allan Bromley wrote after he left the White House — he was the President’s Science Adviser. There’s a little bit in there.
There were pages from that in the files that you gave me, so I copied that.
In the write-up, you described this episode: Sununu had training in physics; he wanted to see how the model worked. He grilled you on the way various calculations were done and wanted to sort of see it for himself. Do you think it had any influence on him, or on anybody else in the White House, in terms of their thinking about the climate change issue?
Well, let me see. It’s a good question. I don’t think Sununu actually used it in any way. I think he was embarrassed to. Maybe Bromley told him, “Gosh, if you say that climate change isn’t going to take place because you’ve incorporated a thick ocean layer as a way of slowing down the Greenhouse Effect, people might think you’re an idiot.” I have no evidence of exactly how he used it. However, he was a very strong supporter of putting more money into global change research. And that was the whole point put before the community, as a way to stall it. Can I tell you kind of a crude joke? (Crude is not the right word…strange is better.)
Maybe I’ve already mentioned this to you. On one of my trips back there (I was on the AAAS board at one time; this was in the early 1991–92 timeframe, before Clinton took office), there were two things. In fact, about that time Bush was supposed to come to the AAAS meeting, but that was the day he decided to send the airplanes into the Persian Gulf, to actually start the attack. So he wasn’t coming to the meeting — it was in Washington, I think, at the Sheraton Hotel — so he asked the Board to come over, and then he would give his speech to the Congress by closed-circuit television. So we were all told to get over to the White House and sit there. I was in the front row, by the way. And Roger Revelle was there. He was very ill, but he wanted to come. We talked a little bit; unfortunately, I think that was the last time that Roger left home before he died. But that evening, or one of those evenings, we (the AAAS board) had dinner. The speaker of the evening, to the Board, was Bromley. I was sitting next to Bromley at the dinner… [INTERRUPTION]
The following short section of tape has been erased at Warren’s request; it contained something he didn’t want to have on the record. The tape continues after this.
— the White House. They really were tired of being beaten up about this Global Change — [brief interruption] But I don’t think it extended to the Cabinet. I don’t know if you noticed one thing that I said in my notes: Darman, Richard Darman, was very concerned about this, and they were really worried about chaos theory. In fact, someone brought up chaos — I forget who it was on the Council, one of the economic advisors who was there at the presentation — but they were really worried about this. But they were also worried about how it would sell. And I think the strategy — although I wasn’t in on the discussions of this, it was quite apparent — played out at a White House conference on global warming a few months later, which I think was in the (news)papers, where a lot of this took place. They promised to put a whole lot of money into global climate change research rather than try to do any kind of mitigation.
They said the US GCRP is a product of the Bush Administration.
That’s right. And it got substantial budget increases during those years, out of proportion to what other science agencies got.
That’s still very much framed as a research —
I really don’t know what Bush himself thought — I just talked with him very briefly, and I never got a sense of what his feelings were. I think he left matters like that up to Sununu. Sununu had strong biases. But the other agency people — Reilly over at EPA (I talked to his staff), and Admiral Watkins, the Secretary of Energy (I talked to his staff as well) — those people were much more concerned about what possible changes might take place. I don’t think they were so much into [the] not-harming-the-economy sort of argument that Sununu had. He’s still like that, from what I have heard.
I don’t know; he used to be on “Crossfire” years ago. I don’t know what he does now. I don’t see him on television like he used to be — after Bush left office, he used to be on television talk shows, that sort of thing. I’ve only seen him once since then. There was a meeting at MIT that Dick Lindzen was at; it was arranged by John Deutsch (a former professor at MIT). John Deutsch was head of the CIA for a while, but he was also in the running for provost of MIT, so he had this kind of workshop sort of thing. It was kind of a talk fest. Bromley came to it. They had cameras on and all that stuff. I never saw the videotape of this thing, but it was a high-profile kind of thing. I don’t think anything of substance came out of it, except that everybody aired their concerns, both pro and con, about climate change.
We could stop here. There’s obviously much more to talk about, but it depends on you. Do you want to spend another half hour or so tomorrow morning to talk about the Clinton administration, the most recent period? That would be good. [BREAK]
All right, so we’re back for one last little bit of talk with Warren Washington. It’s October 30th, and there were just a few things I wanted to sort of finish up with. The first thing is: we talked at different points during this interview about the different styles of directing climate labs. You were the director of the Climate and Global Dynamics Division at NCAR from 1987-95?
I’d like to ask you about what your own style was, and what sort of advantages that style produced.
In terms of directing or in terms with interaction with the Washington scene?
Let’s talk about internally first, and then externally.
I probably followed the style of people like Chuck Leith and Rick Anthes and Will Kellogg. Although the biggest thing that happened, I would say about halfway into my tenure, was that we got more and more pressure on us to build a more coordinated climate modeling activity.
Where did that pressure come from?
I think from the community in general, and NSF. Part of it was due to the fact that NSF was supporting several modeling groups, and they knew that that wasn’t sustainable. During at least part of that time, the modeling was mostly atmospheric modeling, so we were making new versions of, and additions to, the CCM — the Community Climate Model. And they always took longer than people thought they would, just like any other complex thing. You make a timetable early on with the hope that you can get things done, but then things take a lot longer. Sometimes by a factor of two. And there was a growing awareness that the smaller effort that I had working with Jerry Meehl really wasn’t getting us to the place where we were going to be able to compete with the Hadley Centre and the Max Planck Institute. So through these meetings I talked to you about, mostly organized by Anthes with a kind of group of outside advisors, including Bob Dickinson and others [Bob Dickinson is at the University of Arizona, but he used to be here at NCAR], it was agreed that we would make this a higher priority. So I entered into negotiations with Jay Fein at the National Science Foundation, who runs the climate program, and Jay then worked with our people, including me and others, to come up with something called “CMAP,” the Climate Modeling, Analysis and Prediction Program, an initiative that NSF put together that funded an augmentation of climate modeling research here at NCAR, as well as in the academic community. It was like a minor theme in the part of the Foundation that funds atmospheric research. There was a plan put together by NCAR and UCAR, which was essentially to scale up to a sort of newer generation of climate model components.
— publication or document of some sort?
Yes, there probably is.
How would I find it?
It could be buried in my files somewhere. But it’s probably in the division office… get a copy out of his files of the early Climate System Model plans as well as the CMAP plans that NCAR and NSF put together. And I think that we pretty much followed that plan, so —
That I may be a little hazy about, but it was before I stepped down as director, so I would say the 1992, 1993, 1994 timeframe. So getting back to the original question: what came out of that was a more focused division plan for how we would get to the next level of climate modeling.
How did people here feel about that?
It was mixed. I think most people agreed that we had to do it. It was an appropriate activity for a national center.
That put you into a position of having to hold out carrots and move people with sticks to get them into line.
To a certain extent. It just meant, essentially, that more of our NSF-supported resources would go into it. And people like myself who were funded by DOE were expected to use some portion of our funding to contribute to it. There was an understanding, especially between DOE and NSF, that DOE wouldn’t start a new modeling effort on its own, and that we would contribute to this NSF one. But at the same time, we would benefit from it, which meant that when a new atmospheric model was available to the community, the DOE researchers, including myself and Jerry, would make use of it. So in a sense, what we did was actually help fund this new effort. In fact, Jim Hack and Dave Williamson, who were in the Climate Modeling Section building future generations of the atmospheric CCM-3 and CCM-4 models, had programmers who were paid for by the DOE. So the DOE feels that it’s a full partner in the CSM. Now keep in mind, the only difference between the CSM and the PCM that I’m working on is that we’ve geared our model to make use of massively parallel computers, and we also use ocean and sea ice models that came out of DOE funding. So, just to emphasize again, there’s a feeling of partnership between the two projects. And the DOE is not planning, for example, to build its own atmospheric model. They’re depending on using the NCAR model. Just in terms of the end products, let me emphasize again: the NSF wants to have a model that’s available to the university community. The DOE wants a model that can address issues dealing with the mission requirements of DOE. For example, the carbon issues about emissions policies. Sequestration of carbon. So their mission isn’t necessarily to supply a model to the community, but to answer some questions that the Department of Energy is interested in.
Anything else you want to record about your tenure as director of CGD?
I don’t know whether I was successful or not. I think I was successful in that we kept a top-rate staff of scientists here. I was not able to change the interactive skills of some of the scientists…and I think that has always been a problem, because some people just feel that they should have their own agenda, and they have temper tantrums and that sort of thing and get their way. What I tried to do, in order to keep my blood pressure down as well as keep them under some control, was use the collective wisdom of the senior staff. So I used the section heads and the senior staff as sounding boards. I went to them all the time about every policy issue, or every big operational issue we had to deal with, and it probably took a little extra time to do it that way, but I considered myself more like a dean than a director. I had to follow certain main themes that were being passed down from above, and I agreed with them. On the other hand, I didn’t specifically walk into people’s offices and tell them, “This is what you have to do this week” — or that sort of thing. I think that worked out mostly; I think 80% of the scientists were quite reasonable. And I think I had a different style, I suppose, though probably not that much different from Chuck Leith or Anthes when they were directors.
We have a meeting in a few minutes…let me just ask a couple of final questions. We won’t have time to talk about this at length, but what kind of interactions have you had with the Clinton administration? How has that been different from the Sununu period?
I think very good. Obviously, I was rather surprised when I was appointed to the National Science Board, which is a prestigious honor.
This was in 1995?
Well, actually 1994, but I wasn’t confirmed until 1995. For that class of people who were nominated by the President, it took a year to get confirmed by the Senate, and it’s taking even longer nowadays. It had to be approved by the Committee on Labor, I believe, which was under Ted Kennedy and [?] Kassebaum. And they worked very cooperatively. Now on the Senate side, things just sit for long periods of time. The Senate Majority Leader, Trent Lott, has over the last six months or so really turned things around, and now appointments are being handled much more quickly. I just saw a couple of days ago that three new members of the National Science Board have been confirmed as part of the budget package. They kind of pushed it in. But I think I have always had very good relations, because I served on the AAAS board with Jack Gibbons, who became the White House Science Adviser. And I’ve known Jack for a long time. In fact, when I was at an AAAS dinner, Jack came over to me and asked me if I would agree to serve on the National Science Board, long before I was formally nominated. Just to make sure I wouldn’t say “no” after they’d gone through that tortuous process. It is kind of a tortuous process. The Board itself sort of nominates like fifty people, in various categories, to fill the slots.
For how many slots?
For eight slots. And probably only roughly four or so of ours get in, once it goes through the White House. The nominations go to the Science Adviser, and the Science Adviser forwards them on to the Personnel Office of the White House, and somehow the names come up. It turns out that under most administrations, they’re pretty non-partisan. There’s a real effort not to pack it with people of one party or one sector of the science and technology community. I’ve obviously been active in a whole bunch of other things. I serve on the advisory committees for DOE and for Jim Baker at NOAA, and I chair the advisory panel for the DOE.
Also the Environmental Defense Fund.
I’ve never gone to a meeting. In fact, if you look at that list of people, like Barbra Streisand and so forth, I think it’s more of a show kind of list. Back when they asked me to be on it, they said, “You won’t have to go to any meetings.” I think what they want is a letterhead sort of thing. I kind of like the things they’ve been doing. In fact, I went out and talked to them way back, umpteen years ago; I think they’ve done some good things. John Firor, who was previously director here, served as Chairman of the Board of the Environmental Defense Fund for many years. I don’t want to be too closely associated with the environmental movement, because then people won’t see you as objective, but I think that there are broader issues that environmental organizations such as EDF work on that I do support.
We have time for one last question. This is going to be a doozy. If you had to name what you think are your two or three most important contributions to climate science, [what would they be?]
That’s a hard one for me in some ways, because one looks at one’s career in much smaller segments. I would say: building models, and being able to use the models to help understand societal issues. I think I arrived on the scene at the right time to make contributions. I think now you’re much more a part of a much larger effort—and individuals sometimes get lost in the effort because of its size—but that’s just the way a lot of science goes. It starts small, then grows into some large things, and the management of it becomes a full-time job. And it’s unfortunate, because I think it’s hard to have a holistic view anymore of what climate modeling’s all about, because of the breadth of disciplines involved.
Yes, it’s a whole different world now. Big, big projects and teams, a different way of working from more or less the Sixties, when you got started.
I think I made the transition to a certain extent. Like this DOE effort: I’m working with people at other laboratories and so forth, and we’re all itching to contribute to that common goal, and to the NCAR effort, too. I’m willing to kind of change my style. I know people sometimes have difficulty changing their style of science as their careers develop. Clearly, climate modeling research is beyond a university department-scale project.
I want to thank you very much for a wonderful interview. It will be very useful to me and also I think to many other scholars as they start to study this field.
If there are other questions as they come up, give me a call and I’ll try to help you out.