This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Cecil Leith by Paul Edwards on 1997 July 2, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/31392
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview, Chuck Leith discusses his early life and the development of his career as a physicist, a mathematician, and a climate modeler. Topics discussed include: University of California, Berkeley; Ernest Lawrence; Lawrence Berkeley National Laboratory; Manhattan Project; Edward Teller; Emilio Segrè; Herb York; John von Neumann; computer atmospheric modeling; Bob Richtmyer; Joe Smagorinsky; J. Robert Oppenheimer; UNIVAC; Institute for Advanced Study; simulations of thermonuclear explosions; Joe Knox; Jule Charney; George Cressman; numerical weather prediction; weather modification; Akio Arakawa; Lewis Fry Richardson; Warren Washington; George Michael; Mike MacCracken; University of California, Davis; Dave Fultz; Global Atmospheric Research Program (GARP); National Center for Atmospheric Research (NCAR); Walter Orr Roberts; Akira Kasahara; Control Data Corporation computers; Cray computers; Seymour Cray; National Academy of Sciences; ozone layer depletion; Francis Benedict; Steve Orszag; Dick Lindzen; climate change; Alexander Obukhov; Andrei Kolmogorov; Rich Anthes.
...problem with this tape recorder, which is I can’t tell exactly when it stops, so I have to check it and keep track of the time and in 45 minutes I’ll start looking at it. Okay, so, this is Paul Edwards. I’m interviewing Cecil E. Leith, who likes to be called Chuck. Cecil E. “Chuck” Leith.
In my office at Stanford University on the 2nd of July, 1997. And we’re here to talk about your career as a physicist, a mathematician and a climate modeler, and I guess I want to start with just some basic background about you. I believe your CV says you were born in 1923. And tell me about where you grew up and what your early life was like.
I was born in Boston, Massachusetts, and I grew up in Massachusetts, went to high school in a town south of Boston — Scituate. When I graduated —
How do you spell that?
I’ll probably have to ask you to spell a number of things during this so that the transcriber can figure it out.
Fine. When I finished high school in 1940, I went to the west coast. That was about as far away from home as I could get, as it turned out. No particular reasoning except the interest in seeing another part of the country. And I went to Berkeley, and started there as an undergraduate in the fall of 1940. In mathematics, and that was the subject I was generally interested in.
So the war was on then, but the U.S. had not yet entered it.
That’s right, that’s right.
And what was that context like? Did you have some awareness that this might result in military service later?
That’s right. I finished my Bachelor’s degree in the fall of 1943. That was short, because they had kind of a compressed schedule with people going to school during the summer, in order to speed up the process. And when I finished, while I was a student I had effectively a student deferment, but when I finished then there was a question of what I was going to do. And what I did was to start working for the University of California Radiation Laboratory that Ernest Lawrence had set up some time before. I started working —
This was the one in Berkeley, because the Livermore didn’t exist.
No, that’s right. In Berkeley. So although my degree was in Mathematics, I nonetheless then found myself being thrown into the sort of research and development aspects of the Manhattan Project, Lawrence’s branch of which was getting started there in Berkeley. His part of that, his contribution to that, was separation of uranium isotopes by electromagnetic separation. What it really amounted to was the fact that he had before the war got the money to begin to build his big accelerator, and the part that he had built already was a big chunk of iron for the yoke and the magnet that he was going to use for the magnetic field, which later became the 184-inch cyclotron. But the war came, and of course that interfered with his plans to build the cyclotron, which meant that for some months he had a big chunk of iron sitting on the hill above the campus in Berkeley, yoke and magnet fields. But then it dawned on him that using that field for electromagnetic separation of isotopes would be a useful activity in connection with the Manhattan Project, in particular to see if one could in that way separate out the 235 isotope of uranium, which was the one that was interesting for weapons purposes. And so there was a kind of a prototype separation process that he got started there, and I got involved in it in those early years, because it was just about getting going, but already they were planning to replicate the whole thing many-fold over in a big plant in Oak Ridge, Tennessee. And so already in January of 1944, after I had only been there a few months, I and a number of other people from Berkeley went across the country by train to Tennessee to be involved in the initiation and development of aspects of the isotope separation plant in Oak Ridge.
You were still a civilian at this point.
I was a civilian at that time, that’s right. That’s right. During the —
Was any of this work classified at this point?
But that wasn’t a problem for you as a civilian to be involved in?
No. I came under the usual rules for classification, which were actually sort of just getting formulated at that point, because it was early days, but I and others were well aware of the fact that this was a highly classified project. We were not supposed to, in fact, talk to people about what we were doing. When we took our trip from Berkeley to Oak Ridge, even that was supposed to be kind of secret. We were not supposed to tell people where we were going. Although there was a fairly large group of us, and it was kind of an obvious anomaly in the transportation system at that time, but nonetheless I remember one of the things was that we had been advised, because Oak Ridge was still in a rather primitive state and there was a lot of red East Tennessee mud on the roads, they weren’t paved yet, so we were advised that we should stop by Marshall Field’s in Chicago en route and buy overshoes — not just rubbers, but overshoes — because the mud was so sticky that you had to actually lock it onto your feet. Which we did. Now again, we were not supposed to be telling people where we were going, but there was no question that it was a peculiar situation that a fairly large number of people went through Marshall Field’s to buy overshoes at a particular time.
About how many people?
There were several groups of people that went, and I think there were on the order of 30 people in our group. We traveled by train.
And you had to go to Chicago before you went to Tennessee —
We went to Chicago to go on our way to Tennessee. Yeah. To Knoxville, Tennessee, which was the — So, anyway, those were indeed, it was a very primitive situation there, although within a short time they started building more housing and this, that and the other thing, and so my wife joined me in Tennessee. And then —
When did you get married, and what was the —?
Oh, we were married in 1942 or thereabouts.
Did you meet her at Berkeley?
Yes. She was a student in the — She’s a Californian and I’m from Massachusetts, but she was Californian, from the Central Valley. So, yeah, we were 19 years old at the time we were married, so we were pretty young, but those —
What’s her name, by the way?
Mary. But these were, you know, peculiar times in any case, in one way or another. So she joined me in Tennessee, and then after a while I was drafted. Although people involved in that project could be deferred and were for quite a while, because of security at some point they didn’t want to be making the case to the draft boards about how important this was, and also I think there was sort of a blanket rule handed down by the draft boards in general that no one under such-and-such an age could be deemed so important that they should be deferred, and I was below that critical age, because I was one of the younger people in the group. So for that reason, I knew that I was going to be drawn into the Army. I knew also, however, that I wasn’t going to be changing where I was going to be working because they were —
You were just 21 at this point, right?
Yes. That’s right. And so I was essentially sent back, after spending a few weeks in Louisiana in basic training in engineering, Corps of Engineers, back to Tennessee. There again, I was not supposed to talk to anybody about where I was going while I was in the Army, although it became sort of clear that one other person who was in basic training with me also was going to Tennessee. He didn’t know why he was going to Tennessee. I knew more about that than he did, although —
Did you know you were working on a bomb at that point?
Yes. Now, many people in Tennessee did not know. There was a number of layers of cover story about what was being done at that plant. Those of us from Berkeley, however, were deeply enough involved in the physics, you know, the physical notions of what was happening, so we knew what we were doing and why. Although most of the people working at that plant in Oak Ridge did not know. It was amusing that there were badges which had a letter and a number on them, and the letter indicated which areas you could go into and the sort of levels of classification that you were entitled to, and the number told which cover story you were supposed to know. [laughs]
How many were there? Do you remember?
Well, I think there were about two or three levels, something like that. I’ve forgotten. And then the amusing thing was that although we knew completely what was happening in our small group, for security reasons our badge did not show that fact. We were given a lower level, although we were only talking among ourselves in any case, so it really didn’t make much difference.
Did Lawrence go on that trip?
Yes. Lawrence was there occasionally. Now he visited. He sort of roamed around visiting the various parts of the center rather than spending a lot of time at Oak Ridge with a group there. But he did come through occasionally, and it was useful that he did, because, remember, at a later stage in the work that I got involved in, in Tennessee, it became important for me to know something about what the customers wanted, the customers being at Los Alamos in that case, details of what we were doing. We could make certain changes in the way the plant operated which would be either good or bad, depending on what they wanted. But it was a rather technical question, and I couldn’t call them up on the phone and talk to them about it, so I used Ernest Lawrence as a courier because I knew he was going to go by. And so I posed the question to him to pass on to the people at Los Alamos, and eventually bring me back the answer. And the first answer that came back indicated that they hadn’t quite got the point, so I had to sort of send him back again around through the loop in order finally to get a straight answer from them. It had to do with the fact that we could choose between quantity and quality of the material that we were producing, and I had to know the relative value of quantity versus quality. I had to have some kind of crude formula to indicate what that was so that I could know how to optimize the situation.
So tell about what you actually did at Oak Ridge.
Well, the electromagnetic separation process which Ernest Lawrence had developed involved essentially separating uranium isotopes, 235 from 238, in the magnetic field by sending them [???] the source, ion source, send out a beam this way, accelerate it into a beam that goes in an arc and winds up in a receiver over at this end where there are two different slots in this thing. And one tends to collect the 238 isotope further out, and the other collects the 235 isotope a little closer in. And so this is a separation, [???] separation process. These receivers are built of graphite. They have little sort of catch things in them, little containers that we’d try to — And I became fairly soon sort of an expert on the design of the receiver, rather than the source, so I was particularly concerned about the various aspects of where the beam was going and what it was doing when it hit this graphite and how it would erode the graphite and things of this sort, and how most effectively to trap these separate isotopes. The graphite pocket it was collected in would then be sort of burned down and go through a chemical process which would then purify the material that we were after. And this was a two-stage process. You went through this first stage, and then you took this stuff that you got, which was somewhat enhanced in the ratio of the isotope you wanted, 235 over 238, but that wasn’t good enough, so then you had to go through a second process in which you took the output of that, did the whole thing over again, and that way you finally got up to the concentration that you wanted. Later the first step was replaced by a diffusion process rather than the electromagnetic separation process. In fact, of course, in the Manhattan Project they tended to follow a number of parallel paths so that, you know, any one of them might not work, and so that way there was better assurance that something was likely to work.
And so finally a diffusion process replaced the first stage, and it brought up what we called the enhancement, the ratio of the material a ways, and then we would take the output of that and put it through our second stage, which was the one that would produce the final material. But yeah. So I became sort of an expert on design of the receiver and its properties and so on. During that time — it was a relatively small group of people, and I did a few things well enough in that activity so that Ernest Lawrence appreciated my contributions to what they had been doing, and certainly had talked to me a lot about what was happening and encouraged me, so I got to know him reasonably well.
How much was the work that you were doing on this separation process sort of engineering level design and how much mathematical design?
Largely engineering. Most of the people there were physicists in the group, graduate students in the physics department or whatever. I was I guess one of the few people from the math department, because that was kind of an anomaly, but I threw myself into the physical aspects of this, and the engineering aspects. In fact, one of the letters on our badge indicating what kind of background we had was “P” supposedly for physicist, although the people involved said actually I think it’s probably plumber, because it really was more of an engineering matter than it was a physics matter. But that’s what got me involved in this sort of activity. When the war ended, well, of course some of us knew it was going to end, or that it very likely would, because we knew ahead of time what was happening; you know, we knew within a few weeks when these weapons were going to be used. But I was in the Army at that time, and I had to stay in, whereas the rest of the group, who were not, went back to Berkeley, you see, so I was sort of left stranded in Tennessee for another year, something like that, six months or a year. After the war was over, the way they let people out of the armed services had to do with how long they’d been in and whether they’d been in combat or not, so I was fairly far down the list for being let out. You know, it took a certain length of time just to process people out. And so I was there for quite a while, but finally did return to Berkeley. I, as I mentioned, had just finished my Bachelor’s degree in Mathematics, so when I went back to Berkeley it was essentially to start graduate work in Mathematics, which is what I was planning to do.
Before you go on, let me ask one more question about the war years. What were your feelings about the bomb itself?
That’s an interesting question. The use of it. We talked about it at the time, and I remember even discussions with Ernest Lawrence about it. He felt that he had to sort of express a view on the matter naturally, because he would be deeply involved in it. Our general feeling I think was that, because it ended the war it actually saved lives, both Japanese and American. I think that was essentially the essence of our argument. I know that recently people have been having some second thoughts and trying to reexamine those times and so on, but at the time I think that was the argument that we told each other and believed, that we had in fact, although a large number of people had been killed, it was such a shock as it in fact happened that it effectively terminated the war. The argument about the second one is a little fuzzier, I must confess, although we, I think we felt — The argument that was made I think, I don’t know how, this is a little more dubious, I must say, but I think the argument was that if it had been only one it may have been the only one in existence. It could have taken years before there was another one. Having two, one right after the other, indicated that in fact we were prepared to do this fairly regularly.
That there was an arsenal.
That’s right. In fact, in Oak Ridge we were not prepared for that kind of — There were two kinds of weapons, as you know. They came from different sources. The Oak Ridge kind, we were not prepared to do another one, because we essentially drained everything out of the plant for the first one ahead of time. It actually in a sense slowed down the regular production. But nonetheless, within two months we would have been back in business. So it was one of these situations where you are building up production. Of course you are mostly interested in getting the first one out, but it was on a rising curve and so it would have been coming out very rapidly after that. So there was certainly some truth in the fact that there would have been more fairly regularly if it had been necessary. But, well, so that was the argument that we made to each other and that Ernest Lawrence made, and I think that we more or less believed. I guess I still believe it. There have of course been more recent discussions about the possibility that the war was going to end anyway for other reasons, and that of course is something that we didn’t know so much about, so I guess we felt that we had [???]. When we had been working on it, we had the feeling we are going to do something that is going to end the war, and we did something that ended the war, and I guess the inclination was to believe that yeah, we had done it. So —
So was there a sense of pride about —?
Yes. Yeah. A sense of accomplishment. [???] the people who feel most strongly on this argument, incidentally, I’ve met since, they were the people who were in training for the invasion of Japan in the military.
They were not happy about this concept.
That’s right. I talked to a few of these people, and they said there’s no question in their mind that they were really very happy that this abrupt end had come to them. So anyway, that’s —
And one other question. I was going to ask you this later, but since we’re talking about this period anyway. One of the articles you have is in a volume that’s a sort of birthday book for Edward Teller.
For Edward Teller?
His 60th birthday book [???].
Yes. Did you meet him during this period, or was that later?
Yes. I’m trying to remember. I think I met him, though I didn’t know him very well, not during the war years, after the war years, but almost immediately after the war years when I was back in Berkeley as a graduate student. Although I was a graduate student in mathematics, Ernest Lawrence saw me walking across the campus after I’d been back there a day or two and he told me to go to work as an experimental physicist essentially. This actually put me back in the laboratory. And it was kind of interesting. I was a graduate student in mathematics, but I was working as an experimental physicist essentially. And during those years, I worked rather closely with Emilio Segrè for one, as an experimental physicist, and I remember once Edward Teller was, I think he was usually in Chicago but he was in Berkeley visiting people in the physics department, and I think I met him in Segrè’s office sometime when he was visiting, and maybe it was Segrè, during those years right after the war, late ‘40s. Then of course there was the Oppenheimer-Teller affair, and the whole issue of what to do about thermonuclear weapons, and so on and so on, and the Livermore Laboratory was to some extent Edward Teller’s laboratory. It was a second laboratory, because the Los Alamos people had not been in his mind responsive enough to his ideas. And during, about 1950 or ‘49 or so, when I was in Berkeley, I got caught up in a measurements project that was being run by Herb York, who was incidentally the first director of the Livermore Laboratory.
I’m not quite sure how he was drawn into it, although I think probably — Well, anyway, it was a challenging experimental program. I had been working with Herb as an experimental physicist. We’d been on a small team of people working under Emilio Segrè for the most part. He was Emilio Segrè’s student; I wasn’t. I was in the math department. But we were working together on these scattering —
And this is at the Lawrence Radiation Laboratory.
At the Lawrence Radiation Laboratory. It was called, you know, the UC Radiation Laboratory at that time. I think the Lawrence name was appended only later. But it was on the hill behind, up above the campus.
Same place it still is.
In Berkeley. That’s right. And so this measurement project involved a request essentially from people at Los Alamos in connection with one of the tests out in the Pacific of thermonuclear weapons. One of the first efforts to actually have a manmade thermonuclear reaction is what it amounted to. It was strictly an experiment. It wasn’t anywhere near a weapon. It was just trying to answer the question of whether it is possible to make a manmade thermonuclear reaction. And our job was to measure the information about what was going on during this reaction process, which is of course very fast, and then do experimental diagnostic studies to find out what was happening — what the temperature was, [???] happened, whether that actually worked or not, whether there had been any reaction products from this that we could identify, and so on. And that was the so-called Measurements Project, that Herb York was a leader of, and it was a group of about 40 or 50 of us that went out to the Pacific for these tests in connection with that.
It’s the [???] test or —
The one at Eniwetok, if I remember correctly, is the one that we were involved in primarily.
Eniwetok, that’s right. Spell that if you —
E-n-i-w-o-t-e-k, I think.
I’ll check it when the transcript comes back. Okay.
Yeah. In fact, there’s a tape that somebody lent me just a few days ago, a week or so ago, about these weapons tests. I don’t know if you’ve ever seen it. But anyway, I happened to be watching it within the last day or two. It was reminding me of some of these events that took place out there. In any case, we were involved in this Measurements Project, which turned out to be successful, and we did what we tried to do. We found that yes, it had worked. We measured the temperature and so on. But that meant that when the issue came up, because of the Oppenheimer-Teller affair, of setting up a second laboratory, Herb York and this group of people were kind of a natural nucleus for starting such a thing from the Berkeley end — or Livermore, as it turned out — because we had been drawn enough into the whole nuclear weapons business that we knew what the problems were and what we were trying to do. So it was in the fall of 1952 that the Livermore Laboratory was set up in Livermore. That’s when we started. It had been a naval air training station, and so there were already barracks buildings and all sorts of other things for us to move into, which we did. But the actual discussions of what we were going to do and get started on were started in the previous summer, around June of 1952 in Berkeley. Now one of the first things that we knew we had to do, and this was at the suggestion of Edward Teller, who was under the influence of John von Neumann on this issue, was to buy a computer. It was fairly clear to von Neumann, and he had already been working on this, and he persuaded Teller and York and me and others that there was no way we were going to understand what was going on here if we didn’t start working on numerical methods, computer techniques. And so that was when I got very deeply involved in that. I got sort of fascinated by the question of what can we do with these things.
I had already had a background, because of my interest in these things, in hydrodynamics and radiation transport and things of this sort, but I also recognized, as others did, that some of these problems were so complicated that it was going to be very useful to develop numerical techniques for trying to handle these sorts of things. And so that’s essentially what I got involved in, very much, very deeply involved in it, very caught up in it, and that’s what sort of interfered with my graduate work. I’d finished all the course requirements, but I still had to worry about thesis research. So I sort of put that off, and then some years after we got going things settled down. Herb York, who was the director at the Livermore Laboratory, essentially asked me if I didn’t want to go back and finish up, take a year off to go back to Berkeley to finish, which I did. So it was at an interesting, well, it turned out — that was about 1957 or thereabouts, something like that — and it was an interesting time as far as I was concerned, because after I did get my degree, which incidentally was in linear operator theory, which had little or nothing to do with the other things I had been working on, when I did finish then of course I was at a point where I had to make a decision about what I’m going to do. But I was also at a point where it wasn’t clear that there was going to be much future in nuclear weapons design, which is what I had been really mostly involved in. That was about the time of a temporary moratorium on nuclear tests.
Yes. It only lasted a year or so. I’ve forgotten exactly the dates now, but there was beginning to be the notion that this was a kind of dead end activity, and of course it would be interesting to find out some other matter to get into which would also involve complicated physical processes, and I’d had some experience with this, and that’s what got me into the atmospheric modeling business.
Okay. Let’s go back a little bit before we get into that, because that’s a different phase in your career. One of the things I’m very inter — I wrote a book about the history of computers during the Cold War, so one of the things I am very interested in, in all of this, is the role of computers in climate modeling. And, as you know, this was one of the major problems that von Neumann saw as an application of computers.
I believe he perceived two things: weapons and weather. That’s what it came down to. Both he was fascinated by. Yeah.
So I want to ask you to talk some about not only the sort of ENIAC and post-ENIAC digital computers, but any other kinds of computing equipment that you used, from the work at Oak Ridge all through the founding of Livermore.
I’ve often seen these plots of the logarithm of the speed of computers versus calendar year, and it turns out that I can track back those points from my own experience to the one in which the clock time was on the order of a second. But of course it’s gone up ten to the eighth by now, or ten to the ninth. But in Oak Ridge I did get involved in the beginnings, to me, of numerical schemes. We, as I mentioned, had these issues having to do with making choices between quantity and quality in connection with plant operation. We had certain parameters we could play with in the design of the receivers, as I mentioned. That was the thing that I was particularly concerned with. So we could make certain changes in the design of the receiver and trade off amount versus purity. And there was an issue of how to make that tradeoff. Well, I reduced that problem to a rather simple system of ordinary differential equations, coupled ordinary [???] differential equations, for which I could not easily get an analytic solution. Therefore I turned to numerical methods, which was fairly early in the game. The computer that I used at the time was the IBM 601 Multiplier, an accounting machine. But people in Oak Ridge had already started using it for —
This was a punch card machine, yes.
Well, no, it wasn’t even a punch card machine. The way you programmed it was to plug in a plug board with connecting wires. And so with the wires you could say that the result of this multiplication should be fed into addition and so on.
Were the inputs punch cards, or how did you get the inputs into the machine?
I’m trying to remember now.
We had to load up certain registers which we used for storage of numbers, and I was only worried about a few of these things. But I’m trying to remember now what the mechanism was for setting up the — The output essentially was the accounting machine, paper rolling up in it.
No. A sheet of paper, wide sheet of paper.
Yeah. A typewriter.
A typewriter. That’s right. Except that it was, all the type, each line [???] in an accounting machine, essentially there was a roll of typefaces and they all hit at once every second essentially, and the paper rolls up, so you see —
This is all numeric.
All numeric. Yeah. So that had a clock time on the order of a second. In fact, you could listen to it. They’d go chunk, chunk, chunk, like this. And then you —
Was that all mechanical? Were there any electronic parts involved?
No, I think it was all mechanical. The registers were essentially little, some kind of mechanical memory devices, which had little ratchet control things.
Right. Gears and [???].
Yeah, that’s right. So it was all mechanical. That one. And as I say, so that was the 1-second clock, but then when I got involved in the beginnings of these things in Livermore, I knew of course I had to make numerical simulations of explosions, is what it amounted to, and put in a lot of different kinds of physics. The first ones that I did were ones where things were just essentially spherically symmetric, so it was essentially spatially a one-dimensional problem in time with some variables to keep track of. Now, I was doing essentially hydrodynamics and radiation transport and neutron transport, thermonuclear reaction calculations and all that. Sort of poured into this thing. Because I was the one who wrote the first simple explosion calculations for the Livermore Laboratory when we were getting started.
Okay. So in between Oak Ridge and Livermore, did you use any computing equipment, or was it mostly hand work?

I was trying to think. In the Measurements Project, I was already, in connection with that experiment, getting involved in rather complicated problems of hydrodynamics and radiation transport and so on in connection with trying to figure out what was going to happen in the experiment we were doing and the diagnostic studies that we were doing for this thing at the — And in connection with that however I was I think primarily using paper and pencil, and maybe a Marchant calculator, that sort of thing, and sort of doing it by hand in other words, rather than turning to any kind of an automatic computer. But the CPCs, the card punch calculators, were there early on, and so those were some of the first things that we started to use at Livermore when we got started.
This was the IBM card program calculators, CPCs.
That’s right. CPC.
[???] not analog calculating aids [???]?
That was an interesting question that came up in those years that I thought about a little bit, and I think we had a differential analyzer that was one of the early ones that I experimented with a little bit.
You mean at Oak Ridge or at —?
Well, I’m trying to remember where it was. I think it may have been in Berkeley somewhere. I’m not sure. I don’t remember the details of it.
A bush type, large —?
Yeah. Yeah. We may have had it at Livermore. It was long ago I can’t remember, and I didn’t use it that much.
[???] because those things were enormous machines.
Yeah. Yeah. Because I turned very soon to digital methods. In connection with the digital methods, the guiding people at that time were Bob Richtmyer and John von Neumann.
Who is Bob Richtmyer? Tell me about him. I don’t —
Let’s see. He had been working at Los Alamos on some of these numerical matters, and in fact some of the early books on numerical methods were published by Richtmyer and Morton, I think was a later version.
Okay. Spell Richtmyer’s name?
R-i-c-h-t-m-y-e-r. His father was a physicist and also published a book, Richtmyer and Kennard, which is kind of a standard old physics book, a textbook practically. Well, that was his father. This is his son. He was at Los Alamos during the war years and became involved in these issues, numerical approaches in particular, early on. Early on. In fact one of the earliest technical problems that we had in treating hydrodynamics by numerical scheme was that in the cases that we were interested in, shocks often formed. Those shocks are sharp discontinuities, of course, in the properties of the flow, and there was some considerable difficulty in treating these discontinuities moving across a finite difference mesh, and all sorts of noise would be generated. And it was Richtmyer and von Neumann who suggested that one introduce a prescribed nonlinear artificial viscosity which would smooth out the oscillations over two or three zones. A fairly crucial matter, simple as it was, and in fact crucial to me. I understood right away what they were trying to do, but much of the rest of my life has dealt with large eddy simulation [???] modeling and describing turbulent flows on computers, and that was essentially the beginning of it.
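[Editor’s note: the von Neumann–Richtmyer artificial viscosity (J. Appl. Phys. 21, 232, 1950) mentioned here is usually written as follows. This is a sketch in modern notation, not taken from the interview: a pseudo-pressure q is added to the true pressure only in zones that are being compressed.]

```latex
% von Neumann--Richtmyer artificial viscosity, 1D Lagrangian form.
% q_j is a pseudo-pressure active only where a zone is compressing.
q_j =
\begin{cases}
  c^2 \, \rho_j \, (\Delta x)^2
    \left( \dfrac{\partial u}{\partial x} \right)^2,
    & \dfrac{\partial u}{\partial x} < 0 \quad \text{(compression)} \\[1.5ex]
  0, & \text{otherwise}
\end{cases}
```

[Because q carries a factor of (Δx)², it is negligible in smooth flow but acts like a strong viscosity at a shock, spreading the jump over the two or three zones described above; c is a dimensionless constant of order one.]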
The basic notions since then really are outgrowths of that original idea that they had. In the atmospheric modeling business, Smagorinsky introduced such artificial —
I’ll close the door. Okay.
In the beginning of atmospheric models, a similar problem arose and Smagorinsky had introduced an artificial viscosity of a certain sort, a nonlinear viscosity, which controlled that situation. And it was just a few years ago that I was asking him about the history of this, because I had noted early on the similarity, the formal similarity, between his prescription and what von Neumann and Richtmyer had prescribed for shocks. And I mentioned that to Joe Smagorinsky once in some discussion. He said, “Well, it’s not surprising. It was Johnny von Neumann that suggested to me I try something like this on the basis of the experience he had had with the shock treatment.” And so yeah, essentially that same sort of prescription then went into the beginnings of atmospheric modeling. So.
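[Editor’s note: the formal similarity described here can be made explicit. The Smagorinsky (1963) eddy viscosity, in its standard form (notation not from the interview), is:]

```latex
% Smagorinsky nonlinear eddy viscosity, standard form.
\nu_t = (C_s \, \Delta)^2 \, |\bar{S}|,
\qquad
|\bar{S}| = \sqrt{2 \, \bar{S}_{ij} \bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}
  \left( \partial_j \bar{u}_i + \partial_i \bar{u}_j \right)
```

[Like the von Neumann–Richtmyer shock viscosity, this has the form of a (mesh length)² times a velocity-gradient magnitude: a nonlinear viscosity that switches itself on only where gradients are steep relative to the grid.]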
Let me pause for a second here and check the tape. Okay.
But of course the beginning of atmospheric modeling was the beginning of computing [???] essentially. They really went together and almost immediately. When I first visited von Neumann’s place at the Institute for Advanced Study where he was doing these early computing experiments —
When was that?
That must have been in the early ‘50s, maybe even before or shortly after the Livermore Laboratory was set up in 1952.
Okay. Had you met von Neumann before?
I don’t think that I had. I don’t remember having met him before that time. Although I certainly knew who he was. But I mentioned my thesis research was in linear operator theory and I mostly was studying his papers in connection with that, so you could say that was — He is one of the eminent mathematicians of that part of the century for sure, so —
How aware were you of his work and the work of anybody else on digital computers in this period before Livermore?
Not very. I knew about it while I was involved in the Measurements Project before the Livermore Laboratory, so around, you know, maybe by 1950 or ‘51 I was aware of it. But early on when we were setting up the Livermore Laboratories I mentioned we wanted, we [???] in the computing business, and one of the things we did was to visit von Neumann. I’ve forgotten exactly what time. Incidentally, at the institute was Oppenheimer, who was supportive of our efforts. We turned to him for sort of advice and help, in spite of all these concerns that people had. So that situation is much more complicated than one might be led to believe by some versions of it. But in any case, we, as I say, early got involved in numerical methods and it sort of went hand in hand with the availability of computing, and the first of them were very crude, like the CPC, the card program calculator, which was essentially one where you fed a deck of cards with the program punched on it through a hopper, took it out the bottom and put it back in the top if you were going through a loop, [???] loop. And we would use colored cards: IBM cards had different colors on the edges, and so we would usually have alternate colors of yellow and blue, something like that, so we could see where we were in these time cycles.
So you were programming.
We were programming, yeah.
But you were also designing the numerical simulations.
Well, yeah, we had to worry about both numerical schemes and then implementing them in the programs. For example we almost immediately had to learn about stability analysis. It was easy to write down [???] difference schemes which looked physically as if they were doing the right thing but would give you nonsense; you had to carry out the linear stability analysis, which von Neumann of course had told us about right away. I can remember an argument I was involved in once with Jastrow. I don’t know if you have ever heard of Jastrow.
He’s quite an interesting character.
Yes. He was involved with us in those early years there also. And Bob Jastrow had a feeling that his —
Let me check this again. It makes a little click when it stops, but I’m not quite sure, so I want to keep checking. Okay.
Bob Jastrow had this feeling that his physical intuition was so powerful that he didn’t have to worry about these mathematical issues. Or at least he thought so at that time. I remember getting into kind of an argument trying to explain it to him; he had written down some difference approximations or something or other he was interested in which were not stable, and I had to try and explain to him what that was all about. I remember that was kind of interesting. But those are the things that we were learning at that time. But it was von Neumann and Richtmyer and others who were early involved in these issues of numerical techniques, and that however moved pretty well along. But at the Livermore Laboratory of course our problem was always to put our hands on the next fastest computer that was available, because we had, and I think people still have, an insatiable desire for computing power, especially if it doesn’t cost too much.
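[Editor’s note: the stability issue discussed here can be illustrated with a von Neumann analysis of two schemes for the advection equation u_t + a u_x = 0. The sketch below (the editor’s, not Leith’s code) computes the per-step amplification factor g(θ) of each Fourier mode, θ = kΔx; a scheme is stable only if |g| ≤ 1 for every mode. The forward-time centered-space scheme “looks physically right” but amplifies every mode, while first-order upwinding is stable for Courant number C ≤ 1.]

```python
import numpy as np

def amp_factor_ftcs(C, theta):
    # Forward-time centered-space for u_t + a u_x = 0:
    # g = 1 - i*C*sin(theta), so |g| = sqrt(1 + C^2 sin^2(theta)) >= 1.
    return 1 - 1j * C * np.sin(theta)

def amp_factor_upwind(C, theta):
    # First-order upwind: g = 1 - C*(1 - exp(-i*theta)),
    # |g| <= 1 whenever the Courant number C <= 1.
    return 1 - C * (1 - np.exp(-1j * theta))

theta = np.linspace(0.0, np.pi, 200)   # all resolvable modes
C = 0.5                                # Courant number a*dt/dx

print(np.abs(amp_factor_ftcs(C, theta)).max())    # > 1: unstable for any dt
print(np.abs(amp_factor_upwind(C, theta)).max())  # <= 1: stable for C <= 1
```

[This is exactly the kind of result the linear analysis delivers and physical intuition alone does not: the unstable scheme looks perfectly reasonable on paper.]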
So you started with the CPCs, and you talked with von Neumann, and he of course by then had built an Institute for Advanced Study machine, and there was [???] —
Eckert and Mauchly were the people that spun off that. And then there was the Univac, that they [???] down as the first sort of commercially available machine rather than the handmade one. And we bought that, one of the early —
You bought Univac.
Univac 1. I think we bought serial number 3 or something like that at the Livermore Laboratory, [???] started with practically. That was our step toward essentially electronic vacuum-tube computing, but —
I skipped over just a little bit of tape at the end of that last side. [Note from transcriber: he skipped over no more than 10 seconds’ worth]
Univac, the first electronic computer we got at Livermore, and as I say one of the early copies of it, was an interesting machine. It was peculiar in some respects. For one thing, it operated on a decimal system, which is not what one thinks of in connection with those early machines. It had a memory which was mercury acoustic delay lines, which was a kind of interesting scheme. But it worked just fine. We learned a lot about how to use computers. One of the things we learned was that it had a thousand word memory, not 1,024 but 1,000. One of the things of course we had to learn early, because of the size of the problems we were doing, was how to buffer information between tapes, which were the slower memory medium, and the mercury tubes, [???] memory. But we learned that. There was on that machine a good buffering capability, so that in fact you could take a 60-word batch, put it into a buffer and have it more slowly be moving out to a tape or the other way around. And quite separately, so you could be doing your arithmetic at the same time as all of that, overlapping the I/O transfers. That was a very nice scheme. Not all of that capability was kept in there for years in other later machines, but it was there, and I learned and others learned a lot about how to use such machines, and mostly these issues of information transfer from one memory medium to another.
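[Editor’s note: the buffering scheme described here, where arithmetic proceeds while full buffers drain to tape, is what we would now call double buffering. The sketch below (the editor’s illustration; the names TAPE, tape_writer, and the 60-word batch size echo the interview but the code is hypothetical) overlaps “compute” and “I/O” with a queue of buffers and a writer thread.]

```python
import threading
import queue

TAPE = []  # stands in for the slow tape unit

def tape_writer(q):
    # Drain 60-word batches from the buffer queue to "tape"
    # while the main thread keeps doing arithmetic.
    while True:
        batch = q.get()
        if batch is None:      # end-of-run signal
            break
        TAPE.extend(batch)     # the slow I/O would happen here

buf = queue.Queue(maxsize=2)   # two buffers in flight, as on the Univac
writer = threading.Thread(target=tape_writer, args=(buf,))
writer.start()

words = list(range(300))       # results of the "arithmetic"
for i in range(0, len(words), 60):
    buf.put(words[i:i + 60])   # hand a full 60-word buffer to the I/O side
    # ... main thread continues arithmetic on the next batch here ...

buf.put(None)                  # signal end of run
writer.join()
print(len(TAPE))               # all 300 words reached tape, in order
```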
I believe that pretty early on Eckert and Mauchly wrote an assembly language called Short Code. Is that what you used, or something else?
I don’t remember that name for an assembly language, but the whole issue of the language was an important one in those early years. Initially we were writing in an absolute — with a 1,000-word memory you could fairly well remember what was in location 237, but it became a challenge after the memories got bigger. The operation code was fairly straightforward. It was an alphanumeric machine rather than just a decimal machine, and so you could store an a, for example, for add, and so part of a 6-character instruction would be a-blank-blank 2-3-7, [???] add the contents of 2-3-7 to what’s in the register, that sort of thing. So anyway, it was fairly straightforward, but it was essentially absolute coding. And so when we started going on to the later generations of machines of this sort, where the memories were getting bigger, one of the, to my mind, one of the most useful software developments was symbolic assembly language. To my mind in fact, looking back on that history, that was a bigger step for me in making life easier than FORTRAN later. Which is a fairly strong statement, I think. I mean —
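[Editor’s note: what a symbolic assembler buys over absolute coding can be shown with a toy two-pass assembler. This is the editor’s sketch, not the Univac’s actual assembler; the one-letter “a” opcode echoes the interview, everything else is hypothetical. Pass 1 assigns an address to every label; pass 2 resolves symbolic operands to absolute locations — the bookkeeping a programmer writing absolute code did by hand and redid after every change.]

```python
def assemble(lines):
    """Toy two-pass symbolic assembler."""
    symtab, code = {}, []
    # Pass 1: record the address (line index) of every label.
    for addr, line in enumerate(lines):
        label, _, rest = line.partition(":")
        if rest:                       # line of the form "LABEL: op operand"
            symtab[label.strip()] = addr
    # Pass 2: emit (opcode, resolved-address) pairs.
    for line in lines:
        _, _, rest = line.partition(":")
        op, _, operand = (rest or line).strip().partition(" ")
        code.append((op, symtab.get(operand.strip(), operand.strip())))
    return code

prog = [
    "START: load total",
    "a x",              # 'a' for add, as on the Univac
    "store total",
    "jump START",
    "total: data 0",
    "x: data 1",
]
print(assemble(prog))   # 'total' resolves to 4, 'x' to 5, 'START' to 0
```

[Inserting a line renumbers every location after it; the assembler simply re-resolves the symbols, while the absolute coder had to patch every address reference by hand.]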
[???] can see why, since [???]
Well, they’re comparable, I’ll admit, but to me I think the symbolic assembly language was a bigger help at the time than FORTRAN later. In fact I was a little slow in moving toward FORTRAN. I had a lot of essentially symbolic assembly language programs, and I continued to write in symbolic assembly language after FORTRAN became available, mostly because the compilers were not very good, and it took a long time to get them to be reliable. And in fact you found yourself spending more time in the early years tracking down difficulties with the compilers than you did worrying about the problems that you were having with your own program.
Well there was also an issue with early compilers about the fact that some people thought of it as having to run your program twice — because you had to run it to get it compiled, and then run it again to actually run it. It was very inefficient, and since the compilers were slow and machines were slow, this really took a long time.
That’s right. Although I think an early issue was that mostly writing in the assembly language we could write programs which in execution were faster than the typically compiled programs were, as the compilers being general had to pay a price, and therefore they tended to be slow in the execution phase as well as the compile time of course, which you had to pay extra. So in fact, the troubles with — We would find ourselves moving at Livermore to the next generation of faster computers fairly often, and when you had made that move you didn’t want to wait another year or two for the compiler to be working on it. So that was one of the aspects of this. The assembly language tended to be available almost immediately, and the compiler was —
Did you still have to —?
You know, the group of people would be working on the compiler for the new machine, and it would take them a year or so before they would get to it.
But each time you moved to a new machine, did you then have to rewrite the code from a previous [???].
Well, to a large extent, yeah. To a large extent. There would be language differences and capability differences of one sort or another. And that was a chore. That was painful. That was the price we would pay for the next step in computing power, and for that reason we used to think that a factor of five increase in speed was the only step that made any sense. So we would sort of hold off until we got a factor of five, and a lot of them turned out to be a factor of four only, in fact, but —
Marketed as a factor of five.
Right. But that was, it was a price that you were paying for these. And of course when I finally went to compilers, you still had to wait for the compiler to be running, but at least it made the transition to new computers eventually much more straightforward. So.
So CPCs, and then the Univac 1, and I believe the first Univac 1 was sold in 1951, so definitely —
That’s right. We would have been getting ours in late 1951 or ‘52. We almost immediately went into discussions with Eckert and Mauchly about it when we were first starting, even in our discussions in the summer of ‘52 I’m sure. I can remember visiting Philadelphia early on.
Oh yeah? The assembly plant in that un-air conditioned —
I’ve heard a lot of stories about that place.
The one that we got, which was serial number 3, as I mentioned, was the one that they used for the election in ‘51 I guess it must have been, the fall of ‘51.
The one that correctly predicted the outcome of the election, but the announcers didn’t believe it, so they reversed themselves.
That’s right. That was the same machine that was essentially that we had ordered.
What’s your memory of the next machines that you purchased at Livermore?
Well, then we moved over to the IBM machines, the 701 if I remember correctly, the 704, that series. All of those were essentially still vacuum tube until they got the transistorized ones, which were the 7090 or something like that, I’ve forgotten, the 709 or 7090, I’ve forgotten. Anyway, there was a series of IBM machines. Essentially that was what we went with at that point. One of the IBM machines was one that was — there were only a few built, the so-called Stretch.
The first supercomputer, people tend to think.
One of the roles that Livermore felt that it was playing was to push the computing industry. That was almost explicit. The guy that was running all of this at Livermore, Sidney Fernbach, was an early, you know, he recognized the whole broad issue of computing I think, and he realized, as others did, that we needed high speed computers. We’d have to push the industry to make them. We always needed something faster than what there was, and if we had to put in early orders to encourage the people to go ahead and build the machines, fine, that was what we’d do. So I think, as I say, it was really an explicit effort on our part to encourage the development of the building of new generations of computers, faster and faster. Because we could clearly see that we could use anything they could provide. It had to do with the fact that the more power that you had, the finer you could make the mesh, the more resolution you could get in simulation, and all this.
The process that still continues, and probably will continue forever.
That’s right. Absolutely. Yeah.
As far as anyone can see.
Although in those early days we were doing essentially one spatial dimension calculations, but soon, in the late ‘50s, after I returned from the year that I spent at Berkeley, I started on a two-dimensional, cylindrically axially symmetric configuration code, a hydrodynamics and radiation transfer code. And so that took a lot. That already ran up the amount of arithmetic you needed quite a bit.
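[Editor’s note: how sharply the move from 1D to 2D “ran up the arithmetic” can be quantified roughly. For an explicit mesh code with N zones per dimension in d dimensions, each step updates N^d zones, and a CFL-type constraint (Δt proportional to Δx) makes the number of steps grow like N as well. The figures below are the editor’s illustration, not from the interview.]

```python
def op_count(N, d, ops_per_zone=100):
    # Rough operation count for an explicit mesh calculation:
    # N zones per dimension, d space dimensions, CFL-limited time step.
    zones = N ** d          # zones updated each time step
    steps = N               # dt ~ dx, so the step count scales with N
    return zones * steps * ops_per_zone

print(op_count(100, 1))     # a 1D run
print(op_count(100, 2))     # same resolution in 2D: 100 times the work
```

[So each added dimension at fixed resolution multiplies the cost by N, which is why each new machine generation was absorbed immediately.]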
So, let’s go back to what you were doing with these machines. You were working on the simulations for thermonuclear explosions, and I think you talked about that some. You were about to talk about your early interest in atmospheric problems.
Oh yes. That was in the late ‘50s. Well, a couple of things had happened. One was that, as I mentioned, I had taken off a year, I was in Berkeley, to finish my Ph.D. in mathematics. But then when I returned, although it was a little like a sabbatical so I felt I owed them something when I got back, one of the first things I did was to extend to two dimensions, to cylindrically symmetric configurations, what I had done in spherically symmetric configurations. But then I was in a position where I could sort of choose which way I wanted to go, and at that time, as I mentioned earlier, there was the test ban on nuclear weapons and there was beginning to be the feeling, you know, that people were sort of pulling away from this, and I got curious about whether there wasn’t some other activity that would make heavy demands on computing and involve the kind of complicated processes that I was familiar with — radiation transfer, hydrodynamic flow, and so on — and so the natural one that came to my mind was atmospheric modeling. But when I first thought of it I didn’t know anything at that time about what the background of that was.
Really? So it was completely unfamiliar.
Well, that’s right. I vaguely remembered that von Neumann had been doing something, I knew about that, and in fact I —
Had you talked about that with him when you went out to the [???]
Not very much.
No. But they [???] the Institute for Advanced Study. I remember when I was there on some visit or other their mentioning that they were also doing these weather forecast calculations. In fact they were talking about — I think a forecast of a November storm was one of the first of these numerical weather prediction experiments. But, and I had heard about it —
In 1950 they did some forecasts with the ENIAC for three 24-hour periods I think.
Yeah. That’s right. I knew about that, but I hadn’t paid much attention to it at the time. But later I started thinking about the numerical simulation of the atmosphere as being something big and complicated and maybe also something that people would care about indefinitely. And that was what got me interested in it. There was someone at the Livermore Laboratory who had come out of the UCLA Meteorology Department and was involved; he was sort of the only meteorologist at the Livermore Laboratory.
Who was that?
Knox. His name was Joe Knox.
Oh, I’ve met him. Yes, sure.
The reason they had a meteorologist there was that they were interested in fallout problems and clouds and stuff like that, and so he was connected with that part of the program. So I went to talk to him about it, just to ask him, you know, what’s the background and what’s going on and who’s doing what, is it something that’s interesting to get into, and he said, “Well, why don’t we go to MIT and visit some of the people there who have been involved in this?” Jule Charney and Norman Phillips, in particular. So the two of us made a trip to MIT and visited with them, and essentially the question that I posed to Jule Charney was, “Here I am with a background in mathematics and physics and some computing. Is it worthwhile for me to throw myself into numerical modeling of the atmosphere? Which is what I am kind of interested in trying to do.” And this could have been during the spring of 1960, come to think of it, because in the fall of 1960 we were getting a new computer that was ten times more powerful than anything earlier available at Livermore, or that anybody else had. That was the LARC.
The LARC, the L-A-R-C.
Livermore Atmospheric Research Calculator or something like that.
Automatic Research Calculator I think. And of course I got to know Jule Charney much better later, and of course I should have known what the answer would be. He says, “Yeah, sure! Go ahead!” you know, so he was really supportive of the notion that I should try this. And so that meant that — this was in the spring of 1960, and the machine was to be delivered in October of 1960, and so I took off the summer and went to Stockholm where there was an international institute for atmospheric sciences. I’ve forgotten what it was called. That had been set up by Carl-Gustaf Rossby, one of the leaders in the early stages in this business. Rossby had died in the late ‘50s so he was no longer there, although when I went his institute was still there, and almost everybody that I met there had worked fairly closely with him. So essentially, I spent three months there during the summer of 1960 in Stockholm working at that institute — which provided me with a library and some people to talk to about what I was interested in trying to do, and learned something about the problems in meteorology. And it’s essentially at that time that I put together the code that I wanted to do it with. Now, the machine hadn’t even been delivered, but what I took with me was a manual describing how it was supposed to work and what the instruction set was and so on, and essentially I worked from that manual to write the code for the machine.
Now, when you were in Sweden did you see any other people doing this kind of modeling, computer modeling of climate or weather?
Yes. Rossby got interested in that, and some of the early work in numerical modeling of the atmosphere had been started not only in Stockholm but also in Oslo. So [???] met these people.
You visited these groups and looked at their models and their code?
Well, I talked about what they were doing, and I would explain to them what I was trying to do and get some advice on various things. What I was trying to do was more ambitious than most of the models there. I was going to build a 5-level model in [???]. They had been scared of water vapor because they thought it would blow up everything. But I had just gone ahead and done it, not knowing any better I suppose. And so I think my model was one of the first that actually had a hydrological cycle, with water vapor condensing, precipitation falling out, and latent heat being released and all the rest of it — which is a fairly important energetic driving mechanism for the atmosphere, as it turns out.
Right. I want to focus on this for a minute, because as I said, one of the things I am trying to do is build a family tree of GCMs. So I want to know more about who these people were and what sorts of things you saw in the models. Were these GCMs or NWP-type [???]?
That’s a good question. Let’s see, some of the early numerical weather prediction work Fjørtoft had been involved in. Some of these people had been in the group at Princeton, for one thing. I think Fjørtoft was part of that Princeton group that was getting started on numerical weather prediction. And then he was in Oslo, or continuing to be involved in these things. I don’t remember any — although the people in Stockholm and in Oslo had been interested in numerical schemes, I can’t remember their running any particular models in Stockholm for example. Or it may have been lack of computing resources at that time, the early years in computing.
Do you remember if they had computers?
I’m sure they did on some level. But I think mostly what Fjørtoft and others had done had been at Princeton, for example, in connection with that. Von Neumann’s early machines.
So most of these discussions you had there were conceptual.
Mathematical rather than actually looking at models or code.
That’s right. But for example when I was trying to learn something about the physical processes which are important in the atmosphere, I turned to an article by Arnt Eliassen. It was in the Handbook of Geophysics.
Spell that name.
E-l-i-a-s-s-e-n, I believe.
Okay. I can check that.
But he had essentially written down the physics equations effectively appropriate for the atmospheric flow, which I followed rather closely when I was building my numerical model. But there was very little on the subject of numerical modeling. There were parallel efforts, however, going on at about that same time. The two that I think of are Smagorinsky getting started at GFDL, eventually, although he was getting started even before that when he was in the Weather Service somewhere, the research part of it. George Cressman was early on involved in the forecast side. Now, there was this problem within the Weather Service between Smagorinsky and Cressman. Cressman was interested in numerical weather prediction. He had been involved I think in some of these early experiments with [???]’s group, or at least saw the outgrowth of that. After they at Princeton had seen that they could in principle make some crude forecasts, then of course they set up the so-called Joint Numerical Weather Prediction Unit, with support from the Weather Service, the Air Force and the Navy I think it was. Yeah. And then started moving in that direction of getting something going. On the other hand Smagorinsky, also essentially in the Weather Service, was interested in GCMs rather than weather prediction, and there must have been, I suppose, an issue of resources going in those two different directions, but it sort of became a battle between Cressman on the one hand and Smagorinsky on the other, which was a little awkward. I mean, I liked to talk to both of them when I was in Washington for various reasons, but it was sort of like visiting Israel and Egypt: you had to not tell one that you were going to see the other, because there was so much animosity between them developed during some of those early years — probably arguments over resources, I suppose. You know, they both needed high speed computers.
And I think that in some ways was unfortunate technically, because there was this parting between climate modeling and weather prediction, for a couple of reasons. One is that one of the problems with weather prediction models is that their climate may not be very good, and so they will slowly drift to their own climate, which is wrong, and that’s a bias on all forecasts. And if they had ever worried about getting the climate of the forecast model right as well as other aspects of it, they would have gotten rid of that drift.
That’s one. And on the other hand, one of the biggest problems in climate models has to do with cloud radiation interactions, which is one of the weakest parts of the whole business. And I have often argued that one of the best ways of checking that out might be in real time with a forecast model, to get it right there in check with satellite observations and the rest, so that you are getting clouds in the right place at the right time within the atmospheric model. Rather than trying to tune it on a climatological basis, which is essentially what they [???] doing, I believed that they could have gotten to the best prescriptions for generating clouds if they had done it in real time in the forecast mode rather than in the statistical climate mode.
Right. These sound like things you must have been thinking about later, afterwards.
Yes, yes, that’s right, that’s right, that’s right.
But so I think that was, in some ways, unfortunate. I think that’s all really history now, but there was no question about the fact that there was a real split between the GCM people, Smagorinsky, and the forecast people, the NWP people, with Cressman, and a personality conflict between them as well for that matter. I think the European Centre for Medium-Range Weather Forecasts however now handles both, although they are supposed to concentrate on forecasts. But nonetheless I think they know that —
Yeah. I also know that there is some work going on at NCAR to try to embed [???] scale models in [???] scale models in [???].
Yes. Well, NCAR tends to focus on climate modeling rather than forecast modeling. There has always been, well, yeah, there’s been an issue there about there’s a — If you get involved in the forecast business, you’ve got an operational responsibility, and I think that’s one of the things that Smagorinsky didn’t like. He liked the research aspects of the general circulation model rather than the operational aspects of numerical weather prediction. I think a lot of people said, “Well, why don’t you help out and there would be [???].” I don’t think he wanted to help out the NWP. He was afraid he’d get drawn into operational responsibilities, that’s what it amounted to, and lose the freedom that he liked.
Speaking of Smagorinsky and that group, what about the fact that their funders were the Air Force and the Navy in addition to the Weather Service? Did that have any influence on the directions they took?
I don’t think so. Of course that Joint Numerical Weather Prediction Unit, which was funded in all those ways, was essentially for weather prediction, and there was effectively an operational requirement. And of course then finally both the Air Force and the Navy split off separately into their own operations later. I’ve forgotten exactly what year. The Navy in Monterey and the Air Force in Omaha I guess it is. But I don’t think that — I think that it was generally perceived to be a good thing, and especially the weather prediction part of it, a useful and important activity. The climate modeling, I think essentially Smagorinsky had to get that out of the Weather Service, I believe, rather than from this joint operation.
Because the Air Force and Navy would have been primarily interested in weather prediction for their military purposes.
That’s right, that’s right, and I think Smagorinsky essentially got his funding out of essentially the Weather Service which later became ESSA which later became NOAA. And the GFDL is a NOAA laboratory effectively.
Now there was a lot of interest in the early ‘50s, and actually throughout the ‘50s into the early 1960s in weather control, and I know that von Neumann was one of the people interested in that. I’m not sure when his interest sort of fell off, but it —
Well, it was that question of weather modification and weather control. One of my earliest involvements in this had to do with the predictability problem. My theoretical work had to do with indicating theoretically the limits of predictability for weather forecasting, which was mostly a matter of estimating how rapidly small errors would grow, what the time scale for the growth of small errors was. And so I was, because of that, rather dubious about it. Edward Teller got very much interested in this. I almost had to argue with Edward about this matter. He would say, he said, “Either the atmosphere is very stable, in which case it is very predictable and we ought to do it,” or “it’s unstable, in which case it’s controllable.” Because he thought “I can do a little thing and have a big consequence.” Well the answer of course is it’s chaotic. You can do a little thing and have a big consequence, but it’s an unpredictable consequence, so — I kept trying to explain that to him, and I think I finally, after some while, got the message through. But Edward was very enthusiastic about weather control, modifying and something or other.
Well, some people were interested in using nuclear weapons for that purpose.
Well, that’s right, that’s right.
I think that’s right. I think that Teller and von Neumann probably talked about these things too. Or maybe Edward Teller got these ideas from von Neumann or something, I don’t know.
Had you had discussions with Teller about this issue of weather before you got interested in weather modeling?
Yes. Yes. No. Yes and no. When I got interested in it, I talked to him about it, and he was very supportive of my going into this. He had known about the stuff going on at Princeton. He knew what von Neumann was up to, and he was very, very, encouraged me very strongly to start looking into these weather modeling problems. I had got —
But you and Knox were the only people at Livermore, at least for a while, that were doing this work.
That’s right, and Joe Knox was mostly trying to track clouds, radioactive clouds in connection with [???] fallout problems as a practical issue. So he was interested in models into which radioactive clouds could be embedded that would be carried along. He I think essentially took existing models from UCLA. He was from UCLA, and some of the early atmosphere modeling work was done there by Mintz and Arakawa.
Arakawa. I’m going to interview Arakawa in a couple weeks.
Yes, you should, of course, because they were deeply involved in these early stages. At the beginning, really in my mind when I was getting involved in it, there were already two other groups that were involved: Smagorinsky and the Mintz-Arakawa group, each building simple models and—
And then the Sweets [?] as well.
The Sweets as well.
Yeah. Oh yes. In this country I’m saying. That’s right.
What about any other groups elsewhere in the world that you might have been aware of at that time, the late ‘50s?
Let me think.
Warren Washington, when I talked to him, thought that there may have been an Australian group working on this. Maybe not quite that early, but —
Maybe not quite that early. Burke in Australia.
Burke is spelled how?
Okay. Do you remember his first name?
No, I’m sorry to say I don’t.
I can find it.
Yeah, fairly early, but not as early as I think these times that I am talking about. I’m trying to think. The European center didn’t get set up until the early ‘60s. That’s right. I talked to the people when they were organizing it, so I remember the beginnings of it. Axel Wiin-Nielsen was the first director. Lennart Bengtsson was a guy that went around trying to get people interested in getting this thing put together.
Spell those names please, I’m sorry.
Bengtsson is B-e-n-g-t-s-s-o-n. I think. Bengtsson. And W-i-i-n-hyphen-Nielsen.
Glad I asked you. That’s a complicated [???].
And Axel Wiin-Nielsen was one of the people involved in the setting up of NCAR. I had met him when I was in Stockholm during the summer of 1960 and it was at that time that people were talking about this business of setting up a center in Boulder, and he and a number of others have [???] involved in these issues.
Okay. Well, let’s go back to your own work on this GCM. So you spent the summer in Stockholm [???].
In the summer I was putting together the model, and in the fall I went back to Livermore and the machine was delivered at the end of October, and I was running on it within a few weeks. As I mentioned, I had taken the manual with me, so I was free of all the concerns of the people that stayed in Livermore who kept hearing these stories about, “Well, it’s not working quite the way it’s supposed to.” I just assumed it would work exactly this way. And it’s true, there were a few changes I had to make because of differences between what actually was done and what the manual said, but it didn’t take me long to do that, and so I was running well before other people. I had an awful lot of computer time at the beginning because of the fact that I’d moved ahead of everyone else. Mostly other people were waiting for the compiler to get working, for one thing, and that was another year or so, so —
Do you remember was there a name for the assembly language you were using, or is it —?
I think it was just called SAL, Symbolic Assembly Language or something. I think nothing more profound than that. And I’m not sure that that was the assembly language for that or for some later machines that we had, so I’m not even sure about that. Wait. It didn’t have — I wasn’t using a symbolic assembly language on that machine, so I’m not sure. I think they were devising or developing it, but I’m not sure how far they, what progress they made on that machine with assembly language, Symbolic Assembly Language. I think when the IBM machines is when the Symbolic Assembly Language actually started being useful.
Okay. So what were you using on the [???].
I was writing essentially in absolute. That machine, the LARC, had a 1,000-word memory, and I was, essentially you had to remember what was located in certain memory locations. Well, it wasn’t that hard to do, with only a thousand words.
Your memory got as much exercise as the machine did.
That’s true. It wasn’t that hard to do with only a thousand locations though. And in fact it tended to have blocks, information block transfers. Sixty words was a unit of transfer, so I would tend essentially to set aside 60 words, if I remember correctly, for every zone, and I’d put all the variables for that zone stacked up in 60 words. It was fairly easy for me to remember which was which. So, it wasn’t that big a problem. And later when, with the IBM machines, a 71 and 74 and so on, the memory started getting bigger and it got to be more of an issue, and that was when I think the move to Symbolic Assembly Language [???].
Okay. Alright. Actually, let’s take just a brief break so I can go downstairs. [tape off, then back on...] Okay. So, tell me about this GCM. Describe its characteristics and —
The one that I built when I was in Stockholm, it was five levels essentially, or five layers. It’s a little hard to calculate that. It used a 5-degree by 5-degree horizontal mesh. It was written in such a way that in fact every latitude [?], it was, I started off with only a part of the Northern Hemisphere in fact, between the Equator and 60 degrees north for the first version. I put a wall, slippery wall, at 60 degrees north. But that meant that I could use essentially a 5-degree by 5-degree mesh without us getting too fine and [???]. That was my quick, initial solution to that problem. I essentially laid out every latitude line as a data set, and so I needed three adjacent ones for the finite difference scheme that I was using. I didn’t keep those all in the memory at once. I had those working off a — it had a storage which was rotating magnetic drums, in addition to tapes. And this was in the LARC computer. The rotating magnetic drums were such that I in fact laid out one latitude circle on one rotation of the drum for this strip of magnetic material. And then when I moved through the different latitude circles from the equator toward the pole, I could watch the head moving on the drum.
How interesting. Almost an analog.
Well, that’s right. So you’d step through. And I needed three lines in at once, but then the fourth line was being buffered in, so I had a number of these things. Essentially I needed three in the memory of ones from the old time. It was calculating a new time. An adjacent new time was being written out from a buffer, and on the read level, the new time, there was a fourth read in buffer that was bringing in the new stuff. So essentially there was a kind of a flow of information through this between the memory and the rotating drum. So I could watch the thing step through. And, I worked from — there were more than one drum, so I essentially worked from one drum to another on one time set, and then I’d reverse it and go the other way. That was not an unusual thing for me to do. In the early days using tapes I learned you could only write a tape forward, so it was a good idea to read it backward. You wouldn’t have to rewind it. You see what I’m saying? [laughs]
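The streaming scheme described here amounts to a double-buffered sweep over latitude lines. A minimal Python sketch of the idea follows; all names are illustrative, since the original was absolute LARC machine code, and the real scheme also overlapped reads and writes with buffers.

```python
# Hypothetical sketch of the drum-buffered sweep described above: three
# latitude lines of the old time level are resident in memory while the
# finite-difference step produces each new line. All names illustrative;
# the original was absolute LARC machine code.

def sweep(old_time, step):
    """Advance one time level, streaming latitude lines drum-to-drum.

    old_time: list of latitude lines (each a list of zone values).
    step: function(north, center, south) -> new center line.
    """
    new_time = []
    for j in range(1, len(old_time) - 1):
        # only lines j-1, j, j+1 need be resident; the rest stay "on drum"
        north, center, south = old_time[j - 1], old_time[j], old_time[j + 1]
        new_time.append(step(north, center, south))
    return new_time

# toy "finite-difference" step: a simple three-line average
avg = lambda n, c, s: [(a + b + d) / 3.0 for a, b, d in zip(n, c, s)]

lines = [[float(j)] * 4 for j in range(5)]   # 5 latitude lines, 4 zones each
print(sweep(lines, avg))
```

In the arrangement described above, a fourth line was always being read ahead into a buffer and a finished line written out behind, so information flowed continuously between memory and the drum as the head stepped from latitude to latitude.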
Yes, yes. Clever.
And again, it was a matter of working between two tapes. So you’d be reading one and writing on another.
Okay. So five layers and Equator to the 60 degrees on the Northern Hemisphere —
Yeah, that’s right. And 5- by 5-degree mesh. And it was five levels, and it was essentially the top layer was roughly speaking the stratosphere. I’ve forgotten exactly — I think essentially they were 200 millibar levels. I think they were even in pressure. That meant that things were — there was not very good vertical detailed structure. The earlier models, however, had been useful simulations of the [???] of the atmosphere with only essentially one layer. So, equivalent barotropic models that had already been used to more crudely forecast the weather, so five was actually something of a refinement. So that’s essentially what it was, and it was a primitive equation model rather than a balance model. That was one of the early decisions that had to be made, and it was being made by a number of people about that time. The difference in this gets a little complicated, but the equations of the atmosphere are general enough to support among other things sound waves or gravity waves like only happen when Krakatoa blows up. Usually you don’t get that sort of thing. And so, because of the speed of propagation of these things, the constraint that’s put on the time step that you use is fairly severe; it has to be maybe the order of a few minutes, five minutes, something like that — depending on the mesh size that you are using. And whereas the interesting motions are more like an hour would be the appropriate time step to have. And so some of the early model efforts at Princeton had learned how to modify the equations so only the slow motions were describable, the so-called balance equations of various sorts. But there developed some uneasiness about the validity of the approximations made for the balancing process, and it’s still an open issue that I think a lot of people worry about. And furthermore, in order to impose this balance constraint, it was necessary to solve effectively a Poisson equation, which you had to iterate to. 
It was a linear equation, but you found yourself spending the time iterating to a solution to this balance condition rather than taking small time steps. And so the amount of arithmetic turned out to be about the same. You might as well spend the time in small time steps, and let the acoustic modes, the gravity wave modes, which are fast and not necessarily interesting, but leave them in there and take the penalty on the time step that you had to use. So I and others were moving in that direction in any case at about that time.
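The time-step penalty being described is the CFL stability condition for the fast gravity-wave modes. A back-of-envelope estimate with assumed round numbers (the scale height and the mid-latitude mesh spacing are illustrative, not taken from the model):

```python
import math

# Back-of-envelope CFL bound for the primitive-equation time step.
# Numbers here are assumed round values, not taken from the model.
g, H = 9.81, 1.0e4            # gravity (m/s^2), assumed scale height (m)
c = math.sqrt(g * H)          # external gravity-wave speed, ~313 m/s

dx = 5.0 * 111.0e3 * math.cos(math.radians(45.0))  # 5-degree mesh at 45N, m
dt_max = dx / (c * math.sqrt(2.0))                 # 2-D CFL bound, s

print(f"c ~ {c:.0f} m/s, dx ~ {dx/1e3:.0f} km, dt_max ~ {dt_max/60:.0f} min")
```

This crude bound comes out around a quarter of an hour at this resolution, so the roughly five-minute step quoted sits safely inside it; and because the zonal spacing shrinks toward the pole, the bound tightens there, one practical reason a first version might stop at 60 degrees north.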
One thing I was going to ask you about earlier — we’ll come back to this later, because I want to talk about some of the publications that you have in 1965 and all that, but two questions that may be related. One is, between 1957 and 1965 in your CV at least there are no publications listed at all, and I’d sort of like to hear about what you wrote up, if anything —
What was I doing then? There was an issue, remember I returned to the, I got my degree and returned to the Livermore Laboratory, and so much of the work that I was doing was classified.
And that explains it to a large extent.
That’s what I suspected.
At that time, it was not until the end of the ‘50s — well, when did I start publishing again?
‘65, alright, that was when I was already starting to publish about results of the atmospheric modeling I presume.
Well, there was a fairly big gap between ‘52, which was when the laboratory was set up, and so the work I was doing was un-publishable until I got involved in this atmospheric stuff around 1960, and that finally led to publications in ‘65, I can see that. But that was what that gap was.
I’m guessing that today, with the right kind of request, one could get your publications, your Livermore work, at least the stuff on atmospheric models. I imagine that would not be still classified.
Oh, the atmospheric stuff. Well, the things that are listed here are the only things which are unclassified and published.
Yeah, but what I’m asking is, would it be possible for me to get from Livermore at this point the stuff that you did on atmospherics there?
The only thing that I did on atmospherics there is essentially what was already here. There was no classified work in atmospheric work, let me put it that way.
The classified work I did was in other areas. Now, I was — but notice, ‘65 was already a turbulence. I was really getting more interested in turbulence than I was in atmospheric modeling. In connection with the work on the atmospheric model, I found early on that turbulence was starting to — I started to get worried about aspects of turbulence. And along about this time I was starting to teach at the Department of Applied Science in Livermore, courses in this general area. Let’s see. [tape turned off, then back on...] Turbulence [???]. That was [???] of that.
Okay. But so the other question, which turns out to be unrelated, is one of the articles you have, you talk about Richardson’s models from the ‘20s, and I’m curious when you learned about those and whether they were part of your modeling work on this first GCM.
I knew about it almost immediately, when I started getting interested in this. It’s historically interesting, but Richardson’s book, which Warren got the Dover people or somebody, Warren Washington got the Dover people to republish in recent years, so it’s now readily available.
I didn’t realize that. I had to do it out of the deep storage here to take a look at it.
Right. Well, I first saw a copy of it when I was in Stockholm. At the institute there they had a copy, and I read it with considerable interest. It’s a remarkable —
Had you heard about it in the U.S. before?
Oh, I had known about it, yes. The history, background history was such that those in the atmospheric modeling business knew that Richardson had written down the equations first, is what it amounted to.
Yeah. I mean it appears everywhere in the literature, so I assumed that was true, but I wanted to check.
That’s right. But no, I think it’s Dover has a version of it, edition of it, that Warren Washington got, he persuaded the Dover people it will be valuable, useful to have this done.
Yeah. It’s a fascinating book.
Oh yeah, it is indeed.
My favorite part of that is the weather factory, [???] stadium full of people with calculators to keep track of the weather. A little bit ahead of [???].
Yeah. The interest in that came back, that Richardson’s description of that weather factory came back fairly recently in discussions of moving atmospheric models to parallel architectures, but you see it was a parallel machine he was using, talking about.
Massively parallel, in fact.
That’s right, and we were worried about the same issue of a message passing between those.
One of the really interesting things, I’m not sure if I’m right about this, but I’m pretty sure about the number of people that he had in mind, 64,000. It’s an interesting number from a computer perspective.
I’ve forgotten what the number was, but I know that there was fairly massive parallel computer. And almost exactly what people have later, more recently done in connection with parallel, moving these models to parallel architectures. Which I incidentally was very much interested in at the Livermore Laboratory in the early ‘80s.
We’ll talk about that more later when we get to it. So you’ve been describing the model that you made. How did you get the idea to do the visualization of it, and how was that done?
We’ll take a look at this later, I think after lunch, because [???] been going for a while now, but —
That’s an interesting question. We had already at the Livermore Laboratory started visualizing other results of computer calculations, so it was a technology that people there, at Livermore, had been evolving.
Give me some examples.
They were mostly hydrodynamic calculations in which you would have some kind of a mesh that was described, that had been used for —
Yeah. We already, at the Livermore Laboratory people had been experimenting with, in the especially two-dimensional hydrodynamic calculations we had been making. Of course you’d have some kind of a — these were usually Lagrangian calculations, that is, the mesh would flow with the fluid. And then you could have all the sort of mesh lines and you could combine those in one image. And then you could watch the sort of wire diagram, as it were, moving and shifting and changing and so on.
What were you visualizing [???], CRTs at that time, or —?
CRTs, yes. Yes.
It’s very early for that.
That’s right. I’ve forgotten exactly how that got started. There was a fellow named George Michael.
Oh yeah, I’ve met him.
Okay. Talk to him about it, because he was one of the early people involved in trying to bring this visualization technology, CRTs, to bear on these things.
And of course it was fascinating. You could watch the results of a hydrodynamic calculation that you had been making and watch it actually evolve on the screen, is what it amounted to. And so there was already work in that direction, and it seemed very natural therefore to me to use the same technology. It was not in color. It was essentially in black and white. It was white lines on black background. What I did in connection with the film that you are going to see, which is color, is to take several fields and make different film strips of it, and then have them merged later in a photo laboratory with filters of different colors so that there would be superposition of different colors, lines, on this thing.
I can’t wait to see it. I didn’t even realize it was in color, so even better.
Yes. That was kind of interesting. Got an outfit in Hollywood to do it. Pacific Title. It’s an outfit I still see their name in connection with the credits for motion pictures. They are experts in providing the titling, all the stuff in the beginning of the film, or the end of the film.
What’s the name of the company?
Pacific Title. It’s just a sort of a piece of a small warehouse down there, if I remember correctly, when I went there years ago. Stacked up were reels of film all over the place that they had been working on. But they had the technology for doing this sort of merging of different images onto the same thing, is essentially what was needed in this case. And so we would effectively produce three white-on-black line diagrams evolving in time, motion pictures, and then they would project them through filters onto a single frame. So you’d get, well, you wouldn’t be able to see this, but for example surface pressure field or 500 millibar height field and so on can be all superimposed on each other and you could watch over the Northern Hemisphere, you can watch these things emerging. And the one that I have is essentially a polar projection of the Northern Hemisphere, and you can see the patterns moving in mid-latitudes. And that was kind of interesting when I did that early on. I did it just because I knew we could do it, it would be interesting to look at, but it was almost too interesting, and whenever I’d go anywhere and give a talk about what I was doing, I would show the film and everybody was fascinated by the film and they didn’t care what I said about the technical aspects of the model, as far as I could tell. And in fact Smagorinsky used to chide me about it a little bit, he says, “That’s just big plan showmanship. There’s no science there.” But he started, they started making movies too.
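The optical compositing described here, three black-and-white filmstrips projected through different color filters onto one frame, is in digital terms just stacking three grayscale fields into the channels of an RGB image. A toy sketch, with illustrative stand-in field names:

```python
import numpy as np

# Digital analogue of the filter trick: three white-on-black line fields,
# one per filmstrip, stacked into the red, green, and blue channels of a
# single color frame. Field names are illustrative stand-ins.
h, w = 64, 64
pressure  = np.zeros((h, w)); pressure[20, :]  = 1.0   # a white contour line
height500 = np.zeros((h, w)); height500[:, 30] = 1.0
moisture  = np.zeros((h, w)); moisture[40, :]  = 1.0

frame = np.stack([pressure, height500, moisture], axis=-1)  # one per filter

print(frame.shape)   # one (h, w, 3) color frame from three monochrome films
```

Each superimposed field keeps its own color in the result, which is exactly why the composite let several meteorological fields be watched at once over the polar projection.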
Well, of course a lot of people don’t think that anymore. I mean, the people at NCAR have told me that the visualizations they have made of their models have taught them things about what’s going on in the models.
Oh yes. And in fact early on I noticed an effect of this. It was a peculiar one. I was looking at my early movies, and noticed something peculiar happening up near the polar region, and so I went and looked at that in the code, and I found that I had a bug in there. I had the wrong surface temperature. I had gotten a decimal point in the wrong place or something like that. There was an anomaly at that particular point, and it showed up right away on the film. But the other things that showed up that are interesting on the film, like 12-hour tide of especially cloud cover, which is still something —
Yeah, a kind of a 12-hour — You can see it going around, like a tide, like an ocean tide, but it’s an atmospheric tide, which shows up particularly in the water vapor field. Which people have talked a lot about atmospheric tides, but nobody knew very much about what they were. There is some evidence of the amplitude of these tides, but this actually right away showed one. It was kind of interesting in that regard. But mostly, well, mostly it just got people to see. Well, I can remember I drove places to give talks about what I was doing, and people would be watching the film with interest, and I remember once somebody came up to me afterwards and said, “I’m from Israel,” he said, “and I noticed the remarkably realistic way in which you’ve got these things tracking across Israel.” And of course I don’t know what’s going on in Israel, I had never paid any attention to it, but he had spotted by looking at it, he spotted the fact that in fact whatever it was doing, it was doing the right sort of things as far as he was concerned. I guess that’s become fairly common now. But it is true that that was a fairly early example, I’ve been led to believe by people like George Michael and others involved in the business of — A fairly early example of graphically generated output from [???]. Of course visualization is a big thing now, and everybody does it, but —
Tell me about, what sort of parameterization [?] problems did you face in making this movie?
Well, fairly serious.
Oh, and one thing I forgot to ask before, what was the time step?
It was about five minutes.
Oh, that’s very short.
Yes. Because of the fact it was a primitive equation model, so it was a short time step. I think it was five minutes, something like that.
What was the typical length of a run for this model?
It could run hours. I had lots of time, and so I could make — I think perhaps the simulated times would be of the order of a month or so. And the running times, if I remember correctly, it ran 24 times faster than real time, roughly speaking. I think it was an hour a day, if I remember correctly. Of that order, anyway. As I mentioned, however, I had the LARC computer almost completely to myself essentially, except for people checking out the compiler, for a long time, so I was able to get fairly long runs on it.
Okay. So parameterization [?].
Parameterization is an issue. I had effectively introduced a fairly large amount of viscosity, and nothing very profound —
[???] gravity wave.
That’s right. Well, and it prevents scrambling in general. There’s a problem with any of these hydrodynamic calculations, and that is that there’s a tendency for the larger scales to interact and make small scale structures — noise, effectively — and pretty soon it’s small enough so you can’t resolve it, so you have to do something to terminate the, sort of damp the stuff which is too small to be resolved, and the simplest thing to do is to add a certain amount of viscosity, just linear, make it a viscous fluid so that the small scales are essentially damped out. And that’s essentially what I did, and it was a linear viscosity at first. And all I did essentially was to crank it up until things looked okay, is essentially what it amounted to. Have enough. Put in. I knew theoretically, roughly speaking, how much I ought to have, but I was maybe a little heavy handed. Well, as it turned out, I had a viscosity coefficient of ten to the tenth centimeters squared per second. Well, the whole atmosphere mixes things as if it was about four times that, so I essentially had sort of an artificial diffusive viscosity which was about a quarter of what the whole atmosphere does in the way of mixing. I might have been able to go down a little on that, but essentially that’s how it turned out. And in fact, those early runs show a lot of signs of being excessively viscous compared to the real atmosphere. But they did, nonetheless, permit the — See, the eddies in the atmosphere develop, one can say, for the purpose of moving heat from the Equator to the Pole, through eddy structures. And you have to move a certain amount of heat. You don’t have any choice there. It’s pretty well known how the tropics are heated, and it’s pretty well known how the polar regions are cooled, so there is a certain heat transport that you need to get from one place to another and the only issue is how you do it. 
Well, I was doing a fair amount of it by ordinary sort of diffusion, heat diffusion, but enough was left over so it actually would establish these eddy structures, which are the things that are interesting. And that whole issue of course is a key one, which people still wrestle with, including myself, but as I say, at the beginning I knew roughly speaking what the so-called Grossaustausch coefficient, the gross mixing coefficient, is about four times ten to the tenth, and I used ten to the tenth is what it amounted to. If I’d used four times ten to the tenth I would have diffused everything presumably and had no eddy structures at all. I used a quarter of that and I got eddy structures, [???] structures and all that. Of course what one should use, and a lot of people have done since, and I don’t know if I ever did it in this model, which I didn’t look on beyond its early stages, but is to use the nonlinear prescriptions which adjust themselves to the local circumstances in an appropriate way — which I have written about a lot in some of the papers which I have listed here, so, but no, so it was rather sluggish in that regard.
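The viscosity numbers quoted can be sanity-checked: a linear viscosity nu damps a wave of wavelength L at rate nu times k squared, with k = 2*pi/L, so with nu of ten to the tenth cm²/s the grid-scale noise dies in hours while larger eddies survive for days. A sketch, where the grid length and the synoptic eddy scale are assumed round numbers:

```python
import math

# Sanity check on the quoted viscosity: a linear viscosity nu damps a wave
# of wavelength L at rate nu * k^2, k = 2*pi/L. The coefficient is the one
# quoted in the interview; the length scales are assumed round numbers.
nu = 1.0e10 * 1.0e-4                 # 1e10 cm^2/s converted to m^2/s
L_grid = 2 * 5.0 * 111.0e3           # two grid lengths (~10 degrees), m
L_eddy = 4.0e6                       # an assumed synoptic eddy scale, m

t = lambda L: 1.0 / (nu * (2.0 * math.pi / L) ** 2)   # e-folding time, s

print(f"grid-scale noise: ~{t(L_grid)/3600:.0f} h, "
      f"synoptic eddy: ~{t(L_eddy)/86400:.1f} days")
```

So the unresolvable noise is removed within hours while the eddies that carry the heat poleward decay over days, consistent with a run that works but, as he says, looks excessively viscous next to the real atmosphere.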
I think you said water vapor was included in this model.
Yes. Water vapor was included. Then there was one of the early models with water vapor in it, which meant that I actually permitted saturation to occur and things to rain out and they need to be released, and this was an energy source. The atmosphere [???] of course is not just a heat engine, it’s kind of a steam engine in fact, because the latent heat release is a fairly important part of driving it. And that, mine was one of the first models that included that explicitly as a quantity that was carried, water vapor, and then —
How did you go about parameterizing that?
Well, that was tricky. There’s no question about it. Especially the convective processes, vertical transports, and I don’t remember now the details of the convection prescriptions that I used, but I do know that I spent quite a bit of time getting something that would be plausible in connection with that. In fact I think there were separate notes that I wrote in those early years describing some aspects of the parameterization of the vertical convective processes. Mostly I did — well, I sort of, in those early years, it’s now become fairly common, but I pulled out essentially a single column model and would experiment with it rather than use the whole thing, because that’s really a vertical process. I’d prescribe effectively the consequence of the large scale behavior, and then just look at consequences of various prescriptions as far as the single column was concerned, and playing around with convection prescriptions and the like.
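The single-column strategy described here can be sketched with a toy dry convective adjustment; this particular prescription is an assumed stand-in for illustration, not the actual (unremembered) one from the model:

```python
# Toy single-column experiment in the spirit described above: pull one
# vertical column out of the model and try a convection prescription on it.
# Dry convective adjustment here is an assumed stand-in, not the actual
# prescription from the model.

def convective_adjust(theta, passes=50):
    """Mix any adjacent pair whose potential temperature falls with height."""
    theta = list(theta)
    for _ in range(passes):
        for k in range(len(theta) - 1):
            if theta[k + 1] < theta[k]:           # statically unstable pair
                mean = 0.5 * (theta[k] + theta[k + 1])
                theta[k] = theta[k + 1] = mean    # mix to neutral
    return theta

column = [310.0, 295.0, 300.0, 305.0, 320.0]      # K, surface first; unstable
print(convective_adjust(column))
```

The adjustment conserves the column's heat while removing the static instability, and experimenting with prescriptions on one column like this is far cheaper than rerunning the whole hemisphere.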
Okay. What about other aspects of atmospheric chemistry? Was there [???] or any of that?
I didn’t do anything about that, no. I at one point thought a little bit about adding chemistry, but it’s a somewhat separable problem from the rest of it, and I know from later experience that if you add chemistry you are adding a whole new dimension to the problem, and the amount of arithmetic goes way up. I know that because when I was at NCAR I had to make some decisions about what we were going to try to do in that direction. So, in that case as an administrator, so —
Did this model have a name?
The model that I ran at Livermore had the name L-A-M, LAM. I don’t know what it stood for. I thought it stood for the Livermore Atmospheric Model, other people said it stood for the Leith Atmospheric Model. But anyway, all I know is — and LAM is written on that as a label on some of these images. I don’t know why — well, it was sort of a tradition, naming codes. Everywhere. Not just at Livermore, but elsewhere. And somehow or other it was just a way of referring to it, identifying it. The LARC computer ran — let’s see, it was sort of an outgrowth of earlier computers that tended to run off — it ran off of metal tape, not magnetic tape, but —
Nickel oxide or something like that, yeah.
That’s right. And so it would have these rather heavy reels of tape that we would play with.
Well, Univac had the first, and it was carried over to the LARC, which was essentially sort of an extension beyond that. But it also meant that we didn’t use punched cards much on that machine. In fact if you wanted to put information in — it’s coming back to me now — there was a typewriter that you’d sit at and you would type out the information and it would go on line by line out of a metal tape that was being sort of stepped [???] above your head.
So I assume there is also paper, so that you can see what you [???].
Oh yes, yeah, yeah, right. And also you could take the tape and then run it through a printer effectively, which was really a typewriter being driven by it, to see what you’d got on it.
Anything else important about the characteristics of that model that you want to [???]
Not that I can think of particularly. It was early, rather crude, but it was surprisingly realistic in the images which it generated. A lot of people wondered how come in some sense it seemed that the atmosphere had to do certain things, and it would do them no matter how crudely you tried to model it. But that was encouraging in connection with that general circulation modeling. There are still rather fundamental issues I think about the sensitivity of the — the current problems in connection with climate have to do with the sensitivity of climate to changes in external influences, like the amount of CO2 for example, things of that sort. Or a change in the radiative, [???] radiation, things of that sort. Probably my model was not very good in that regard. In the early days of worrying about this sort of thing, Jule Charney gathered together results from a number of models to try to make estimates about sensitivity, and found that mine had been relatively unresponsive and insensitive, probably because of the large amount of viscosity and drag and so on in it compared to some of the others, and I think that was — It showed up also in the predictability issue. I think that was in fact what he was looking at initially at that time. It was a kind of a sensitivity matter also. Predictability issue of course is one having to do with numerical weather prediction, and that is if you have small errors in the initial state, how rapidly do the solutions diverge? So therefore, what is the limit on deterministic predictability of the weather from initial conditions? And I think that my model, because of its sluggishness, tended to show slower divergence of solutions than was realistic, even than I believed at the time I was getting involved in it. There are a number of ways in which you could get at the predictability of the atmosphere that were being used at that time. 
One of the simplest and most obvious was to take any run that had been made by a model, a forecast model, and go back to the beginning, make a small change, make a new run, and see how rapidly the solutions diverged. And that was done on a lot of models. Now, another scheme that I got involved in involved sort of writing down a turbulence theory for the growth of error and trying to apply that to this situation. And of course neither of these was the real atmosphere, and it was Ed Lorenz who tried to do this for the real atmosphere by going back through historical records and finding, if he could, matched pairs that were sort of close states to each other, and then watching the way they actually diverged in reality. The trouble was that you could never get very close pairs out of all the history of the records that he had, and so he had to sort of extrapolate back toward what the consequences would be if he got to smaller initial differences. All three of these techniques tended to give something like the same answer, so that made us reasonably confident that the RMS error doubling time was on the order of two days, three days, something like that.
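The twin-run technique described here translates directly into code. The sketch below is an illustration, not Leith’s model: it uses the Lorenz-63 system as a stand-in atmosphere, runs a control, reruns with a tiny initial-condition change, and fits the exponential growth of the difference to estimate an error-doubling time. All parameter values are illustrative.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def twin_experiment(n_steps=2000, eps=1e-8):
    """Run a control and a slightly perturbed 'twin'; track their divergence."""
    a = np.array([1.0, 1.0, 1.0])
    # spin up onto the attractor first
    for _ in range(1000):
        a = lorenz63_step(a)
    b = a + np.array([eps, 0.0, 0.0])   # small initial-condition error
    errors = []
    for _ in range(n_steps):
        a = lorenz63_step(a)
        b = lorenz63_step(b)
        errors.append(np.linalg.norm(a - b))
    return np.array(errors)

errors = twin_experiment()
# While the error is still small it grows roughly exponentially;
# the slope of log(error) gives the leading growth rate, and
# doubling_time = ln(2) / rate.
growth_per_step = np.polyfit(np.arange(500), np.log(errors[:500]), 1)[0]
doubling_steps = np.log(2) / growth_per_step
print(f"error-doubling time ~ {doubling_steps * 0.01:.2f} model time units")
```

The same diagnostic applied to a weather model, with the error measured as an RMS difference over the forecast fields, is what gives the two-to-three-day doubling time mentioned above.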
Last question on this, and then we can take a break for lunch. This was a primitive equation model.
To what degree did you rely on data about the atmosphere in parameterizing or anything else, and where did it come from?
That’s a good question. Of course it’s a climate model, so it’s only statistical information about the climate that you are interested in, but such things as the mean temperature distribution between the equatorial regions and the polar regions, and the vertical structure of the atmosphere. And one of the issues that comes up in connection with any of this has to do with the amount of fudging that’s done. After all, there are a lot of parameterizations. That’s a nice word for it, but there are a lot of parameters, a lot of things in there which you don’t know the value of from first principles, and so you feel free to adjust them on the basis of the behavior of the model. And surely there was a considerable amount of this done. What you essentially try to do, what I tried to do and other people tried to do, is to sort of mimic not only the mean structure of the atmosphere, but also to some extent the variance, the amount of fluctuation you see, and adjust things until it comes out about right. How many things I adjusted, how many parameters, I just don’t remember now. It mostly had to do with the horizontal diffusion coefficients that I mentioned. That was perhaps the biggest thing that I worried about. But in the vertical transport and convection processes and the like, there must obviously be other adjustable quantities. Since I did have water vapor, I also had condensation. Therefore I could generate clouds when condensation was occurring. I knew that I had a cloud layer there, and I tried to take into account, crudely, the effect of those clouds on the radiation transport which I had. But that’s very crude also. And as I mentioned earlier, it’s the problem that makes most people most nervous about what’s being done in these models.
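The tuning procedure being described, adjusting uncertain parameters until the model’s mean and variance look about right, can be sketched with a toy example. Everything below is hypothetical: a single damped, noise-driven variable stands in for the model, its damping coefficient stands in for a horizontal diffusion coefficient, and a bisection loop stands in for the hand-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
target_variance = 1.0   # hypothetical "observed" level of climate variability

def model_variance(k, n=20_000):
    """Variance of a damped, stochastically forced scalar:
    x[t+1] = (1 - k) * x[t] + noise.
    Larger damping k gives less variance, loosely analogous to the way
    a stronger horizontal diffusion coefficient suppresses fluctuations."""
    x = 0.0
    samples = np.empty(n)
    for t in range(n):
        x = (1.0 - k) * x + rng.normal(0.0, 0.1)
        samples[t] = x
    return samples.var()

# Crude bisection "tuning": adjust k until the model's variance matches
# the observed target, much as one might hand-tune a diffusion coefficient.
lo, hi = 1e-3, 0.9
k = 0.5 * (lo + hi)
for _ in range(20):
    k = 0.5 * (lo + hi)
    if model_variance(k) > target_variance:
        lo = k          # too much fluctuation: damp harder
    else:
        hi = k          # too sluggish: damp less
print(f"tuned damping coefficient k ~ {k:.4f}")
```

In practice the tuning was done by eye against published climatologies rather than by any automatic search, but the logic is the same: too sluggish, reduce the diffusion; too lively, increase it.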
So were you doing this mostly by just tweaking the model, or also by looking at meteorological encyclopedias or [???]?
Well, I knew enough about the mean structure of the atmosphere from published graphs and publications and so on and climate information. And something about the variance also. So that I knew when I was about right or when I was all wrong. And there is that concern that, well also people have tried to make estimates of what the heat transport is between the equatorial and the polar regions by eddy structures, eddy transports, horizontal eddy transports. And so there are these things that you can check against to see if you are getting something like the right behavior. But it was long ago and I don’t remember the details exactly what I was checking against. But it was somewhat more than just making pretty pictures. I do know that there is some quantitative information we tried to extract and find out if things were coming out about right.
Okay. Let’s take a break. [tape turned off, then back on...] Okay, so we’re back talking about your LAM model, and we’re about to watch a videotape. So why don’t you just — I’m going to play it —
I’ll describe it as we see it.
If I remember it. As I have mentioned before, this is essentially a north polar projection of the Northern Hemisphere, so the pole is in the center and the Equator is on the rim, and there’s a sun running around on the Equator, if you can see it. But this is the 600 millibar temperature, at contour intervals. The dotted ones are below a certain value, so they are cold, and the solid ones are warmer, relatively. The day indicator is up in the right hand corner. I’m not sure if you can see. If you watch, you can see the thing going around the circle.
Does it say “day” there at the top?
Yes, it says “day,” and then there’s a counter on the right.
What kind of time span is this covering?
I think it’s of the order of 30 days, but I’m not absolutely sure. This is the 500 millibar height field, the geopotential height field. And there you can see rather clearly the tide. It’s a 12-hour tide in fact.
And so is that around the equator that we are seeing this [???] ?
The Equator is the outer circle. And you can, and it’s in the tropics that the tide shows up the most.
But you can see a slow progression of the planetary waves at the [???] drifting off toward the east.
Okay. So this says 500 millibar geopot —
Geopotential. Which is essentially the height of the 500 millibar field.
And what does it say at the top? Do you have any idea? Top left.
Up on the top left it probably says LAM something or other, and probably north, for Northern Hemisphere.
Okay, this is the third segment, and it says, “SFC [???]” ?
Surface pressure. Okay.
SFC is surface pressure. There’s a lot more detail in it than the 500 millibar height field. And the dotted ones are essentially the lows.
So we see the pressure zones moving, rotating.
That’s right. Right. And you can see the continent outlines. So for example there is a deep one moving across the Atlantic Ocean over toward Europe. Here’s the omega field, the vertical velocity effectively in pressure coordinates. It’s a measure of the vertical motion across the 600 millibar level. And here again dotted is negative, which means upward: negative on the pressure scale is upward in the geometrical sense.
[???] then is downward?
And the solid ones are the downward contours. And it’s in the dotted ones therefore that you expect the convective processes to be occurring, where there’s upward motion. You can see essentially those upward motions moving across also, across the Atlantic, [???] the Northern Atlantic with the storm tracks. This is precipitation reaching the surface, and again you can see the band of precipitation going across the Atlantic.
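The sign convention being described, where negative values of the omega field mean upward motion, follows from omega being the vertical velocity in pressure coordinates (dp/dt); since pressure falls with height, rising air has negative omega. A minimal sketch of the hydrostatic conversion to a geometric velocity, with hypothetical values:

```python
# omega (Pa/s) is the vertical velocity in pressure coordinates, dp/dt.
# Pressure decreases with height, so negative omega means rising air.
# Under hydrostatic balance, w ~ -omega / (rho * g).

def omega_to_w(omega_pa_s, pressure_pa, temperature_k):
    """Convert pressure-coordinate vertical velocity to geometric w (m/s)."""
    R_dry = 287.05          # gas constant for dry air, J/(kg K)
    g = 9.81                # gravitational acceleration, m/s^2
    rho = pressure_pa / (R_dry * temperature_k)   # ideal-gas density
    return -omega_pa_s / (rho * g)

# a hypothetical mid-troposphere updraft near the 600 mb level
w = omega_to_w(omega_pa_s=-0.5, pressure_pa=60_000.0, temperature_k=260.0)
print(f"w = {w:.3f} m/s (positive = upward)")
```

So a dotted (negative-omega) region on the film corresponds to air moving upward in the ordinary geometric sense, which is where the convection and precipitation appear.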
Right, right. So these are all different outputs of the same model from the same —
From the same model and the same run; it’s different fields from the same model run. So they are comparable, and in fact in the later images they will be superposed. They are supposed to be in different colors. The colors are not too colorful, I must say, in this reproduction, but —
But on the original film they were.
Yeah, right. And this is a superposition of a couple of things which you can make out. I think it’s the surface pressure and the 500 millibar height field perhaps, I’m not sure.
Huh. It looks like, I see, yeah, the 500 millibar is say blue underneath the surface?
Yes, right. The pressure is in yellow I think somewhere, or whatever. Can you see the sun marker going around the Equator? If you follow, it loops around.
Oh, I see. Yes. Right.
It shows what time of day it is, or where the sun is [???].
This looks like it’s about maybe one and a half or two seconds per day on this tape.
That’s right, that’s right. One, two, about two seconds per day. Here is again a composite of [???] surface pressure and maybe, I’ve forgotten, precip — It’s hard to read that. It’s hard to read. They are just printed on top of each other. There’s a surface pressure for sure, I can see the “p-r-e-s-s,” but —
And then the 500 millibar something, but it’s — pressure?
That doesn’t sound right.
No. Surface pressure and 500 millibar pressure?
Could be. Well, it would be the 500 millibar height field in this system, so — geopotential is what it would be. Surface pressure and —
Another is in a different segment. Gosh, very interesting.
I think one of these combines surface pressure and vertical velocity, the omega field. Unfortunately, it’s just barely off in the fringe. It’s a little hard to see. But again it shows these — the things corresponding to that storm track going across the North Atlantic.
This is probably the last track on the tape. It says it’s seven minutes long.
Yeah. I’m not sure if I believe that. We’ll see. It may be. Yeah, it could well be. It looks to be the end of it. I am puzzled by that.
So, you can study it a few more times and see if you can make out the —
Yeah. Well, what I would like to do, if it’s okay with you, is I don’t know how well it’s going to come out, but I’ll try digitizing some of this. And I’m not sure if I’ve explained this to you yet, but there’s the book project that I’m working on, but in addition I’m a consultant to a Sloan Foundation grant project to work on building a website about the history of climate science. And so one thing I would [???] to do is —
That’s a matter of getting it into a machine readable [???].
Yeah. Exactly. So what I’m hoping to do is to put a, as I develop this family tree of GCMs to put information about it on the web, and I would like to include a digitized segment of this, I mean probably just a minute or so —
It might be worth getting in touch with Warren Washington again and finding out if in fact he has the original film which would be clearer than this.
What was it made on, 8 millimeters, 16, or —?
What’s that? Eight. Yeah. I think it was eight.
I’ll have to find a machine that can play it still too.
Oh, that’s right. Maybe it was 16, I don’t know, because it was —I’ve forgotten.
Well, I can get in touch with him about that.
Yeah, I think he would remember it, I’m pretty sure. The other person I think who may have a copy somewhere, and it could be at Livermore for all I know — let me ask around some of the people. Where I sit now is in a different group than where those people are, and some of them have moved away anyway so I’m not sure what luck I’ll have in tracking it down, but Mike McCracken [?] did have a copy of it some many years ago. He’s in Washington now, but some of these people are still around, and I can see if I can track it down through them. I don’t know how long he’s going to be in Washington, about a year or two maybe, I don’t know. He was a graduate student under me in the Department of Applied Science in Livermore when I was teaching there. So I sort of drew him into the climate modeling business.
Let’s go back upstairs. [tape turned off, then back on...] So, the next thing I want to ask about this model is, what became of it? Did anyone else ever develop it further? Did anybody want to borrow the code? Were there any [???] —?
I mentioned Mike McCracken. If I remember correctly, he ran it some in connection with his thesis research, and a few other graduate students ran it in connection with that, but it was not really evolved much further, and probably at some point it just wasn’t moved to the new generation machine, is what it amounted to. There was, somebody did write a FORTRAN version of it later, one of the graduate students, but I don’t even remember now who that was.
So the students who worked with this would have been students of yours in the Department of Applied Sciences at Livermore?
That’s right, at Livermore.
And where did they come from? How did they end up there?
Well, the Department of Applied Science was sometimes referred to as Teller — [???]. It was something that Edward Teller got set up there. He had wanted to set up such a Department of Applied Science. He would have preferred to have done it in Berkeley, but the Berkeley faculty would have none of Edward Teller, for reasons which you can properly understand. And so he set it up in connection with the Davis campus of the University of California that then had sort of a Livermore branch — which is still there. And it’s a graduate program, and —
Jim Knox is affiliated with Davis now, and this may explain the connection and —
That’s right. He’s in connection with one of the climate impact project program of some sort. But yeah, so this was a graduate program, and I started in some of those years in the ‘60s sort of teaching fairly regularly. About a third of my salary came from the Department of Applied Science and two-thirds from the laboratory. So I was spending — I was usually teaching about one course during the academic year.
And the courses would happen at Livermore or at Davis?
Livermore. So, although the Department of Applied Science was sort of in Davis, and there were some students at Davis and some students at Livermore. Usually often what happened I think was for the first year or two they would be at Davis taking graduate courses in various areas, but then they would move, as they became more specialized, concentrating on particular thesis topics, they would come to move to Livermore. As I remember correctly, that’s the way that worked, more or less. And so I was usually teaching a fluid dynamics course of some sort or other, turbulence and some climate aspects, but there also was a course of applied mathematics that I took over for one term, if I remember correctly. Numerical methods and the like.
Any other sort of ramifications of this model? Spinoffs, other people at Livermore who might have worked with this besides students?
Not very many people. After all, the atmospheric modeling was not a key part of the Livermore program. Although —
You and Knox were still the only ones to do this as a sort of mainline professional activity?
That’s right. Of course he was the only honest meteorologist. My background was not meteorology, but I’d come in. And yet within more recent times there has been built up a kind of an atmospheric modeling community, climate primarily, with a number of people involved. But yeah, that was, but as I say, although I was given considerable moral and material support for doing this sort of thing, it was not what you might think of as a main part of the Livermore program. It was perceived, however, as an activity that people could work on that was unclassified but which was using the same kinds of, developing the same kinds of skills which were also valuable in the classified activities.
Did Teller keep track of this, and did you have conversations with him about it?
He always remained interested in what was going on. Not closely, however, but this issue of controlling the weather kept coming back. Every so often I’d get a note from his secretary that he’d like to see me for a while and give me a lecture again about the importance of this, that and the other thing. In fact, he’s still there. I still see him about once a year on one of these, he gets some kind of a notion about something or other and likes to use me as a sounding board for some of these things.
He shows up here periodically [???] classes here at this point I think.
Hoover Institution I think. Yeah. He’s a fascinating guy. I enjoy him one-on-one. I sometimes cringe when I hear him in public saying some what I think are rather strange things, but not so much lately. He hasn’t been that much of a public figure lately, but I can remember times when he — But as an individual to talk to, he’s just fascinating.
I can imagine. Let’s see. Next thing I want to ask you about, I think, is this book. This is a fascinating book called Global Weather Prediction: The Coming Revolution, edited by Bruce Lusignan and John Kiely.
And the introduction to the book tells us that this was a — the book is published in 1970, but it was a spinoff of a course that happened here at Stanford in 1966, according to the book.
Oh. I hadn’t remembered that. I hadn’t remembered much about it, except that I was asked to make a contribution to it.
Well, that’s what I wanted to ask about, because the implication is that there were 40 or 50 scientists and engineers involved in this and people from business as well who were thinking about a kind of weather satellite project, and I wanted to know more about that and your involvement in this project.
1960, which was the summer I spent in Stockholm, was an interesting time in technology in connection with these things, because at the same time that they were starting to launch satellites, which were taking images of the cloud cover of the earth, which was one of the greatest boons ever to human forecasters, using the pattern recognition capability that people are very good at, we were also developing numerical models. And computers didn’t know what to do with the satellite images particularly, so there was a kind of peculiar technological branching here, where something came along that was finally really great so far as the old fashioned way of doing weather forecasting, or perceiving the state of the atmosphere, was concerned, at the same time that people weren’t doing that anymore. They were going off letting computers generate these things. So really I’ve always thought that was an interesting crossroads point, around 1960. But it is true that the meteorological satellites were coming along about that time and giving these fantastic images of the surface of the earth in a global sense.
Were those primarily or exclusively visual images?
Initially much of it was visual images.
[???] capsule [???] —?
Yes, and are finally transmitted I presume in a bit stream back down to the [???] station. But that probably was a while. I think the trouble is, I don’t know exactly what the history of this is. There are two levels of what’s going on. There was meteorological satellites for these purposes, but there are also of course spy satellites. And so it’s confus — So the question in my mind about whether the technology was going [???] with these two streams.
Yeah. There’s an excellent graduate student at UC Santa Barbara who has been opening up all the classified spy satellite history, and what he is learning is that there were parallel developments in these fields and often what was happening was the spy satellite resolutions were much higher than the ones of publicly available meteorological information.
Yeah. I have been in the past a member of, well in recent years, a member of a committee in Washington that worries about the spy satellites, so it’s funny. I know all that technology developed through the years also. In particular, this is a group that is trying to look at what can be taken out of that archive of stuff and fed back into the scientific community in general and as a guide to declassification now.
Right. Because [???].
It’s tricky. But there’s a lot of stuff there, and the question is, what can you pull out of it and declassify and use for scientific purposes without revealing the details of how it is that you got this stuff in the first place? These people have sat around arguing about this for the last four or five years. It was Al Gore, I guess, who sort of got this thing going; the Vice President got it going because of his interest in doing this sort of thing. So he was the one that pulled this committee together to try to do something about it.
Interesting. Okay, so back to this book. You were talking about Stockholm and [???] weather satellites.
Yeah, that was — When was the book published? It was in the later ‘60s.
That was in 1970. It says that the lecture series from which it was drawn happened in 1966.
Well, that sounds about right, because about 1960 more and more people were getting involved in this, the combination of satellite images and numerical models and all that was sort of coming together to bring kind of a new view of keeping track of the behavior of the atmosphere. That’s right. And of course there was the general circulation model path as well as the numerical weather prediction path, which we talked about earlier, and I think much of this has to do with the climate issues of the general circulation modeling.
Yeah, there are two articles, there is an article in here by you on a 6-level model of the atmosphere.
That was a description of the model that I have been describing to you in fact. In fact, that may be the best description around, although there was one in Volume 4 of Methods of Computational Physics I think, which is —
Yeah, I have that too. But I thought you told me before there was a 5-level model.
Well, I may have counted wrong, that’s all. I may have called — sometimes it’s hard to know what you mean by levels, how you count, and so on; layers or levels or whatever sometimes gets confusing.
Okay, but it’s the same —
I think it’s essentially the same model.
Alright. And then the other article in here on atmospheric modeling is by Yale Mintz, on the four basic requirements for numerical weather prediction.
But you don’t recall being involved in the lecture series itself or in this class project?
This was something that was added in.
Okay. Alright. Oh. This is a stray question from an earlier period, but were you aware — there were some experiments, and I don’t have a good line on exactly how this happened, I’ve just heard rumors about it from people, but apparently there were some analog models for climate made in Chicago using dish pans.
Oh, dish pan experiment. That was Dave Fultz and Jerry Plossman [?] may have been involved in it, but Fultz is the name that’s usually associated with that.
F-u-l-t-z. But I think Rossby was originally kind of interested in that possibility also. The atmosphere of course is largely just a rotating fluid, and so the notion was, well, alright, set up a dish pan and get it going and see what happens; in this case they would cool it in the center and heat it on the periphery to correspond to the heat transport requirements of the atmosphere. And then watch for these waves to develop, as they did, and take pictures of them. And you would develop a 5-wave pattern typically on these things. So there was a lot of that done, as I say, primarily by Dave Fultz, in Chicago in the laboratory that was set up there. And they developed more and more refined techniques and used more and more detailed sensors. I remember however I had an argument in the summer of 1960 with Jule Charney about this. He wanted me to forget about modeling the atmosphere, which is what I was trying to do, and model a dish pan instead, a rotating dish pan. Well, I didn’t do it, and I argued. His reason was that he thought, well, that’s simpler, and we’re beginning to develop some theory about how this much simpler configuration works, and it would be nice to see if we could get a numerical model that would bear out some of these theories. But my argument with him was that I would rather go for the real thing on the one hand, and on the other I pointed out to him that, owing to the needs of weather forecasting, the real atmosphere was the best instrumented — ...Those were the counterarguments that I was trying to use. Of course he was right that the simpler configuration of the dish pan experiments made it easier for people who were interested in theoretical fluid dynamics issues to begin to construct some theories about what was going on. And that was why he was interested in it.
But I persisted in trying to go for the real thing, partly because of this business of the instrumentation. The instrumentation of course comes from the requirements of numerical weather prediction, which requires soundings made twice a day at noon and midnight Greenwich [?] time all over the globe, and that provides an awful lot of detailed information about the actual state of the atmosphere.
A lot of the sort of global organization of that instrumentation system started during the IGY in 1957-58.
Yes. That’s right, that’s right.
What do you remember about that?
Not very much. I was only beginning to get involved in these things sort of toward the end of the ’50s and in ’60, and so I didn’t — I knew, from hearing about it, that a lot of the people I got to know when I got involved in this had been people involved in the planning and organization of that. Although I then got more deeply involved in the planning for the Global Atmospheric Research Program, GARP, which came somewhat later. But yeah, the requirements — of course the sounding requirements had gone back a long time. It was just a matter of trying to keep them organized and spread them out and fill in blank spots and all the rest of it. And GARP, the Global Atmospheric Research Program, was trying to do that even more so later. But soundings are expensive, especially off in some remote place with [???], and satellites are pretty good for this, except that it was a lot harder to interpret the vertical temperature and moisture profiles from satellite soundings, although they made some progress on that [???], those infrared sounding techniques. But satellites really were making a big change, although as I say initially the satellite images were pictures of clouds, is what it amounted to. Later they started developing sounding techniques which would give you temperature profiles and the like. Winds were a little harder to get at, and perhaps the most valuable thing you could have gotten from satellites was winds, as it turned out, but that was done by tracking cloud fragments and stuff like that. And it was somewhat indirect, and not completely satisfactory. There were however, during GARP, programs releasing a large number of balloons that would float around for a long time and be tracked. That was part of a general program —
Well, since you brought it up, let’s talk about GARP. I believe that the sort of impetus for that came from Kennedy in the early ‘60s, but that the program itself didn’t really develop until considerably later.
That may be. I think it was being developed during the ‘60s. I have forgotten exactly what the first global experiment year was. But anyway, it’s somewhere in there. I was somewhat drawn, I was drawn into it fairly early I think. I was a member of various organizing committees for it, or the organizing committee for GARP for a while, an international committee. That is, GARP was a global activity, there were lots of nations involved.
It says ‘71 to ‘77 you were on the committee.
So in fact it was in the ‘70s that GARP got really going better I guess. Well, it evolved into a world climate research program. It was a kind of a transition from the GARP, which was more concerned with providing new and better information to initiate weather forecasts, and then that sort of got folded into a climate research program. That was the late ‘70s that I was on the joint organizing committee for GARP and the Joint Scientific Committee for the climate program was early ‘80s, or it was later. I don’t know —
Actually maybe we should back up a bit, because I want to find out about how you got to NCAR from Livermore.
Well, when NCAR was set up in 1960 they were talking about it, and I forgot, it got started about 1961, something like that. But I’d been talking with the people who were organizing it, many of whom I talked to in Stockholm when I was there during the summer of 1960 where that was sort of in the air at that time, the notion of setting up such a national center, so that —
So this is Walt Roberts?
[???] Roberts was the first director. Now, he was not an atmospheric scientist. He’s a solar physicist. But they had been looking around for some first director for the place and some place to get it started, and I’ve forgotten exactly the reason why Boulder seemed like a very good place to have it, and Roberts was already there with his high altitude observatory, so the notion was that well, we got the nucleus of something organized already, let’s just expand that. So that’s essentially I think the argument that was made, and he was of course very much interested in doing this. And they did want him as a first director, because he already had proven skills as a director in connection with the high altitude observatory. But he refused to leave Boulder, so that decided where NCAR was going to be. It’s as simple as that.
Well, it was a good choice. Boulder is the nicest place.
It is. And so NCAR was set up in Boulder, gathering together some people from around who had been planning and talking about this, and of course they too realized that one of the first things they had to do was get hold of computers, so that was an issue, and Walt Roberts had early on called me in, in connection with discussions about what kind of computers might be available and what would be useful and so on. And as well, the other people that were setting up NCAR were people that I had known and interacted with quite a bit, so I found myself visiting NCAR more and more often during the ’60s, until finally I just went, is what it amounted to. Yeah, so I just moved there.
What kind of interactions did you have with them about their first climate model, which I think that project began around —
That was Warren Washington and Akira Kasahara, who were the people putting together that first climate model. And one of the reasons I visited so often was to talk to them about my experiences and what they might do and so on. Their model was by no means a copy of mine, but nonetheless they had a lot of discussions with me, as they must have had also with Yale Mintz about the experiences that he had had in connection with climate modeling. So I sort of watched with interest and tried to make what helpful suggestions I could to Warren and Akira Kasahara when they were putting that first model together. Of course they knew that one of the first things they would want to do at the National Center for Atmospheric Research, with computers, was to put together an atmospheric model. And so that’s when I got to know them well.
And what advice did you give them about computers in this period?
Well, I’m not quite sure. We all were always looking for the most powerful computer we could get, and so it was hard to give them any advice that people didn’t know already. But the computing center at NCAR had an advisory panel, as computing centers tend to do, and I was a member of it from almost the very beginning. Still am, off and on. It’s been about 30 years or so I’ve been connected with that committee, although my most recent term just ran out, I am happy to say, last, I don’t know, a few months ago. Having been on the same committee for about 30 years, I got in that awkward situation where during the committee meetings some question would be raised by some relatively new member. A question I had heard raised at least three times before over that period of time. The same questions keep coming up over and over again of course. Yeah, so I was involved in it, and as I say a frequent visitor. I would spend the summer there for example, working with —Initially, they were not in the Mesa building that was later. First they were in various, well, the first place was essentially on campus, a building that they took over.
At the University of Colorado.
At the University of Colorado. Right. Then they took over some, I think, university plus government NOAA buildings or something, a little bigger, a little fancier, while they were planning the building of the Mesa. Have you been to —?
Yes, I have. Yeah. Magnificent building.
That’s really a spectacular building.
I remember Walt Roberts showing me around the mesa before they started, telling me about their plans for — He was really enthusiastic about it.
It’s a really fantastic thing. So, this is sort of another stray question that you can answer from any period. Did you ever have any interactions with computer designers from IBM or CDC or Cray or —?
Let me think.
Because one of the things I'm interested in is this: it's very clear that nuclear weapons work had a significant influence on supercomputer development, on particular kinds of calculating needs. I would like to see to what degree climate modeling also had an impact on the way supercomputers developed. A lot of people said that buying the best computers possible was an absolute necessity, and they were very good customers at least. Did you also interact with the designers?
I have the feeling, looking back on it, that the impact of the weather modeling and climate modeling customers was not as great as that of the weapons customers, mostly because the weapons customers had more money sooner, is what it really amounted to. And yet, nonetheless, I wouldn't be surprised if the European center as a customer, for example, was perceived as having some impact on the development. I can't think of any computer that was developed for climate purposes, except maybe the — There were only a few Stretches. They didn't work out. But indeed one of them went to Smagorinsky's laboratory, so he may, in a sense, as a potential customer for that machine, have been an encouragement, perhaps unfortunately, for IBM to have built that machine. Well, one says that. Everybody points to the Stretch as being an IBM failure, effectively, but out of the Stretch they acquired the transistor technology for a whole generation of computers. So they have no reason to complain about that experiment, I think. It really initiated a new line of computer development. It itself didn't work out too well, but they learned a lot on it, is what it amounted to, I think. So —
What computers did NCAR have when you got there?
Well, let's see. When I went, in the late '60s, I think it was the CDC 6600 or thereabouts.
That’s the right period.
I think they may have temporarily got some 3600s, and then the 6600 and 7600 were on that path. I think that was about it. I remember interacting with the people. I had been working on the 6600, and a colleague of mine at Livermore, within my working group there, under my encouragement, had developed what we called stack loops. That is, you take vector operations and you write an assembly language loop to do them much more efficiently than the compiler would typically treat them. We could pick up a factor of two or three in speed with this on the 3600, if I remember correctly [???]. But then Seymour Cray got around to vectorizing and his machines got better, and so we lost some of that, but for a while we were picking up a factor of two, two and a half in speed by this trick. Later the hardware did it for us, and so we —
And what did you call it again? Stacking?
We’d stack loops, we call it.
They were little macros, effectively, which were vector operations. You would call these subroutines, which would be A times B plus C equals D, for example, but what would happen essentially was that it would lay out the issuing of instructions in such a way that you'd fill the instruction issue slots all through the loop. You sort of tried to get the optimal scheduling of the issuing of instructions. And so you could pick up a fair factor that way. The amusing thing was that the fellow who was working on this came into my office once and said, "Well, I got the divide speeded up." The divide is always slow on these machines; it was 60 clocks or something or other, and the machine was [???] divide every 30 clocks. He was doing the divides faster than you could [???] machine divide time. But all he did was to perceive that you can replace four divides by a lot of multiplies and a single divide. You multiply all the denominators together, divide once, and then multiply everything else out to get back to what you wanted. Multiplies are so much faster than the divide that this would speed things up considerably. So that vector divide loop, people are still using that trick. Even with compiled code it's useful. You can pick up a factor of two or three, and I've handed it out to people all over the world by now. I have little packages which I give people for vector divides. The UK Met Office, for example, picked it up from me a year or so ago, if I remember correctly. But that was just one of the things. That was the most dramatic, perhaps, of the various speed-ups that came out of this little business of these little loops that you put in.
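The vector-divide trick Leith describes can be sketched in modern terms. This Python version is an editorial illustration, not Leith's original assembly loop: n divides become one divide plus a chain of multiplies, using running products of the denominators.

```python
# An editorial sketch (in Python, not 6600 assembly) of the vector-divide
# trick: replace n divides with a single divide plus multiplies, by
# forming running products of the denominators.
def batch_divide(numerators, denominators):
    # prefix[i] holds d_0 * d_1 * ... * d_{i-1}
    prefix = [1.0]
    for d in denominators:
        prefix.append(prefix[-1] * d)
    inv = 1.0 / prefix[-1]          # the single divide: 1/(d_0*...*d_{n-1})
    out = [0.0] * len(numerators)
    # Walk backwards, peeling one denominator off the running inverse.
    for i in range(len(denominators) - 1, -1, -1):
        out[i] = numerators[i] * prefix[i] * inv   # equals n_i / d_i
        inv *= denominators[i]                     # now 1/(d_0*...*d_{i-1})
    return out
```

The payoff comes when multiplies are several times cheaper than divides, as on the machines Leith describes: the divide count drops from n to one.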
I remember I figured out that trick last year when I was living in France. The exchange rate there is five francs to a dollar, so I was having to divide everything by five. Then I realized that instead you can multiply by two and move the decimal point, and it was a lot easier than dividing by five.
But this other issue, I guess I still think it was the weapons business more than the climate business that pushed the computer technology, and maybe still does, I don’t know.
Clearly more so, but the question is whether there was any influence from the climate and weather side.
There may well have been. It's hard to tell what vendors react to, but even now the advanced, whatever they call it, scientific [???] initiative, the Department of Energy ASCI program, is essentially doing the same sort of thing. They are trying to push computing.
Yeah. High performance computing.
High performance computing. It came along with the test ban, as kind of a tradeoff with Washington.
Right, right, because you want to do testing by simulation instead of —
That's right. I don't believe it, but nobody else does either; anyway, you've got to do something. Well, there was the other aspect of keeping around and bringing in young people who would learn the trade. They had learned it on computers, but nonetheless there is a certain capability you would prefer not to have die away, is what it amounted to.
So, you ended up at NCAR in ‘68.
In '68, yes. I went there at the invitation of Walt Roberts, but also there were a number of people there I had already been interacting with a lot, and so it was kind of a natural thing to do. I drifted into administrative responsibilities at NCAR within a few years, and eventually became director of the division where the climate modeling and the weather prediction dynamics and so on, the numerical modeling, was done. In fact things have since been rearranged somewhat, but Warren Washington later filled essentially that position that I was in. I was involved in that for some years.
Another thing that's on your CV from that period: from 1968 to 1970 you were the chair of an NAS panel on Environmental Data Services.
Yeah. The weather service and various related things essentially went through various evolutions, to the Environmental Science Services Administration, ESSA, and then it became NOAA, that sort of thing. And typically along through this history they would keep having advisory committees of one sort or another, often under the aegis of the National Research Council of the National Academy of Sciences, and it was in connection with some of these things that I found myself being drawn in. Then there was also, under the National Academy of Sciences and National Research Council, the so-called Committee on Atmospheric Sciences, which took a somewhat broader view across the nation of what was going on in the atmospheric sciences, and I became involved in that and in some of its panels and so on, and for a while I guess I was chairman of it.
Do you remember who else was involved with those committees, who else was serving on those committees?
Other people. Vern Suomi is the person that was mostly involved in that sort of stuff. Suomi, S-u-o-m-i. A Finn. He was one of the early advocates of satellite meteorology, and he made a large contribution to getting all of that organized, but he also was deeply involved in the Global Atmospheric Research Program in particular. The balloon programs also were largely his doing. And he was very, very active in connection with these various Academy committees, panels, and so on.
Were you also involved as part of that work with international cooperation and [???]?
Well, yes. GARP of course is an international program, and so the Joint Organizing Committee had members from various nations around the world. Two members from the U.S., two members from Russia, two members from, I've forgotten what other country had as many as two, maybe Britain. And one from a lot of other countries. I think it was a dozen members, something like that, scattered.
Anybody in this group from the Southern Hemisphere?
Not so many, and that was something of a problem. We would usually have a member from Kenya, in Africa, if I remember correctly; we met once in Nairobi, because wherever there was a member we would eventually get an invitation to meet there. But nothing from South America. We had Mexican representation, but nothing much from South America, and even at the time I wondered about it. It's just that there wasn't much going on in atmospheric work or modeling, or any combination of the two, in South America, for reasons that are not completely clear to me. In fact, there are aspects of the geopolitical development of South America versus other parts of the world that I am still puzzled by. I don't quite understand it. After all, it's got the climate, it's got everything else you might think, and yet there is something different about South America. Whether it's the influence of the church, I don't know, but anyway, it's an interesting difference which I have been puzzled about. I guess you probably know more about it than I do, or have thought more about it.
Actually I don’t, and it is a big puzzle.
It is a puzzle.
Did that mean that there were data problems from South America?
No. The global system worked pretty well. We had data problems from the oceans, where there was nobody. That was the worst part. We'd have big holes in the ocean [???].
Even though there were people in GARP from the Southern Hemisphere, there were still national weather services.
The national weather services, operating under the World Meteorological Organization and its predecessors, had laid out this global observing system for routine weather forecast purposes, even before worrying about numerical weather prediction, just because they wanted to draw global weather maps. Well, it's true, initially they were mostly Northern Hemisphere weather maps, and so it was slower for the Southern Hemisphere to get itself organized, no question about that. But when the numerical models came along, there again I think the first weather prediction models were probably Northern Hemisphere models.
Yeah, they were.
On the feeling that nothing much crossed the Equator. So there really was a separation between the hemispheres in this business for a long time.
How about, any Australian representatives?
Australia, yes, definitely. Australia was fairly deeply involved in these matters early on, both in numerical modeling and also in connection with [???].
[???] the names from the GARP project in Australia?
From Australia. I'm not sure if I can remember the names of the Australian members. Oh well, let me think. Tucker. Brian [?] Tucker was the name of an Australian who was fairly deeply involved in GARP, in the Joint Organizing Committee. He's the main international organizer from Australia that I can think of. He was more on the administrative level within the system there, in the Bureau of Meteorology. Oh, I guess he was connected not with that but with the CSIRO, the research organization roughly equivalent to the NSF for Australia. He was essentially connected with the CSIRO activities.
Let’s go back to your first years at NCAR. What is the first time you remember being aware of the carbon dioxide doubling issue as something that people were beginning to focus on?
Okay, let me try to remember. Earlier than the CO2 problem, what we called the ozone wars came up. That had to do with the supersonic transports and the concern people had that they would upset the ozone layer, and what consequence this would have. As well, aerosols would be left in the high stratosphere by these things, and that would cut down the incoming solar radiation. So some of the first kinds of questions we asked about the sensitivity of the climate — that is, how it responds when people do something to the system — were in connection, I think, with the ozone problem and the aerosol problem, in connection with the SSTs. And so we were trying to pin down those sensitivity numbers. At that time we became pretty much aware of the uncertainties in those numbers, like factors of two, probably not factors of ten, but of the order of factors of two. And the Department of Transportation, because of its interest in the SST issue, was sponsoring a good deal of discussion on these matters within the United States.
And this would be in the late ‘60s?
Yeah, or ‘70s, something like that.
[???]. I know that in ‘70 there was a big issue about the SST, but it may have started before that.
I think it was the early '70s. I was involved in some kind of a report on some of that stuff. Yeah, in '75 there was a [???]; the Climatic Impact Assessment Program was run by the Department of Transportation. That was in '75, and it had been going on for a few years before that, so it was essentially the early '70s that there was concern about that sort of thing. And so there was this issue of trying to find out what the sensitivity of the climate was to these changes. That came first. Then came the concern about what happens with this CO2 problem.
Now were you considering [?] the ozone issue as a radiation issue?
My concern here had more to do with the aerosols and the effect of cutting down the incoming solar radiation, of changing the albedo at the top of the atmosphere slightly, rather than with ozone per se. Ozone didn't particularly enter the dynamics or the models. It has of course other consequences, but it was not one that I was particularly involved in at that time, compared to the aerosol problem. But then the CO2 problem came up. There again, of course, it was recognized that changing the amount of CO2 would change the radiative transfer properties of the atmosphere, and then the question is, what's the consequence? And that became more of a sociopolitical issue, I think. There was a fairly large amount of money riding on whatever it amounts to, and I think people were rightly concerned about how accurately these things can be determined by the models. At least the model results called attention to the problem, let me put it that way, so that people started worrying about it in a fairly serious way. It's not clear what it's all worth, and what you can do and so on, but —
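The change in radiative transfer that Leith mentions is often summarized by a simplified forcing expression from the later literature (Myhre et al., 1998), not from the interview itself; it is used here only to illustrate the kind of number the sensitivity debate turns on.

```python
import math

# Often-quoted simplified expression for CO2 radiative forcing,
# delta_F = 5.35 * ln(C/C0) W/m^2 (from later literature, not the
# interview), relative to a pre-industrial baseline concentration.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling_forcing = co2_forcing(560.0)   # roughly 3.7 W/m^2 for doubled CO2
```

The hard question Leith raises is the next step: what temperature change follows from that forcing, which depends on the model's climate sensitivity and carries the uncertainties he describes.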
In 1970 and 1971 there were SCEP and SMIC [the Study of Critical Environmental Problems and the Study of Man's Impact on Climate]. Were you involved in those at all?
Not particularly. I knew of it, and —
Because they raised the carbon dioxide issue, among other things. There were several problems, including the SST, which came up in these reports.
Yeah. [???] the CO2 problem. It's interesting. In my own family history, one of the early baseline measurements of the amount of carbon dioxide in the atmosphere was made in the early 1920s by my grandfather.
He was a physiologist, and interested in the metabolism of respiration and so on. And one of the things therefore he was interested in was one of the constituents in the air. And for that reason incidentally he made this rather careful measurement, which is in the '20s as I say, which is —
What was his name?
Benedict. Benedict. He was the director of the Nutrition Laboratory of the Carnegie Institution in Boston, or Brookline, Massachusetts.
What was his first name?
So the measurement he made was in Boston, I assume.
The measurement was made in Boston, although he also went down to Blue Hill and measured it there, and then he started worrying about what variations there might be. I think he worried about seasonal changes and things like that. Mostly, though, he was measuring a lot of the other constituents also, but that was of course one which was of interest to him. He did research on metabolism in general, so you'd find he'd be putting animals in boxes, aerated cages, and measuring what was going into and coming out of the box. And he did this for a large range, all the way from mice to elephants, if I remember correctly.
That's a big box.
That's right. I saw, when I was growing up I saw him acting as a scientist and as a laboratory director, and it looked like a good life. That probably had something to do with what I finally found myself doing.
What did your own parents do?
My parents were divorced. I was raised by my mother. So my grandfather to some extent was the male figure in my family.
So he was your mother's father.
Let's see. So, were you involved in any of these CO2 doubling experiments at NCAR?
The CO2 doubling experiments, I don't know to what extent. I don't think I ever myself did any model experiments on CO2 doubling. At that time I certainly was probably encouraging people to look into the matter, and was concerned about the uncertainties associated with it. There are statistical issues in connection with the statistical significance of the results of any climate experiment done on a climate model or looked at in the real climate. You have got to figure out how much is sampling noise and how much is real signal, and so I did publish some papers on this issue of the statistical significance of such experiments, and so —
Right. Yeah, I see there are lots of papers on predictability and sensitivity.
Well, predictability and sensitivity, yes. The predictability problem had to do with weather prediction and how rapidly the solutions diverge, and the other had to do with pulling signal out of noise in climate experiments. So the first had to do with the prediction problem, and the second with the climate sensitivity problem.
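The signal-versus-sampling-noise question Leith describes can be made concrete with a standard significance test. This is an editorial sketch with invented numbers, not output from any model he discusses: compare the mean of a "control" run against a perturbed run relative to the spread expected from internal variability.

```python
import math
import random

# Welch's t statistic: how large is the mean difference between two runs
# relative to the sampling noise in each? Large |t| means the signal
# stands out of the noise. Sample sizes and values below are invented.
def t_statistic(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

random.seed(1)
control = [random.gauss(14.0, 0.5) for _ in range(30)]     # 30 "years"
perturbed = [random.gauss(15.5, 0.5) for _ in range(30)]   # shifted climate
t = t_statistic(control, perturbed)
```

With only a few simulated years and a small imposed change, t shrinks toward zero and the experiment cannot distinguish signal from sampling noise, which is exactly the difficulty Leith's papers addressed.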
So were you working on models of your own at this point, or mostly —?
Not particularly, no, no.
You had sort of moved up into a more senior scientist role and watching younger students —
That's right. And I was starting, already I was thinking more about turbulence if anything than I was about atmospheric modeling, already when I'd gone to NCAR.
Okay. Tell me about that.
Well, I soon realized that the predictability problem was a general problem in the predictability of any turbulent flow, that is, a chaotic flow, where small effects have large consequences deterministically. And so I became interested in trying to pin down, with turbulence theories, an estimate of the rate of divergence of solutions, the rate of error growth. And so I was effectively getting more and more deeply involved in the turbulence problem. Though I got involved in turbulence before I went to NCAR, while I was still at the Livermore Laboratory. What had happened was that there was a graduate student working with me at Livermore, and he was looking around for things to work on, to get interested in. He was at that stage. And I had gone off for a few weeks to some meeting or something or other, and as I left, he was starting to get interested in the theoretical work that Bob Kraichnan had done. He was an early leader in the turbulence research business, theoretical work. As I left I warned him, I said, "I'm not sure you want to get involved in this. It gets pretty complicated. You'll get in, and I mean, I've seen it happen to other people." So I went away, and I came back a few weeks later, and he had ignored my advice and gotten involved in this stuff, and he dragged me in after him. So I became involved also in trying to understand what Kraichnan and the people in that school had been doing in connection with turbulence modeling. And as I say, that really did draw me in more and more, so I spent quite a bit of time after that working on the turbulence problems and various issues of turbulence. And in fact I'm still working on it. Last summer I spent a few months at something called the Isaac Newton Institute, which is connected with Cambridge University in England. It's an institute for mathematical sciences. They have programs on various topics, and last summer's had to do with climate issues, or weather prediction, geophysical issues.
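The error-growth idea Leith describes can be seen in the Lorenz system, the standard textbook toy for chaotic flow (a later illustration with the conventional parameter values, not anything computed in the interview): two trajectories that start almost identically drift apart at an exponential rate.

```python
# Two Lorenz-63 trajectories from nearly identical starting points,
# integrated with a crude forward-Euler step (dt and parameters are the
# conventional demo values). The tiny initial error grows roughly
# exponentially until it saturates at the size of the attractor.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-8, 1.0, 20.0)        # perturbed by one part in 10^8
for _ in range(2500):               # integrate to t = 25
    a, b = lorenz_step(a), lorenz_step(b)
error = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
# The separation has grown by many orders of magnitude from 1e-8.
```

The rate of that divergence is what Leith was trying to estimate from turbulence theory, and it is what sets the limit on deterministic weather prediction.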
So I had been invited to come and participate, as I did, for three months or so. But now, beginning not this coming year but in January '99, a year and a half away, they are planning another six months on turbulence. And so of course they have asked me to please help them organize the thing, so already I am finding myself being drawn into thinking about what the current issues are that we ought to be thinking about in connection with this six-month program at the institute in Cambridge. And although a year and a half is still quite a bit of time before it gets started, a year and a half is not an awful lot of time to do something new and significant about the turbulence problem, so I'm starting to think of [???] in those terms, to see what I can do and what people in general can do. Turbulence, in fact, everyone agrees is an important problem, but as a science administrator colleague of mine at Livermore keeps chiding me, "Of course it's important, but nobody has made any progress on it. Why waste your time?" It's considered intractable. Well, to a certain extent he is right. The present philosophy about turbulence, one that I believe and a few others have believed, is that there is no solution to the turbulence problem, because there is no single turbulence problem. Almost every configuration has its own special aspects. But what you can hope to do is to set up numerical simulations which can account separately for the different configurations of the large eddies, which are the important ones. But then the problem is, what do you do about the effect of the unresolved small eddies back on the large eddies, because everything is interacting. And so it becomes the so-called large eddy simulation; that is the jargon now for what we're trying to do.
That is a numerical model in which you explicitly describe all the largest motions, but you put in some kind of a prescription that takes into account the effect of the small stuff back on the large stuff. The small stuff you can't resolve in the model.
Right. Sounds like a viscosity issue.
Yes, it's an artificial viscosity issue, that's right. Except that we'd like to do things a little more complicated than that. One of the contributions that I and a few others have made to this is to recognize that not only are the small scales in some sense dragging energy out of the large scales, like a viscosity, a damping, but because of their random and unknown motions they are also feeding random forcing back into the large scales. So it's a kind of stochastic effect that's going on. And I have published some papers and thought a little bit about that issue. It is consistent with the predictability issue too, because some people, when they first hear about it, say, "Good heavens, you've converted my deterministic calculation into a stochastic one. You have destroyed what I thought I was trying to do." I said, "It's happening anyway, whether you like it or not." So — [laughs]
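The two subgrid effects Leith names, a deterministic energy drain (eddy viscosity) and random forcing back onto the resolved scales (stochastic backscatter), can be caricatured in one dimension. This is an editorial sketch with invented coefficients, not any operational subgrid scheme.

```python
import random

# One time step for a coarse, periodic 1-D field u. The unresolved scales
# act partly as an eddy viscosity (smoothing the resolved field) and
# partly as random forcing (stochastic backscatter). All coefficients
# here are invented for illustration.
def les_step(u, dt=0.01, nu_eddy=0.1, backscatter=0.01, dx=1.0):
    n = len(u)
    new_u = []
    for i in range(n):
        # Periodic second difference: the deterministic eddy-viscosity drain.
        lap = (u[(i + 1) % n] - 2.0 * u[i] + u[(i - 1) % n]) / dx ** 2
        drain = nu_eddy * lap
        # Random kick standing in for backscatter from unresolved eddies.
        kick = backscatter * random.gauss(0.0, 1.0)
        new_u.append(u[i] + dt * (drain + kick))
    return new_u
```

Stepped forward in time, the drain term damps resolved gradients while the kicks keep injecting small random perturbations, which is why, as Leith puts it, the calculation becomes stochastic whether you like it or not.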
It's not the mathematics, it's [???].
That's right. So anyway, that's the turbulence business that I've been in, and still am in, to a large extent.
Let's see. A couple more questions about the NCAR period. During this period one of the things that happened was a sort of move from finite difference type models to spectral models. Talk about how that happened, and what the importance of it was.
I'm not sure that it happened at NCAR particularly. It happened in the fluid dynamics community, but it was happening also in the atmospheric modeling business. The person that pushed that in my mind was more in the turbulence game, and that was Steve Orszag, now at Princeton.
Spell that, please.
He was early interested in the simulation of turbulence, and noted that spectral, or strictly speaking spectral transform, methods were going to be an efficient way of doing some of these things for a given accuracy, for a given amount of arithmetic, which is what it comes down to. The point of the spectral transform method is that the linear part you do in the spectral domain, where it's simple: differentiation becomes essentially multiplication in the spectral domain. The nonlinear terms, multiplications in particular, and these are quadratically nonlinear things that we are usually doing, are better done in configuration space, ordinary space. So it was worth it to transform back and forth every time step, and to do the right thing in the right space. So you work both in k space and x space, effectively, in wave number space and in configuration space. And that's the spectral transform technique, which he pushed and I believed in, and which is finally now what goes into current atmospheric models.
Yeah, almost all of them.
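The "right thing in the right space" idea can be sketched for one quadratic term, u times du/dx: differentiation is a multiplication by ik in spectral space, while the product is formed back in grid space. This editorial toy uses a plain discrete Fourier transform in place of an atmospheric model's spherical-harmonic machinery.

```python
import cmath
import math

# Naive DFT and inverse DFT on a periodic grid (O(n^2), fine for a toy;
# real models use fast transforms).
def dft(u):
    n = len(u)
    return [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def idft(coeffs):
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * math.pi * k * j / n)
                for k, c in enumerate(coeffs)).real / n for j in range(n)]

def spectral_derivative(u):
    # The linear part, done where it is simple: d/dx is multiplication
    # by i*k in spectral space. Signed wavenumbers; Nyquist mode zeroed.
    n = len(u)
    ks = list(range(n // 2)) + [0] + list(range(-n // 2 + 1, 0))
    return idft([1j * k * c for k, c in zip(ks, dft(u))])

n = 16
x = [2.0 * math.pi * j / n for j in range(n)]
u = [math.sin(xj) for xj in x]
ux = spectral_derivative(u)                        # equals cos(x) to roundoff
# The quadratic part, done where it is simple: pointwise in grid space.
nonlinear = [ui * uxi for ui, uxi in zip(u, ux)]
```

Transforming back and forth every time step, and doing each operation in the space where it is cheap and exact, is the whole of the technique Leith credits Orszag with pushing.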
That's right. Now, the notion that it might be appropriate for atmospheric models too was really coming out — well, the notion of spectral models was essentially coming up; Machenhauer and others in Denmark, I think, were looking into it, among others.
Spell that name, if you could.
Oh boy. That's M-a-c-h-e-n-h-a-u-e-r, or something like that.
The notion of using a spectral representation had been floating around, but in the nonlinear problems the amount of arithmetic involved, or at least the number of nonlinear interaction coefficients that you had to pre-compute, was so tremendous that it didn't look like a very feasible way to go. This notion of the spectral transform got around that. It did the nonlinear stuff in grid space, where it was simple, and it did the linear stuff in the spectral space, and you only had to worry about the efficiency of the transform back and forth; all the effort went into that. The spherical problems for the atmosphere were done in terms of an expansion in surface spherical harmonics as the basis functions for the spectral domain, and then a grid point representation for the other. And those were about the years when that looked to be a really good way of doing it. And Steve Orszag was, at my encouragement, a visitor at NCAR about the time he was thinking about this for the turbulence problem, and so he and others started looking at it for atmospheric modeling. Not only there at NCAR, of course, but as I mentioned, the people in Denmark had been looking at it also. In fact, amusingly enough, I was climbing a mountain once and stopped overnight at a hut at a high level, and met a guy I had never known before, but he happened to be from the Danish group, and we started talking about spectral transform models.
Strange place to have a conversation like that.
At the elevation of 15,000 feet on Kilimanjaro.
I thought that was amusing, sort of an accidental meeting and conversation.
It’s a very small world.
Yeah. Right. [laughs]
So the primary advantage of the spectral transform technique is that it’s computationally more efficient.
Yes. Although when it really boils down to it, we’re not sure how much more. It’s a little hard to know what you mean by equivalent accuracy with grid point versus spectral schemes, and different people use different rules of thumb. But no matter —
Why? I want to hear about that.
Well, you don't know whether a wavelength in the spectral regime corresponds to four grid points or five grid points or six grid points, or how to equate these. But there is a feeling nonetheless that, within this certain uncertainty, the amount of arithmetic involved for roughly comparable accuracy may differ only by a factor of two or three. So it's not a big effect, although it's in favor of the spectral schemes. But my feeling has always been that the biggest advantage of the spectral schemes over the grid point schemes was the sociological one. Every modeler using grid point schemes used his own finite difference approximation. And so when the models disagreed in some respect, each person would say, oh well, he used that peculiar finite difference scheme which I never understood anyway. With the spectral modelers there was no argument; essentially everybody was doing the same thing. The only differences had to do with where and how they truncated.
But otherwise there was not such a variety of possible ways of treating the numerics in the spectral domain.
That’s very interesting. Okay.
But yeah, it did become more popular, perhaps just because it's a more uniform approach. You know more clearly what you mean when you truncate in a certain way, which modes you keep and which you throw away. And that's sort of the end of the argument.
Did changes in computer technology have any effect on making that a better technique to use? [???] processors and —?
Not until the issue of parallel computing came up. Parallel computing raised the issue again, and in fact the grid point approach looks more natural for parallel computing than does the spectral approach. However, the spectral people realized that full well before they got that far, and so they have taken care of the issue of parallelizing the transforms which are required for spectral methods, so that's not really an issue any longer. Although a priori you would have thought the grid point methods would have been better; they are more obviously parallelizable.
Because you can divide the whole domain up into chunks, put each chunk on a node, and only worry about communication across the borders. And because it's border surface to volume, the communication requirement always scales less than does the arithmetic requirement in the interior, so that by choosing the numbers correctly you can always essentially avoid being limited by the communication, which is of course the price you pay for parallel computing, the communication between —
On a sphere it’s a little more complicated than it is for other kinds of hydrodynamics calculations or other kinds of calculations, but it’s essentially the principle is the same.
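The surface-to-volume argument Leith makes can be put in back-of-envelope form: for a cubic chunk of n-cubed grid points per node, the halo to exchange grows like n squared while the arithmetic grows like n cubed, so the communication share falls off as 1/n. The constants below are illustrative only.

```python
# Communication-to-computation ratio for a cubic subdomain of edge n:
# halo points scale like 6*n^2 (one n-by-n face on each of six sides),
# interior arithmetic like n^3, so the ratio falls like 6/n.
def comm_to_work_ratio(n):
    halo_points = 6 * n * n      # points exchanged with neighbors
    interior_work = n ** 3       # one unit of arithmetic per grid point
    return halo_points / interior_work

ratios = [comm_to_work_ratio(n) for n in (10, 20, 40, 80)]
# Doubling the chunk edge halves the ratio: big enough chunks per node
# keep the calculation from being communication-limited.
```

This is the sense in which, as Leith says, choosing the numbers correctly lets the arithmetic rather than the communication set the pace.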
Okay. I don’t know if this is the right question to ask you. There may be other people that this would be more appropriate for, but what do you recall about the history of climate models at NCAR? I mean, is the Washington model, which I think either is or becomes CCM-1, what happens after that?
I can tell you a little bit about the history of that. Warren Washington and Akira Kasahara were building atmospheric general circulation models early on, there's no question about that. When I became director of what was named the Atmospheric Analysis and Prediction Division at NCAR, NCAR was facing continual difficulties over what its proper role was relative to the university community. And there were a lot of political difficulties associated with NCAR. Were the NCAR scientists as good as the university research scientists? Were they competing unfairly with them? All sorts of peculiar difficulties came up. And when NCAR was first set up, I think the university people looked upon it as a center that would provide facilities to the university people, not competition.
Right. They were thinking more of UCAR [?] instead of NCAR.
That's right, that's right. They were happy to have the computers, they were happy to have the aircraft and all that stuff, but they weren't sure they wanted all those scientists. I pointed out, at some stage in this argument, which ran on for years and will probably continue forever, that it wasn't that simple. Because the only solution would be to execute the NCAR scientists. Otherwise they would just leave NCAR and go to some other university and compete with you still, so you are not going to get rid of that. Then they argued, however: well, but we have teaching responsibilities, and you people get to do nothing but research; no wonder you could do so well. So it seemed to me desirable, for those of us at NCAR in my division, to choose some activity to contribute on the order of half our time to. And at one of our retreats I threw out the notion that perhaps putting together a kind of basic climate model, a community climate model I called it at that time, as other people did, would be a good idea. That this would not necessarily be the best climate model around, but it would provide a kind of basic structure onto which other people could add stuff if they wanted to, but they wouldn't have to rebuild basic pieces all the time if they wanted to do their climate modeling experiments. They would only have to add the part of the physics that they were interested in. And so we sort of thought that was maybe a good idea, and so a few people got together, about three or four or five, something like that, and started constructing this basic piece which we called — we already imagined generations of the community climate model, and this was CCM-0 [zero]. I mean it was just a starting point.
Based on the Washington-Kasahara model?
That was certainly a large part of it. What we did essentially when we put it together was to talk to everybody we could think of and ask them for their notion about what was working and what wasn’t in other people’s experiments. And so it was largely built by a group of people at NCAR. I was sort of advising, but I wasn’t directly involved.
Do you remember who the main people were?
Let's see. Maurice Blackmon was one of them, and he is at NCAR now, effectively running the division that is the closest thing to the one I was running. Eric Pitcher was another.
Spell the last name?
P-i-t-c-h-e-r. Bob Malone. He is at Los Alamos. Those are the people that I think of offhand. They were people who were actually not so much a part of the regular NCAR staff as people who had just happened to show up and were looking for something to do, either as visitors or whatever. Bob Malone was just there for a short visit and went back to Los Alamos and continued to work on the numerical aspects of it, and so on. So these are the people that put together this first CCM-0, and I had originally imagined that there would be about one generation a year, but as it turned out it's been more like one every five years. But of course Warren Washington was deeply involved in that, and Akira Kasahara and other people were giving advice and guiding it.
Do you remember exactly when that project started?
Let me think. If I can look at the schedule here and find out. See, I was director for that division from 1977 to 1981, what I am talking about must have been sometime during those years that we got that started.
Fairly early on, I would say; if I had to guess I would say 1978, for example. Yeah, and as you know, it has gone through a number of generations now and is now called the Climate System Model or something, CSM, and it's a fairly important component of the NCAR program, and is also used to justify the acquisition of more powerful computers at NCAR. And I, just within the last few months, have seen some recent results of a long run that was made on it, and it's remarkably realistic, I must confess. Even looking at it as an outsider, I'm amazed that they are doing as well.
Yeah, it’s really beautiful stuff.
I don’t know, I think it must have been luck to some extent.
Well, there is an issue in connection with climate modeling that you may have heard of, the so-called flux adjustment.
Yeah. Yeah, yeah.
And this is a somewhat awkward issue, but they claim to have got there without it.
Right. And it’s sort of the first major model to do it.
I think that's right. However, a counterpoint in connection with this: they may have done something else. They may have adjusted parameters. I call that a sort of internal flux adjustment. You don't actually —
Yeah. So they move it from one place to another [???].
That's right. You jiggle something else until the fluxes come out right. And it's not quite the same, but it's the same sort of thing. And there has always been plenty of room for this kind of fiddling, because the things you are fiddling with you don't know from first principles what to do with anyway, so you have a certain freedom to adjust. And so I suspect that's why they are doing as well as they are, but they are doing well, there's no question about that.
During your time at NCAR, you know, you are sort of rising in the hierarchy of this place and in your career, and I’m curious about what—and you’re involved in things like the National Academy of Sciences and GARP and so on, and so you’re starting to see things that are very high level and I’m sure have some kinds of interactions with political people of various stripes. What sort of influences were there on your work on NCAR during this time?
I don't know. People sometimes raise the issue of whether there is an agenda beyond what we are actually doing scientifically. I never felt that. I don't think I, or the people I worked with, ever much cared how it turned out. I don't think I'm being naive about that. In fact, the objection I sometimes raise to people from the counter community — and there is one, Dick Lindzen and —
Lindzen and Singer, etc.?
That's right. My concern about — you know, I like Dick [???], because I like him to be there to argue about these things; it forces you to think, and he's smart enough that you know you have to listen to what he is saying. But my concern with these people is that they have not been helpful to us by saying what part of the model physics they think may be in error and why it should be changed. They just say, "We don't believe it." But that's not very constructive. And so one has the feeling they don't believe it for other reasons of a more political nature rather than scientific. It would have been much more useful to us if they said, "Not only do I not believe it, but I don't believe it because I think you have treated the cloud radiation interaction improperly, and this is a better way of doing it, and this is why." But they don't do that, you see. They leave it as, "Oh well, those climate models are no good because of the cloud radiation problem, so we don't believe them," so all of this concern about the increased global temperature is just uncertain. They won't tell us what the negative feedback is, for example, that will counter some of the positive feedbacks we are concerned with in connection with water vapor, which will amplify the effects.
Right. They are not going to put a number on it, but they know it’s there.
Yeah. So this is sort of the interface, to some extent, between politics and science. Now, one of the issues has to do with the sensitivity of the climate system to external influences, which could be revealed if you could ever do an experiment on the real atmosphere by throwing in some kind of pulse disturbance and watching the way it died away. From that relaxation you can back out some information about how the system will respond when you do things to it a little bit in general. And Lindzen pointed out that volcanoes do this for us. So there was something kind of interesting about that. However, I don't think the volcano responses were any different from what we were getting anyway, so it didn't tell us anything that we didn't know, and I think even he recognized that fact, although, so he's sort of torn between —
It took a while for him to come around.
So, yeah, but it's true some of the people in that camp provide very little science. Mostly they have political concerns, and then can point to flaws or uncertainties in the model, you know.
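The pulse-and-relaxation idea described above can be sketched with a toy linear energy-balance model; all names and numbers here are illustrative, not from any real climate calculation:

```python
# Toy pulse-response experiment: perturb a linear "climate" by -1 K and
# watch it relax; the decay time reveals the feedback parameter lam,
# which controls the system's sensitivity. Illustrative numbers only.
import math

def relax(T0, lam, C, dt, nsteps):
    """Forward-Euler integration of C * dT/dt = -lam * T from pulse T0."""
    T, out = T0, []
    for _ in range(nsteps):
        T += dt * (-lam * T) / C
        out.append(T)
    return out

C, lam, dt = 8.0, 1.2, 0.05        # heat capacity, feedback, step (toy units)
traj = relax(-1.0, lam, C, dt, 400)

# Back out the e-folding time from the observed decay; it should come out
# close to the true relaxation time C / lam.
tau_est = -dt * len(traj) / math.log(traj[-1] / -1.0)
print(tau_est, C / lam)
```

This is the sense in which a volcano's pulse, watched as it dies away, carries information about the feedbacks that set the sensitivity.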
A different sense of politics. There was a, you know, GARP is one of a number of sort of strong international cooperative projects, and to what degree do you sense political support/opposition to that idea, to what degree was the experience of working with —?
I think it spun out of the World Meteorological Organization, which, like the postal union, no one has ever effectively argued about. I think that's essentially what it amounts to. No one has ever questioned the importance of the United States being involved in the big international postal union so that you can send mail places, and I think no one ever really seriously objected to the fact that the United States contributed its share to the data flow for the global collection of data from all the weather balloons being released and stuff like this; I've never heard any concern about it. Now, I'm sure there are arguments in Congress about how much money ought to go into this sort of thing compared to other needs, but —
Have you felt, in your work as an atmospheric scientist, that there has been a strong degree of international collaboration all along?
Have there been periods where it has waxed and waned?
No, I don’t think so. In my experience.
For example, I see that in the paper that's in this book I think — or maybe it's one of your other articles. There's a reference to a fractional time step scheme devised by somebody named Marchuk, and it appears to be a question —
Marchuk. Guri Marchuk, yeah.
Journal title. Is this somebody that you had personal contact with, or —?
Oh yeah, yeah. Guri Marchuk is a mathematician. His career has paralleled mine to a large extent, because he got his degree in mathematics. He got it from the University of — Leningrad, St. Petersburg. Then he went into atmospheric modeling, and he became the director of the computing center at the Academy's Novosibirsk institute. And one of the first things he started doing, with such computers as they had, which were somewhat slow compared to what we were used to, was to start building atmospheric models. And I think he did that not out of any particular prior involvement in atmospheric problems. His background is mathematics, just as mine was, but he thought this was a challenging problem, so he started working on it and published some results on early atmospheric models and developed numerical techniques which were appropriate. Bob Richtmyer was somebody that he interacted with, and visited him often, as I did. We interacted quite a bit, from — I've forgotten now what year.
Well, let me think.
This appears in a 1965 article, so it had to be before that.
Somewhere in there, yeah. It was in the '50s, maybe, yeah. Now, he then moved up in the hierarchy of the Academy of Sciences and became the head of the Siberian branch of the Academy of Sciences, and then after a while he came back to Moscow — Leningrad, Moscow — and was finally head of the Academy of Sciences in general for the Soviet Union, and became Minister of Science and Technology for the whole Soviet Union. Well, we don't have anything quite like that in the United States, but that's essentially what he was. And I continued to visit with him occasionally during all this time that he was rising through the system. Because of his earlier involvement in atmospheric modeling, a number of people like Joe Smagorinsky and myself and others would visit occasionally and talk to him about what he was up to. And he would come on visits to various meetings, international meetings.
Were there other important Soviet scientists that you had interaction with?
Well, in the turbulence business, Alexander Obukhov was one of the — Kolmogorov is a well-known name in the turbulence business, and Obukhov was working with him at that same time, so the stuff that they did in the early '40s having to do with the minus five-thirds power law and so on, which is one of the few things that are pretty well settled in the turbulence game —
Would you mind spelling those names?
Oboukhov. Well, I’ll give you something, not very good spelling. O-b-o-u-k-h-o-v, for example. Something like that. You know, it depends on how you do it.
K-o-l-m-o-g-o-r-o-v, o-f, or whatever. Kolmogorov. Kolmogorov is an eminent Russian mathematician in other fields than just this. I mean, this was only a little bit of what he did. Obukhov in fact was his student, and Obukhov has told me that what happened was that in the early '40s, which was the war years — I'm not quite sure what was happening, but anyway, Obukhov was at some agricultural research station somewhere in the boondocks in the Soviet Union, and had invited Kolmogorov to come and spend some months during the summer, and it was during that summer that they put together the Kolmogorov-Obukhov similarity theory for the turbulent cascade, which led to the minus five-thirds power law, well known ever since. But Obukhov was a Russian member of the Joint Organizing Committee for GARP, for example. So I got to know him quite well during the time that I interacted with him on that. He was for some years. As I mentioned, from the World Meteorological Organization point of view, there has always been good international cooperation on weather problems, except during times of war when the nations stopped exchanging, putting stuff on — Well, the most recent time was that the British blanked out the Falkland Islands reports for a while some years ago. [laughs] For a while. Took it out of the system. But I think that's the only episode I know of in recent times. [laughs]
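For reference, the "minus five-thirds" result discussed here is the Kolmogorov-Obukhov prediction for the inertial-range energy spectrum of turbulence, usually written as:

```latex
% Kolmogorov-Obukhov (1941) inertial-range spectrum:
%   \varepsilon : mean energy dissipation (cascade) rate
%   k           : wavenumber
%   C           : a universal constant of order one
E(k) = C \, \varepsilon^{2/3} \, k^{-5/3}
```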
Okay. Let’s see.
Marchuk later got into some trouble, and I don't know — there are questions about what happened to him during the transition from the Soviet Union to Russia. I saw him, I think, shortly after that transition, and he was sort of making it. I was asking him about it, and he said, "Well, of course it's not what it used to be. Now I'm chairman of a consultative committee for all of the various separate academies of sciences for the different pieces."
Oh yeah, right. I was told —
So they get together every so often to talk about common problems or something, but he says, “There is no money attached to it.” Whereas before, he was in charge of all of the money that went out to all the Academy of Science institutes, he had to make decisions on. So he was sort of, he was amused by it, but he said it’s quite different than it was.
Yeah, I'll bet. Let's talk a little bit about the last part of your time at NCAR; you left there in 1983 and went back to Livermore. You became a senior scientist in the large-scale dynamics section.
Well, when I stepped down from being the director, which I had wanted to do for some time, I was not all that happy with the administrative responsibilities. And I was slow going into it; I was acting director for a while before I finally consented to take on the job in a non-acting position. But then I was spending some time looking around for other people to come in and take it over. And in fact I finally did succeed in getting somebody to come and take over. That was a fellow by the name of Rich Anthes, who came —
You may know him or have heard of him in connection with these things, because I brought him to NCAR essentially to relieve me of my responsibilities as an administrator, and he took over, and he fell into it well enough that he then rose to become director of NCAR and finally president of UCAR, which I guess he may still be for all I know. I think he is. He was much better suited for that sort of thing than I was, and he did a very good job of it, I think. I mean, I was pretty happy. At an earlier stage, when I had been acting director, I finally decided — I don't know who it was now, but I took over as the director because somebody had been suggested to be the director who was, in my mind and my colleagues' minds, so unsuitable that it seemed better for me to do it. And I'm afraid that's often how people get drawn into these things. As I say, I don't remember who it was now, but that was a —
One of the things that happened toward the very tail end of that period, maybe a little bit later, but I think it was in ‘82 and ‘83 with the nuclear winter issue, which some people at NCAR had some things to do with.
Yes. Steve Schneider was interested in that, wasn't he?
He and Starley Thompson both worked on a sort of —
Yes, that’s right.
— trying to do the same calculations that the Sagan group did, with a more complicated model, and got a much smaller effect, but it was still quite significant.
Right. No, that was certainly an interesting exercise, an important one. Of course it triggered some interest and concern at the Livermore Laboratory.
Yeah, I bet.
The trouble was that Edward Teller didn't want to believe it. But you can't just say, I don't believe it; you have to say why you don't believe it and —
The same guy who wanted [???] weather control.
Yes, we will be controlling [???].
That’s right. So, yeah, that’s a real problem which —
Did you have any involvement with it yourself?
I did not have any involvement in it particularly, although I probably was sitting in on meetings in which people were arguing about what the magnitude of this effect might be and the like. And I think it may have been Mike MacCracken who was using the climate models at Livermore to check on this from the Livermore side. You have to make a lot of assumptions about the nature of the smoke and things of that sort, which is fairly complicated, and sometimes somewhat arbitrary. But nonetheless, there is no question about the fact that it's a real potential effect.
So then you went back to Livermore. Why?
Oh. A very simple reason. I recognized along about those years that the nature of the University of California retirement system is such that I had to be back in the system for three years before I retired in order to retire with the formula using a salary which was better than the one at which I'd left, is what it amounted to, what with inflation during those years. So there was a considerable financial incentive for me to return to the Livermore Laboratory, or to somewhere in the University of California system, for three years before I retired. And I was still in touch with a number of people at the Livermore Laboratory who were happy to have me come back. So I went back for that reason. Now, it is true that I was there a lot longer than three years before I retired, and even after I retired I'm still going to work every day; I still have an office within the weapons program where I go every day. But, well, it provides me with considerable support, computer support and various other things. But yeah, that was essentially the reason. It had to do with the nature of the retirement system for the University of California. See, I left my funds in the system when I'd gone off to NCAR, and I had started working at the University of California in 1943, so it was a fairly large accumulation with the years involved. In fact, so much so that when I finally did get around to retiring, my income went up by 30 to 40 percent from that time.
Well, I had the combination of the UCAR/NCAR one separate from the University of California one. Probably there was a limit on either one, but the combination broke through the limit. So —
I didn’t think about it in those terms at the time, but it turned out that way.
Well, in the meantime, while you were gone, were other people picking up the atmospheric modeling strand —?
Not particularly, except MacCracken to some extent continued this, I think. There is an atmospheric group there, and they play around with models to some extent. Larry Gates became the head at Livermore of the so-called climate model intercomparison group, or something of this sort.
Yeah, I know about this. [???] people.
And within that framework, a number of models were brought into Livermore for purposes of intercomparison. This was a separately funded Department of Energy activity, of interest to the Department of Energy, and sort of the rule was: yes, bring in models, but don't worry about building your own. Now of course people tend to build models anyway, and I'm sure there are some floating [???] or modified from others, and so on. So there is a sort of climate modeling research group, not just Larry Gates' activity; it's a little broader than that, with other people. Bill Dannevik in particular, who used to be in the turbulence business with Orszag at Princeton, came to Livermore and is involved in climate modeling activities.
D-a-n-n-e-v-i-k. And I was sort of sitting with those people up until a year or so ago, and then I moved to another building, so I see them less often now than I did at one time. One of the projects they undertook was to move climate models to parallel architectures, and also to start bringing in ocean models and coupled atmosphere-ocean models to look at climate, bigger coupled climate models, and so there's a group of people at Livermore working on that. But none of those models can I think of as direct descendants of my model — I think their atmospheric model, if anything, is more of a descendant of the UCLA model, which is sort of a descendant of the original Mintz-Arakawa work that was done years ago. And the Livermore people, I think, still have discussions with Arakawa about the aspects of modeling problems that they come up with. So I can't say that my original Livermore modeling work ever fed into any other model directly; only the general ideas within it perhaps were made use of by people who were then devising their models, rather than any copying of the code.
Can you think of any particular people it might have had a large influence on?
Well, Warren, for example; certainly Warren Washington talked to me a lot about it. I think the motion picture we were just looking at stimulated a lot of people into believing that, yeah, it could be done, that something sensible could be done here. I think perhaps just that image encouraged people that it was not absurd to try to do something like this.
It’s amazing how powerful visual media [???].
Well, it seemed to be, yeah. It sort of reassured people it was worth trying. You weren’t going to run into particular peculiarities. It looked feasible.
Now, it’s interesting that in your career you worked primarily at Livermore and then at NCAR and then Livermore again, and these are very different kinds of institutions. Livermore classified and closed, NCAR very much an open, public institution.
What was the difference like from your point of view?
It was different. Of course at Livermore there are open parts, like the climate modeling and ocean modeling and so on, which are all open, and that's fine. And closed parts as well. Perhaps the biggest difference is that in the closed part, because it's closed, people don't publicize what they're doing. Quite the contrary. And so the attitude toward publication, and the necessity for it, is quite different. You know, it's publish or perish in one community; publish and maybe go to jail in the other! [laughs] So you see what I'm saying: that's a fairly big difference.
Publish or perish or publish and perish.
Yeah, right. I can remember sometimes, when looking at the general scientific activities at the Livermore Laboratory, some of the people who were involved in it said, "Well, we're being paraded up and down inside the fence to draw young boys in here to get them into the programs," or "We're enticing people into the laboratory that might otherwise think, 'No, I don't want to have anything to do with that,' by doing some pure publishable research, indicating that, well, yeah, there are some interesting things going on in this place, even if it is a terrible weapons laboratory." That takes some of the curse off it. Sometimes people have expressed that view in connection with the scientific programs. But no, it's really different. If you are in the open, academic community, as you know, what you are really doing is trying to impress your peers, to a large extent, with the competence of what you're doing. But in a place like the laboratory, what people are trying to do is impress their supervisor. He's the only guy that in a sense — he's the only person who knows what they're up to.
Interesting. So a much smaller audience.
A much smaller audience. That’s right. And much more dangerous, if you stop and think about it. After all, you can rub somebody the wrong way, and you are depending on the goodwill of perhaps just one or two individuals. I’m thinking now about younger people getting started perhaps. Whereas in the open community, you’ve got, you sort of develop a base of respect, or disrespect, depending on how you do it, which sort of insulates you against the whims of somebody mean [?].
Has it had a direct effect on you, this phenomenon?
Not particularly, no.
[???] when you were young.
I’ve never had any particular difficulty. I notice this more when I think about the roles that younger people are playing as I am looking at it as a division director at NCAR or as I watch people at Livermore Laboratory. But no, I’ve never had any — The last time I asked anybody for a job I think was when I went to work at the Berkeley laboratory in 1943. Mostly people talk to me about maybe wouldn’t you like to come to— So I haven’t really worried about it much, and I’ve sort of you know done whatever I felt like, is what it amounted to. When I retired a few years ago and continued to be at the laboratory, it was pointed out by some of my colleagues that now I could do anything I wanted, and I said, “Yeah, isn’t that really great?” and then I started thinking to myself, “But I’ve been doing that anyway.” [laughs] So I haven’t noticed the difference particularly. I have been relatively free to work on what I was interested in for a long time. Mostly I think that was because I picked things to work on which I thought would be appreciated by the people that I was working with in the early days, and then to some extent that seemed to have been the case. By being one of the early people in connection with the development of numerical models for simulating explosions of nuclear weapons, for example, since I was one of the first people doing it, there wasn’t anybody telling me to do it. I decided gee, I think maybe we could try something like this and see how it works.
Let me take a brief bathroom break, and maybe while I’m doing this [???].
I have to be — let me think. I'm trying to think. My parking meter is about to run out on me sometime, in some parking lot somewhere near here.
I’ve forgotten. About 3:30 or 4:00 or something like that. And how long do you want to go?
Well, I get the sense that we are almost finished, and I was really just going to ask you if there were any other areas of your career that you’d like to have recorded in this forum that we haven’t talked about.
I think we’ve covered almost everything I can think of. You’ve sort of looked at the list of things I’ve [???].
I’ll make a copy of this CV and send it in along with the transcript, so that will give people a guide to your publications and so on.
Yeah. I think — Well, why don’t you go and come back and then we can sort of wind up I guess, is what it amounts to.
Okay. [tape turned off, then back on...]
And at the Livermore Laboratory there was something called Laboratory Directed Research and Development. Six percent of the budget, at DOE's request, was set aside for sponsoring new activities, both at Los Alamos and at Livermore. Six percent doesn't seem like very much, but the budget is about a billion dollars, so that's $60 million to play with every year. And so I was involved, at various levels at one time or another, in the committee that looked at proposals for how to spend this money, and it was kind of fascinating work, because all sorts of bright ideas come bubbling up from a large number of people. But what I got particularly involved in had to do with massively parallel computing. At some point the question was raised at Livermore: what, if anything, should we be doing about this new possibility of massively parallel computing? So I was the chairman of a small laboratory committee, drawn from people all over the laboratory, different departments and divisions, and we started meeting — I think it was in the summer of 1988, and we met through the fall — and came out with a report in September saying, well, we had better do something about it, because it's coming. But that got me drawn into that whole issue of how best to make use of the massively parallel computing technology that was coming along.
Now, by that point did Connection Machines exist?
Yes. Yeah, there were [???] time, that’s right.
There were several companies working on at least small numbers of multiple processor computers.
That's right. Now, the philosophy that we came to in connection with this was that the mass-produced microprocessors were going to have a cheaper multiply than anything put together by Seymour Cray by hand [?]. That's what it came down to. So it was just cost effective, is what it really amounted to. But powerful as they were, they weren't good enough for our purposes, and so we had better gang a lot of these things together and try to figure out how to do the communication. And we moved ahead on all of this. We decided yes, we'd better move ahead, and we started to do that by getting in, first of all, some rather crude, in today's terms, early parallel processing machines, which we learned on. We learned how to do message passing between nodes and stuff like that. And when I look back on it, I would say that we made the move toward massively parallel computing fairly successfully, but it was disappointing. The reason I say that is that initially we had looked at the potential speed of microprocessors and thought we were going to get something like that. And of course it turned out we didn't. Most of the —
Because the communication overhead was so high?
Well, I think it was more than that. I think that the load time onto the chip was essentially so slow that we were not getting anything like the basic arithmetic speed [???] the chip. You get about 10 percent of it. You know, you'll have a —
Bus speed is what’s limiting you.
And you can [???], you can try to use caches and all the rest of it, but nonetheless it was slower than we had hoped, let me put it that way. And that’s still to some extent a problem, because —
That's interesting. You know, I noticed this as a new Mac user, but the new Power Macs have chip speeds of 240 or even like 300 megahertz, but the bus speed is still between 40 and 60 hertz or megahertz —
Megahertz, right. And that's essentially what's limiting it. So, well, it was a kind of disappointment to us. We were finding, you know, you'd take a 50 megahertz chip, and if you were lucky you'd get 5 megaflops out of it, that sort of thing.
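The "ten percent of peak" experience described here is the classic memory-bandwidth ceiling; a crude two-roof sketch (the function and all numbers are hypothetical, merely scaled to the era being discussed):

```python
# Achieved rate is capped by whichever "roof" is lower: the arithmetic
# peak of the chip, or what the bus can feed it (bandwidth * intensity).

def achieved_mflops(peak_mflops, bandwidth_mwords_per_s, flops_per_word):
    """Simple roofline: min of the compute roof and the memory roof."""
    return min(peak_mflops, bandwidth_mwords_per_s * flops_per_word)

# A nominal 50 MHz, one-flop-per-cycle chip fed by a bus delivering only
# 5 million words per second, running a stream-like loop doing about
# one flop per word loaded:
print(achieved_mflops(50.0, 5.0, 1.0))   # bandwidth-bound: 5.0 Mflops
```

Unless the loop does many flops per word loaded, the bus, not the arithmetic unit, sets the rate, which is the disappointment being described.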
So it was not what we were hoping for.
And that’s still to some extent a matter of concern. I think the IBM —
You said it was disappointing, but do you think it’s worthwhile in the end? I mean even if you’re disappointed?
Well, I think you have to do it anyway, and I think it’s just an issue of trying to figure out how to use the chips more effectively. And you can argue it’s a compiler problem. After all, I can write assembly-language loops to use the chips properly, and have [laughs].
You’re still doing those old tricks.
That’s right. And very much the same issues, it turns out. Nothing much new. It’s a matter of filling every time slot by issuing instructions in the right order and being sure that when some number is coming out of the arithmetic unit you are about ready to use it again. So it’s just a matter of planning. And again, it’s essentially going back to those stack loops to some extent and tuning them for —
So it’s not that much different a problem from what we were looking at years earlier. Yeah. I think that’s not been pursued perhaps as far as it might be, but, you know, people are learning that. Eventually, presumably, the compiler tech — It’s not an easy compiler problem, this business of figuring out the order in which you ought to issue instructions, and the timing and so on. It’s a technically difficult problem. Nonetheless —
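The scheduling problem described above can be sketched as a toy greedy list scheduler: given operations with latencies and dependences, issue at most one per cycle, picking whichever operation has all its inputs ready, and burning an empty slot when nothing is ready. The instruction names and latencies here are invented for illustration; production compilers solve a much harder version of this, but the bookkeeping is the same idea.

```python
def list_schedule(ops, deps, latency):
    """ops: list of op names in program order;
    deps: {op: set of ops whose results it needs};
    latency: {op: cycles until its result is available}.
    Returns {op: cycle at which it was issued}."""
    ready_at = {}   # cycle at which each op's result becomes available
    issued = {}
    cycle = 0
    remaining = list(ops)
    while remaining:
        for op in remaining:
            # an op is issuable once all its inputs are available
            if all(ready_at.get(d, float("inf")) <= cycle
                   for d in deps.get(op, ())):
                issued[op] = cycle
                ready_at[op] = cycle + latency[op]
                remaining.remove(op)
                break
        else:
            # nothing was ready: a wasted issue slot (a pipeline bubble)
            cycle += 1
            continue
        cycle += 1
    return issued

# Two 2-cycle loads feed a 3-cycle multiply, which feeds a store.
deps = {"mul": {"lda", "ldb"}, "st": {"mul"}}
lat = {"lda": 2, "ldb": 2, "mul": 3, "st": 1}
print(list_schedule(["lda", "ldb", "mul", "st"], deps, lat))
# -> {'lda': 0, 'ldb': 1, 'mul': 3, 'st': 6}
```

Issuing the two loads back to back hides one load's latency behind the other; the bubbles before `mul` and `st` are exactly the slots a hand-coded assembly loop would try to fill with other work.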
It’s a problem for the writers of model programs too, because they have to sort of relearn how to code for a system like that. I think a lot of people don’t want to put the time into doing that before they’re sure it’s going to work, and yet we don’t know for sure that it will work until after we have some successful prototypes.
But early on I wrote some massively parallel — I was interested in the communication-versus-arithmetic issue, is really what it amounted to, so I didn’t really care about the physics I was doing. I just wanted to have some arithmetic and some communication. So I wrote a simple linear diffusion calculation over a big mesh, broken down into pieces communicating information across the edges of the mesh, and got that running fairly efficiently. I had this tremendous linear diffusion problem with nothing much to do with it, because who cares; it was a test of the timing issues. More interesting, perhaps, is the shallow water code that I also use for these purposes, which is closer in that it simulates two-dimensional compressible turbulence and looks at some of the aspects of atmospheric models, for example. So in more recent years I have been using that as a kind of test bed for these things.
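The kind of test problem described here can be sketched in miniature: one-dimensional linear diffusion on a mesh split into two pieces, each carrying a halo cell on either side, with the pieces copying their edge values to each other before every step. This is a serial sketch, not the original code; on a parallel machine each piece would live on its own node and the two copy statements would be the messages whose cost the experiment was measuring.

```python
def diffuse_step(u, alpha=0.1):
    """One explicit diffusion step; u[0] and u[-1] are halo cells
    that are read but not updated."""
    return u[:1] + [u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
                    for i in range(1, len(u) - 1)] + u[-1:]

def run_whole(field, steps, alpha=0.1):
    """Reference run on the undivided mesh, zero-valued boundaries."""
    u = [0.0] + field + [0.0]
    for _ in range(steps):
        u = diffuse_step(u, alpha)
    return u[1:-1]

def run_split(field, steps, alpha=0.1):
    """Same calculation with the mesh split into two pieces that
    exchange edge ("halo") values before every step."""
    half = len(field) // 2
    left = [0.0] + field[:half] + [field[half]]
    right = [field[half - 1]] + field[half:] + [0.0]
    for _ in range(steps):
        # halo exchange: on a real machine, these are the messages
        left[-1] = right[1]
        right[0] = left[-2]
        left = diffuse_step(left, alpha)
        right = diffuse_step(right, alpha)
    return left[1:-1] + right[1:-1]

mesh = [0.0] * 8
mesh[4] = 1.0
# the decomposed run reproduces the whole-mesh run exactly
assert run_split(mesh, 5) == run_whole(mesh, 5)
```

The ratio of work in `diffuse_step` to the two halo copies is what sets how well such a code scales; the physics is deliberately trivial, just as in the experiment described.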
What machines at Livermore are you working on for [???] like this?
Well, right now, let’s see, I think they — I haven’t been using it. They have recently acquired some, well, they are using DEC Alpha chips in some combinations of things. There’s a T3D, a T3E. Those are Cray computers, which use an Alpha chip, I think, at the [???].
How many processors?
It varies quite a bit, and I don’t know the answer offhand, but it’s on the order of 128 or 256, that order. The fact of the matter is that I haven’t been doing any massively parallel computing for the last few years. I have some codes that can be used to check some of these things, old codes sitting in the system, but I haven’t really been using them much for my own work. But, as I say, I think we got that started, though I was for a while disappointed that the individual microprocessors were not being used to what I considered their full capability. I must say I think the IBM machines, the SP-2 or whatever it is, seem to have done a pretty good job on that. A year or so ago there was a competition at NCAR for the acquisition of a new computer, and on the basis of careful [???] comparisons between different proposals they had settled on a Fujitsu machine, until political considerations intervened over buying a Japanese machine, I guess.
It was interesting. We had always wondered about that issue, at NCAR and elsewhere: what happens if you order a Japanese machine because it looks a lot better? And no one would ever say don’t. Nobody would want to be put on record saying don’t. But when they actually did, the word came through: well, you’d better not. It didn’t come through as somebody saying you’d better not, though. What came through was the Commerce Department starting a dumping investigation to find out how much it had actually cost Fujitsu to build the machine they had sold for a certain price to NCAR. And that’s effectively the same thing, because an investigation will run on for a year or more, you know, and by that time the whole situation will have changed.
Then you want something else anyway.
That’s right. That’s right. So I think NCAR is moving towards Hewlett-Packard or something like that. Actually I’m not sure exactly what their decisions are. That’s just part of the issue. But it was the Fujitsu machine; I think the European center got one. It’s a good machine, and the NCAR people wanted one. It looked better by a factor of about three for the given price. And I looked into it, and that factor of three was coming from the fact that the Fujitsu people had figured out how to use a microprocessor correctly. The basic technology was the same in all these machines: microprocessors, you know, 50 megahertz, 80 megahertz, something like that. They’re all about the same. It was only that the Fujitsu people had figured out how to get more power out of the particular chip they were using. It wasn’t that the clock was particularly faster.
It has some parallels to what Cray did in its early designs, because — hold on a second. Hi, Wally. I was talking about what Cray did early on with things like, you know, designing one-foot wires to connect everything so they’d all be a nanosecond in timing, so that you get the timing for the whole machine down to the same very well ordered —
...door of the cabinet was at a certain angle open. But it had to do with the length of the cables around the hinge or something like that. It was rather disturbing that it was that sensitive.
[???] a few stories like that. I was a computer operator in the mid-’70s, and for some reason one of the tape drives we had would sort of randomly dismount; a window [?] would come down and it would turn itself off in the middle of an operation. This went on for weeks. Finally one of the computer engineers realized it was always happening at the same time of day. What would happen is that the sun would come down, the light would hit the infrared sensor, and that would cause the dismount. But we were so puzzled by it.
Massively parallel computers, that was about the last thing that I had on this first page of things I was involved in, and I think that’s about it, so —
Okay. Well, let’s quit then. As I said, you’ll have a chance to edit the transcript, and you can add things at that point if you like.
Right, right. You have my laboratory address as well as my home address, I suspect, anyway [???].
I have your e-mail address at least.
Yeah, fine, good.
And I will send you back a copy of this tape, and also probably send you some other things.