
Oral History Transcript — Dr. Akira Kasahara

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.





Interview with Dr. Akira Kasahara
By Paul Edwards
At the National Center for Atmospheric Research (NCAR)
November 3, 1998



ABSTRACT: In the interview Akira Kasahara discusses the following topics: family background and childhood; the University of Tokyo; his interest in astronomy and meteorology; the Japanese Meteorological Society; his research in numerical weather prediction and later tropical cyclones; his work at Texas A & M University, the University of California Los Angeles (UCLA), with the Atomic Energy Commission at the Argonne National Laboratory, Courant Institute, National Center for Atmospheric Research (NCAR), and the University of Stockholm; his use of computers as early as the late 1950s; and other physicists mentioned include Syono, Manabe, Jule Charney, George Platzman, Harry Wexler, Herbert Riehl, Robert Simpson, Joe Smagorinsky, Phil Thompson, Warren Washington, Chuck Leith, Yale Mintz, Arakawa, Takashi Sasamori, Bob Richtmyer, Vincent Lally, Will Kellogg, Dan Rex, Joe Tribbia, and John Freeman.

Transcript

Session II

Edwards:

So we're back for a second day of interviews with Akira Kasahara. Diane Rabson is here, and myself, Paul Edwards. And today we were going to talk about GARP, and your involvement in it. Planning for GARP began in the second half of the 1960s and you became acting director of it in 1972?

Kasahara:

No, actually, no. My involvement in GARP officially at NCAR was that I became acting head of GARP Program —

Edwards:

For NCAR.

Kasahara:

At NCAR. I think 1973 to '74.

Edwards:

Right. Right.

Kasahara:

Actually I don't know why I was asked to do so. My guess is that I had been involved in development of the general circulation model at NCAR and I had been working since 1963, and in 1971 —

Unknown:

Hi, I think there's going to be a meeting in here at 10:30, but the Damon Room is open for the whole day, so why don't you guys go move over there?

Edwards:

OK. (A break in tape)

Edwards:

OK. Actually, let's go back just a little bit and tell me when you first start to become aware of GARP and...

Kasahara:

Well, since Warren and I had been developing general circulation model and work went along quite well. In mid 1960s, around '64, '65, we started producing some useful results. Then we had a contact from Jule Charney asking NCAR to participate in GARP activities. And one of the things Charney and his company were interested in was to find out how predictable the atmosphere is. Jule Charney himself had been working using Yale Mintz-Arakawa UCLA model to find out predictability of the atmosphere. And what they did was to run one forecast, two weeks forecast, and then run another forecast with small amount of perturbations added to the initial conditions to see how much initial error grows. Now, because of the nonlinear aspect of fluid motion, the initial perturbation eventually grows. If the difference between the control and perturbation runs becomes an order of day-to-day variability of flow, then predictability is lost. Using the Mintz and Arakawa model, and I believe Smagorinsky had done a similar work, people found that predictability of the atmosphere is about two weeks. And of course that particular magnitude, whether that is 10 days or 14 days, or even 30 days, depends on the capability of the model. Namely, if the model has a relatively small variability, in other words, if the model is dull, then even though you put some initial perturbations, small perturbations don't grow that fast, while if the model is more sensitive, then small perturbations grow faster. And it turns out they, like Phil Thompson and Ed Lorenz, made some estimates using actual atmospheric data. People guessed that the predictability was relatively short, maybe sort of one week. And there was a question whether that's really two weeks or one week, it makes a big difference, and since we had been developing GCM, the GARP Committee people contacted us to study that particular aspect.
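The twin-experiment procedure Kasahara describes, a control forecast plus a second forecast started from slightly perturbed initial conditions, with predictability considered lost once their difference reaches the size of the day-to-day variability, can be sketched with a toy chaotic system. The sketch below uses the Lorenz (1963) equations as a stand-in for a GCM; the integrator, perturbation size, and saturation measure are all illustrative assumptions, not details of the Mintz-Arakawa or Smagorinsky experiments.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) equations: a tiny chaotic stand-in for a GCM."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s0, n_steps, dt=0.01):
    """Fourth-order Runge-Kutta integration; returns the whole trajectory."""
    traj = np.empty((n_steps + 1, 3))
    s = np.asarray(s0, dtype=float)
    traj[0] = s
    for i in range(n_steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj[i + 1] = s
    return traj

# Control run from a state on the attractor, and a second run with a
# tiny perturbation added to the initial conditions.
s0 = integrate([1.0, 1.0, 1.0], 2000)[-1]      # spin-up; discard the transient
control = integrate(s0, 4000)
perturbed = integrate(s0 + np.array([1e-6, 0.0, 0.0]), 4000)

# Because of the nonlinearity, the initial error grows; predictability is
# considered lost once the control-vs-perturbed difference reaches the
# order of the flow's own "day-to-day" variability.
err = np.linalg.norm(control - perturbed, axis=1)
saturation = np.linalg.norm(control.std(axis=0))
lost_at = int(np.argmax(err > saturation))      # first step where error saturates
```

As in the experiments described, a "duller" model (weaker error growth) would push `lost_at` later, which is exactly why the estimated predictability limit depends on the model used.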

Edwards:

Around what year is this?

Kasahara:

That's around mid 1960s, '67, '68 perhaps. So Warren Washington actually helped Jule Charney to run some experiment using the Mintz-Arakawa UCLA model.

Edwards:

Here? Or at MIT?

Kasahara:

I think he may have run here. You may have to ask him about that. We also tried to do the predictability experiment, but that's actually more sort of later on. Then in 1971, I took a leave of absence and spent nine months at Meteorological Institute at the University of Stockholm. There I was working with Bert Bolin, who was the prime mover of GARP not only in Europe, but also in the whole international community, working with Jule Charney and Joe Smagorinsky. So, for example, I attended a conference in Moscow in 1972, I believe, early part of 1972. And after I spent nine months at Stockholm, I had an opportunity to travel to Germany and Norway and other places to talk about some of the results of the error growth aspect as well as the results of observing system simulation experiments conducted with Dave Williamson. Now, one of the things the GARP committee was interested in was the following. After all, the observing system consists of various platforms, satellites, balloons, buoys, as well as conventional radiosonde unit networks. So you have a mix of different kind of observations, and some of our observing systems don't have complete observations of all kind, including both wind and temperature, but only wind or temperature, not like the radiosonde network. So the question is having somewhat incomplete nature of observing systems, how those data can be used to analyze the global flow patterns. After all, the major objective of GARP was to analyze global atmospheric conditions. So one of the questions is, for example, if you have only an observation of wind and using that wind data, how we can infer the temperature. Now one way to do so, as Jule Charney and Robert Jastrow was working, is to run a general circulation model. Consider the general circulation model which runs on like real time starting from one instance and with complete global conditions, you can forecast, say, next day. 
But when the next day arrives, you have observations, although they don’t have complete observations, but whatever we observe, they are real data. So a very simple-minded idea is to replace the predicted values by the observed values of whatever the variable you have observed. Say, you have only the temperature data, then you replace only the model temperature and leave other variables as predicted. So the results change. So if you continuously inject observed data along with the predicted values, and if you continue to do so, the hypothesis is that the model eventually assimilates all that mix of observations.

Edwards:

I didn't understand that last part.

Kasahara:

The calculated flow pattern is adjusted to the observed data continuously so that in between observation times, any forecast pattern is considered to be representative of the actual atmosphere. But that's only a hypothesis, so the GARP committee was very much interested in whether that kind of approach can be used to analyze a global data set. One way to test this hypothesis is to use a general circulation model and create simulated observed data, taken out of a long climate run. Then, since the observed data have errors, we add some errors to the simulated observed data and pretend that these are real observed data. Then, using a general circulation model, you start making forecasts and then whatever observations you have you start replacing predicted values by your observed data. So you could simulate an assimilation procedure and that's called an Observing System Simulation Experiment. It turns out that when you inject only wind data, sometimes it's not possible to recover the temperature field by that process. Because the atmosphere has a special kind of character, even though you inject only wind data, unless those wind data are agreeable with the calculated temperature field, those observed wind data which are injected are rejected in the next time step. Now what happened is that, say, you imagine a lake, and you perturb the water surface by throwing a stone, you find a water wave propagates out. Oh yes, when you perturb the height field, the associated flow field changes too.

Kasahara:

The height field changes temporarily, but after the induced waves are propagated out, the lake becomes calm afterwards. That means even though you tried to change the height field at a particular instance, you are unable to make the shape of the height field remain. In kind of a similar manner, in the atmosphere, when you try to change only the temperature or wind, then some perturbations may develop, but unless you force both the temperature and wind, which are balanced with each other, you may not be able to make the observed data stay in. That leads to the question of how to analyze observed data from various sources for the initial conditions of numerical forecasting models. That particular field of study is called the initialization of prediction equations. So around the 1970s, beginning of '71, '72, so on, there were rather extensive studies concerning how to utilize observed data of a somewhat incomplete nature to analyze the global weather patterns.
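The Observing System Simulation Experiment and direct-insertion procedure described above can be sketched in a few lines: a "nature" run plays the role of the true atmosphere, noisy values of one variable play the incomplete observing system, and a second model run assimilates them by simply overwriting the predicted value. The Lorenz (1963) system again stands in for the GCM, and the noise level, insertion interval, and starting states are illustrative assumptions; in this toy case insertion of the single observed variable happens to succeed, whereas, as Kasahara notes, in a real model unbalanced insertions can be rejected.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(0)
n_steps, obs_every, obs_err = 4000, 5, 0.1

# 1. "Nature" run: a long model run that plays the role of the real atmosphere.
nature = np.empty((n_steps + 1, 3))
nature[0] = [1.0, 1.0, 1.0]
for i in range(n_steps):
    nature[i + 1] = rk4_step(nature[i])

# 2. Simulated observations: the truth plus random errors, for only one
#    variable (x here), i.e. an incomplete observing system.
obs = nature[:, 0] + obs_err * rng.standard_normal(n_steps + 1)

# 3. Assimilating run: start from a wrong state, keep forecasting, and at
#    every observation time replace the predicted x by the "observed" x.
state = np.array([-5.0, -5.0, 20.0])
err = np.empty(n_steps + 1)
err[0] = np.linalg.norm(state - nature[0])
for i in range(n_steps):
    state = rk4_step(state)
    if (i + 1) % obs_every == 0:
        state[0] = obs[i + 1]      # direct insertion of the observed variable
    err[i + 1] = np.linalg.norm(state - nature[i + 1])

# When insertion succeeds, the unobserved variables (y and z) are dragged
# toward the truth as well, and the total error falls far below its
# initial value.
```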

Edwards:

Now, the work that you had done earlier on your own GCM here with those people we talked about yesterday had built some global data sets for you to use to test them out in a forecast mode. Did that feed into...?

Kasahara:

Well, that was done by old manual procedure. Using observed data and with some climatological knowledge, you essentially drew global weather maps in the same way as done in '50s. During '50s, we only had more or less regional observations and so in some areas we could analyze quite well but over the oceans and so on where there were no observations, we could hardly analyze the flow patterns. So, one of the novel aspects of GARP was to analyze weather conditions over the entire globe. But those open areas we didn't have a complete observation and one of the things NCAR contributed was to develop a device, called a Constant Pressure Balloon. Vincent Lally and his group —

Edwards:

That's right, we talked about him last night, that's L-A-L-L-Y.

Kasahara:

— developed constant pressure balloons floating around 200 millibar level. His ideas are pretty interesting. He actually drops some radiosondes from a floating balloon, so you can measure the vertical profile of wind and temperature and so on. Since the mother balloon is floating and moving freely, you never know where it's going and in fact one of the problems in those days was to avoid the balloons to get into the Soviet Union —

Edwards:

Oh.

Kasahara:

— where they tried to avoid, you know — some of the countries did not permit passage of balloons. So the floating balloons were mostly deployed in the Southern Hemisphere, where there is no such a problem. At the same time that's where it's really difficult to make observations because of relatively sparse population. And of course NCAR made substantial contributions by conducting field program related to GARP. The Line Islands Experiment and the Barbados Experiment —

Rabson:

Oh, I know those. Right.

Kasahara:

— and GATE. GATE happened somewhere around 197. . . Is it '74?

Kasahara:

Something like that, '74. In 1972, I was in Europe '71 to '72. When I came back toward the end of 1972, and then around 1973, NCAR started reorganizing GARP activity. By some reason, I was asked to serve as acting head at the time NCAR was participating in the GATE experiment. Before then the field programs, Line Islands Experiment and Barbados Experiment and so on were managed by Will Kellogg and Dan Rex. Dan Rex was Director of Facilities at NCAR and of course, Will Kellogg had been Director of LAS — Laboratory of Atmospheric Science — and I still don't know why — John Firor obviously knows — but I was asked to head that group, essentially taking over Dan Rex’s responsibility. I did that for one year. It was a purely administrative job. Actually, I was just a kind of figurehead, because Stan Ruttenberg and John Masterson had been handling matters — and also Karyn Sawyer —

Rabson:

Sawyer.

Kasahara:

She was also involved in the field program management. I think the reason I was asked this job may be to somehow try to combine the modeling activity and field program activity. I think that may be the reason. By that time I pretty much left the general circulation model development to Warren Washington, because I left to Stockholm. And I was thinking about doing something different. Actually I spent quite a bit of time thinking about how to initialize observed data, which I talked about. So one of the things I did in Stockholm was to work on the normal mode of the global atmosphere.

Edwards:

What does that mean?

Kasahara:

Normal mode is kind of free oscillations. Now like a piano, when you hit the keys you hear various tones. The atmosphere has different modes. You know when you perturb a fluid, a variety of waves start to propagate. In the atmosphere you have sound waves which propagate vertically as well as horizontally. You have gravity waves too. In addition, you have large-scale so-called planetary waves. Now, in the so-called primitive equation model, the propagation of sound wave in the vertical direction is suppressed so sound only propagates horizontally. But you have horizontal patterns associated with the propagation of waves. For a simple geometry of course you can construct the pattern of oscillations relatively easily. But when it comes to waves on a global atmosphere, their horizontal structure as well as their frequencies are not easily determined. On this subject, actually, an interesting scientific development took place starting from the 19th century. One typical, well-known phenomenon is the atmospheric tide. The tide is a mode of oscillations, excited by an outside force such as the gravitational force of the moon or heating due to the sun and so on. But here we are talking about slow propagating, weather-producing waves. For example, large-scale planetary waves were discovered by Rossby in the late 1930’s — that's why Rossby's name is attached — which are a different kind of wave existing in addition to gravity and sound waves in the atmosphere. Actually, there is a relationship between the nature of tidal motions and what Rossby found in a fairly simple geometry. But for global flow, because of the spherical geometry, finding out precisely the horizontal structure of such patterns wasn't easy until high-speed computers became available. The work done by Longuet-Higgins in 1968 is classic. But still a lot of questions remained and I was interested in trying to find out more about this particular problem.
So I started working pretty much after 1974 on the normal modes of a global atmosphere when I became senior scientist after the position of GARP acting head had finished. Now there were a lot of changes occurred at NCAR around that time. That was a time of so-called JEC era – Joint —

Edwards:

Oh, Joint Evaluation Committee. Yeah.

Kasahara:

— Joint Evaluation Committee. So I think that's a kind of interesting time at NCAR and you may find it interesting, too. But, I don't know too much about it, obviously, because I was not in the position of redirecting changes of work. My impression was that NCAR was trying to really get into the work of GARP, and there was a lot of pressure from the communities that NCAR should start analyzing global data sets after the GARP experiment was done. Now originally the GARP experiment was planned for somewhere around the mid 1970s. Actually it was carried out in 1979. So obviously the GARP committee had to make a plan for how to utilize observed data and analyze global data sets. Clearly you have to use a general circulation model to do the data analysis as I described to you. And of course, the technique of initialization was very important. So there were a lot of scientific problems as well as a lot of anticipation to use the global data set. Of course the final objective is to demonstrate, using those global data, that we can forecast the atmosphere much more accurately than before. There is the question of predictability too; from the study of predictability, we know that the atmosphere may be predictable up to two weeks. Clearly there were a lot of expectations, and I think around that time, Francis Bretherton became Director of NCAR —

Edwards:

It says here 1974.

Kasahara:

— as a result of JEC. But then my impression was that NCAR management was worried that if we could not demonstrate improvement in weather forecasting capability, and if we took that kind of task and then we could not succeed, then it could be a tremendous blow. And moreover, the argument went that NCAR should not be the place to do that kind of semi-operational task and that task should be left as a function of the Weather Service. Of course, I don't know what kind of discussions took place between some of the influential people in the US in those days. I doubt Jule Charney was involved in the discussion, because many committee members participating in the GARP program in those days were obviously concerned about how to utilize observations coming from the global weather experiment. So it looks like NCAR was shifting gear from GARP activities, rather than operational-type work, to more basic scientific research. And around '78, yes, that's right — from '78 to '87, I began to serve as Head of Large-Scale Dynamics Section to take up a more basic research, which obviously aimed at contributing to the GARP program but not taking over any of GARP responsibilities. So from '78 to '87 or so, pretty much my effort was directed to do basic research on various large-scale aspects of atmospheric motions in support of GARP.

Edwards:

Let me just be sure I'm clear on what you just said about NCAR's attitude towards GARP after 1978.

Kasahara:

Yes, I hope that Diane some time may go over on such policy question, if we're going to talk about the history of NCAR.

Rabson:

Yes, we’ve got quite a bit of information about the JEC and different viewpoints —

Edwards:

And so what you're saying is that the fear was that if NCAR became too deeply involved in GARP projects that were related to improving weather forecasting and if that failed, that would be a blow to NCAR as a scientific research organization, so NCAR started to emphasize more basic science — distance itself. Oh, OK.

Kasahara:

I think that was my impression. Whether that's what management felt so or not, I don’t know. From my perspective I liked that idea, too. I feel that NCAR should not be a semi-operational institution, but doing more of basic research to assist the Weather Service. So during that period, my research interest was to solve the initialization problem. Also, the actual GARP observation stage was completed during 1979, so-called the Global Weather Experiment. Then National Academy's National Research Council decided to continue the GARP research for the next ten years. That means '79 to '88 or something like that. And during that period we had a number of data coming in starting from '79 after the GARP experiment was conducted. One thing the Large-Scale Dynamics section was interested in was to analyze the data like Dave Baumhefner had done. You probably like to ask him about his effort to analyze GARP data and we continued to work on the question of predictability. That's to try to find out more precisely what is the length of possible forecast. And one of the things during this period, as I said as one of major activities, was to initialize data sets for weather forecasting, because we found that a simple-minded idea to insert data into a running general circulation model was not a viable option.

Edwards:

Is that true? It's not a viable option because...

Kasahara:

Unless we add more other procedures to let data to stay in the model rather than being rejected. You know, there is an interesting analogy I guess to organ transplant (laughter), namely if you put the data into a running model you have to avoid the data being rejected instead of . . .

Edwards:

And this is because the data is incomplete and the model is generating its own calculations of some variables that are not available in the data, so when you put the data in the model the parts of the model that are not consistent with the data are going to drift away from the data.

Kasahara:

When there is a substantial discrepancy between observed data and predicted value, then this observed data becomes like throwing a stone. But if the surrounding values are also adjusted to conform to this piece of data then it can stay. So there were a lot of studies made during the '70s on how to eliminate production of noises. And one notable development was called the normal mode initialization technique which is somewhat technical. Ferd Baer of the University of Maryland and Joe Tribbia here both developed a technique of so-called normal mode initialization. Now what I mean is this. Let's suppose you prepare a data set by performing some simple data analysis. By a simple data analysis, I mean you can think of replacing predicted data by observed data. So you have a kind of very rough analyzed data set. Now, the purpose is to utilize the observed data without being rejected. So one thing you can do is to analyze what kind of noises show up from this data set. If you analyze it in terms of the normal modes, then you can see how much is contained in planetary-scale motions and how much in the gravity wave motions. Since the acoustic waves in the primitive equation model do not propagate vertically, we can forget about the acoustic mode and just try to separate between the gravity mode and planetary-scale motion. Then, we eliminate the gravity wave portion from the analysis. So a simple-minded view is to take the gravity wave mode completely out. Now gravity waves propagate very fast and produce a kind of noise. When you take them out in the numerical model then you hope that gravity waves should supposedly not appear. But the reality is that when you put only the large planetary scale component into the numerical model, because the model is capable of producing the gravity wave component during the time integration starting from such initial conditions with only large scale planetary motions, gravity waves will start showing up in the forecast.
So instead of filtering out gravity wave noise completely, the idea is to leave some of the gravity wave component in but try not to let it grow. So rather than taking out the gravity wave component completely, we should leave something in, but we make sure that this gravity wave component will never grow. Now of course we are talking about relatively small magnitude things, but nevertheless when the proper initialization is not performed, then the noise starts growing in the forecast. So this technique of so-called normal mode initialization is able to control the growth of the gravity wave component so that when the forecast is done you don't see much noise in the flow patterns. By the way Chuck Leith was the director of Atmospheric Analysis Division...
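The filtering step Kasahara describes, expanding the initial data in the model's normal modes and suppressing the fast gravity-mode part, can be sketched on a toy linear system. The matrix below is constructed for illustration, not derived from the primitive equations: one slow, planetary-like oscillation and one fast, gravity-like oscillation, mixed together by a random transform so that each physical variable carries both kinds of motion. This is only the "simple-minded" linear filter he mentions; the Baer-Tribbia nonlinear normal mode initialization goes further, keeping a small, non-growing gravity component rather than removing it outright.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model d/dt x = A x, built (not derived) to contain one slow,
# "planetary" oscillation and one fast, "gravity" oscillation. The random
# similarity transform S mixes the two, so the physical variables carry
# both kinds of motion at once.
slow, fast = 0.1, 10.0                      # illustrative frequencies
B = np.zeros((4, 4))
B[0, 1], B[1, 0] = -slow, slow              # slow oscillator block
B[2, 3], B[3, 2] = -fast, fast              # fast oscillator block
S = rng.standard_normal((4, 4))
A = S @ B @ np.linalg.inv(S)

# The eigenvectors of A are the normal modes; the imaginary parts of the
# eigenvalues are the mode frequencies.
w, V = np.linalg.eig(A)
is_fast = np.abs(w.imag) > 1.0              # flag the "gravity" modes

def initialize(x):
    """Linear normal-mode initialization: expand x in the modes of A and
    zero out the coefficients of the fast (gravity) modes."""
    c = np.linalg.solve(V, x.astype(complex))   # modal expansion
    c[is_fast] = 0.0                            # filter the fast modes
    return (V @ c).real

def propagate(x, t):
    """Exact solution of d/dt x = A x via the eigen-decomposition."""
    c = np.linalg.solve(V, x.astype(complex))
    return (V @ (np.exp(w * t) * c)).real

x0 = rng.standard_normal(4)       # "raw analysis": contains both modes
x0_bal = initialize(x0)           # initialized (balanced) state

# The raw forecast carries a rapid gravity-mode oscillation on top of the
# slow signal; the initialized forecast keeps only the slow motion.
times = np.linspace(0.0, 20.0, 501)
raw = np.array([propagate(x0, t) for t in times])
bal = np.array([propagate(x0_bal, t) for t in times])
```

In this linear toy the filtered gravity component stays at zero forever; in a real nonlinear model it is regenerated during the integration, which is exactly the problem the nonlinear initialization schemes were designed to handle.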

Edwards:

Atmospheric Analysis and Prediction Division.

Kasahara:

Chuck was Director of our division around that time and he was very much interested in that topic. In fact, he himself made significant contributions on the initialization question.

Edwards:

Huh. I didn't realize that.

Kasahara:

So far I talked about an improved technique of initialization, but I haven't talked anything about how to analyze the weather maps. Now there is another story about so-called optimum interpolation techniques.

Edwards:

Were you involved in that, too?

Kasahara:

I didn't work on that, but what I was going to say is that for analyzing the global data set, there were many efforts related to the initialization problems. During ten years after the execution of GARP, many techniques were developed for the analysis of data sets. We thought that with these techniques the problem of analyzing global data was pretty much finished. The actual analysis work was carried out at NCEP (National Centers for Environmental Prediction) of NOAA, which used to be called the National Meteorological Center in Washington, D.C. and also ECMWF (European Centre for Medium-Range Weather Forecasts). By the way, the creation of ECMWF was a significant milestone related to GARP. After this initialization technique was developed we thought that all the problems of analyzing data sets were solved. In fact, I think GARP’s ten year work was pretty much finished at the end of the '80s. Now, getting into the '90s, people started realizing that there was still a problem with the initialization developed in the '80s, mostly for dealing with the tropical circulations. When we say that the data are expanded in the normal modes, the procedure uses the normal modes which are based on free oscillations, not on forced oscillations. So when you expand the data using free oscillations, the data set is consistent only with the free oscillations. So as long as the model doesn't have forcing, then everything works out very well. But the tropics behave quite differently from the middle-latitudes, because the tropical circulation is an essentially forced circulation mostly due to the heat source coming from the release of latent heat of condensation. In the middle latitudes, the major energy source comes from, say, temperature contrast, so it's dynamical in nature. In the tropics you have very large imbalance between what you get from the sun and what you lose from the earth by long wave radiation. So that imbalance has a great deal of —

Edwards:

That's a positive imbalance in the tropics.

Kasahara:

So it's a positive, you get a negative. . .

Edwards:

The pole is negative.

Kasahara:

And as a result of absorbing solar energy, you heat up the surface and as a result you get a lot of convection. When you have moist convection you release the latent heat of condensation. And that becomes a heat source and then a meridional circulation develops. That becomes so-called Hadley circulation which connects motions between the tropics and the mid-latitudes. So when you do the initialization, you have to take into account how much energy is released in moist convection. But the problem is that we hardly have any observation of heat released in convection. So we don't know much about how much energy is released.

Edwards:

This is still true? This has always been an issue in atmospheric modeling and observations in the tropics but …

Kasahara:

Well, obviously in studies from 1990 there were many efforts made, trying to alleviate that situation. One of the prime examples of that effort was the Tropical Rainfall Measuring Mission….

Edwards:

What's the acronym?

Kasahara:

T-R-M-M. TRMM. This work was conducted by NASA to make a quantitative measurement of precipitation in the tropics on a global scale. What is unique about this particular program is that there is radar on board a satellite. The radar is an active measurement device. Now most other satellites measure the outgoing long-wave radiation. When the radiation is emitted from tall clouds, then of course temperature is lower. In the area of no clouds you have warmer temperature. So from the difference in temperature you can see some indication of where clouds are. But with the radar you send a beam and then you get it returned, these are active measuring devices. The device requires a lot of power to send the signals and so that's why it's only recently that people have tried to use radar to measure the amount of precipitation in clouds. Now the forecast model was also becoming more accurate so that the model can infer where precipitation is occurring much more accurately than in the 80’s. Nevertheless the question of how to calculate convective precipitation in the global model is still a big issue. So how to take into account thermal forcing in the initialization problem was one of the topics I worked on. But then the climate modeling really started developing around the 1980’s also. But I think, as a result of GARP, people are paying more attention to the global nature of flow patterns. As the impact of human activities on weather became gradually known, then of course climate modeling activities became more active.

Edwards:

Have you had any involvement in the climate change modeling efforts?

Kasahara:

I had participated in SST — I forget what the name of the program was — it's an effort to try to evaluate the impact of supersonic transport on climate.

Edwards:

Oh, yeah. Right.

Kasahara:

But I was not an active player so I just participated in a couple of conferences. . .

Edwards:

Why did you —

Kasahara:

— because of my involvement in the GCM development.

Edwards:

Yeah. On an SST issue exactly what did you do?

Kasahara:

Actually we haven't done any experiments. Besides I thought a lot of questions were on chemical issues, but not so much dynamical in nature, so actually I haven't done any work. But I think, from the 1980’s on, I started to see the merging of weather forecasting business and climate simulation business along with the improvement of modeling.

Edwards:

Yeah. So to the point that some places like the Hadley Centre are trying to build a single model that'll do both forecasting and climate...

Kasahara:

In fact, I think that's an interesting current issue. Is the model developed for medium-range forecasts also a good model for climate simulations? Maybe the answer is no. The reason is that for extended range forecasting, a range of two weeks or so, the influence of initial conditions is very significant. While if you're talking about ten years or 100 years, the influence of initial conditions is virtually gone. In fact, I would say the effect of initial condition will disappear within a month or so in the atmosphere. Now, in the ocean, things may not be the same. In the ocean, the effect of initial conditions may last as long as 100 years or more. And so talking about say, seasonal forecast, I think it's very tricky because the ocean, talking about seasonal period, the initial condition of the ocean is still actively participating, while in the atmosphere the effects of initial conditions are probably forgotten within a month or so. So the forecasting of atmosphere and ocean in a range of let's say one year I think is a very complex problem.

Edwards:

I just want to note here that there's an interesting symmetry in your career: you started off working on tropical cyclones, and now, especially for the last few years, you have been working on tropical cyclones again.

Kasahara:

Yes, well, I can talk about that. Shall we?

Edwards:

Yes.

Kasahara:

About ten years ago, UCAR initiated the so-called MECCA Project, the Model Evaluation Consortium for Climate Assessment. In response to the global change question, people in the electric power industry in the United States, Japan, the Netherlands, and Italy started to take an interest in the impact of global change on climate. So those interested people wanted to ask NCAR or UCAR to do research on the question of global climate change. What they did was to contribute funds for a supercomputer and then invite scientists to submit proposals to run climate change related problems. They set up a committee to review proposals and accepted ones which aimed at the understanding of global change issues. But the proposals were fairly broad, not just specific ones, because they were very broad-minded. So it really contributed to basic research as well. As a result, NCAR was given one Cray computer dedicated to the MECCA activity. The project lasted about five years, with the final report coming out a couple of years ago; Ann Henderson-Sellers and Wendy Howe finished the final report. It turned out that one of the participants was the Central Research Institute of Electric Power Industry in Japan. They realized that the global change issues are very complex and that it's not so simple to just ask questions and get advice and so on. The Central Research Institute of Electric Power Industry, CRIEPI for short, decided to build their own research capability to understand the global change issue, focusing on the impact of climate change in Asia. For example, what is the impact of global climate change on typhoon activity? They contacted NCAR and found that Filippo Giorgi was developing a regional climate model in those days. He later left NCAR, so he is not here now, but at the time he was working on regional climate modeling.

Edwards:

This is a mesoscale model?

Kasahara:

Mesoscale. So CRIEPI sent one scientist, Dr. Hirakuchi, to work with Filippo Giorgi. That was about five years ago or so. Hirakuchi stayed at NCAR for two years and worked with Filippo's staff. What Hirakuchi did was to make regional climate runs over Asia using the model developed by Filippo Giorgi. Filippo Giorgi was working mostly over the US and Europe, so this was a good opportunity for him to study weather over Asia, and in fact they wrote a paper together. But toward the end of his stay, Hirakuchi told me that he was interested in the impact of global warming on typhoons. He showed me some of his efforts along that line. So I was quite interested, because my doctoral work was on tropical cyclones. I said that I might be able to work with them on the problem of tropical cyclones. Now, of course, the purpose was to find out the impact of global warming on tropical cyclones. That's a terribly difficult problem, but I said that if they were truly interested in doing the work and wanted to really contribute to the scientific community on this issue, then I would be very interested in helping them, because this is toward the end of my career, and I don't need to take on anything which I don't feel is worthwhile. (laughter) But if they really felt that they would do it in the way I feel it should be done, then I didn't have any reason to say no. So I started this discussion with them, and the first thing they did was to send Junichi Tsutsui to work with me for two years. The first thing we did was to find out whether the general circulation model can indeed produce tropical cyclones or not. So when he came we started to look at long runs of CCM2, the Community Climate Model version 2. We looked at a 20-year run, day by day, and counted to see whether the tropical cyclones we wanted to see had appeared or not.
Then we found that tropical cyclones indeed appear in the long run, and the result shows a very nice frequency; even the seasonal variability agrees very nicely with the observed data. The only differences were that the tropical cyclones looked a little bit larger than the real ones and that the duration of the systems was too short compared with observations. Though we knew that the T-42 resolution of CCM2 is not ideal, we were very encouraged by finding that tropical cyclones appear in CCM2. Encouraged by that result, CRIEPI now felt, OK, why not have NCAR conduct a global warming experiment. Actually, it so happened that the climate modeling section was very much interested in running the newly developed climate system model (CSM) to conduct a CO2 experiment, namely to run one hundred years with an increase of CO2 of 1% per year. So the climate modeling section staff and I worked with CRIEPI scientists, with CRIEPI providing computer time, and used the NEC SX-4. That experiment was done in the spring of last year. We had quite a successful simulation, and we saw an increase of global mean surface temperature of nearly 2 degrees in 100 years. Now, one of NCAR's achievements was to run the CSM without flux correction, and so I felt that it was very credible and everyone was happy. Now we had the records of weather changes at the time of CO2 doubling. So if we compare the statistics of tropical cyclones at present and 100 years later, when doubling of CO2 occurs, then we can see how the behavior of tropical cyclones changes. But when we analyzed the CO2-doubling run, what surprised us was that no tropical cyclone appears, period. Now, the CSM uses CCM3, which is an improved version of CCM2. So we were in a tough situation: after making the experiment we knew CCM3 is better than CCM2, but for finding tropical cyclones there was no way we could use this run. So the question is why CCM2 produced tropical cyclone-like vortices but CCM3 did not.
One of the differences between CCM2 and CCM3 is the use of different cumulus parameterizations. CCM2 uses the cumulus parameterization scheme developed by Jim Hack. CCM3 in fact uses two different schemes, one developed by Zhang and McFarlane and the other by Hack.
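
[A quick arithmetic note on the 1%-per-year CO2 scenario described above: the increase is compounded, so the concentration after t years is c0·(1 + rate)^t, and the time to doubling follows directly from logarithms — about 70 years, well within the 100-year run. A minimal sketch:]

```python
# Compound growth arithmetic for the 1%-per-year CO2 scenario: the
# concentration after t years is c0 * (1 + rate)**t, so doubling occurs
# when (1 + rate)**t = 2, i.e. t = ln(2) / ln(1 + rate).
import math

def years_to_double(rate=0.01):
    """Years for a quantity growing `rate` per year (compounded) to double."""
    return math.log(2.0) / math.log(1.0 + rate)

t_double = years_to_double(0.01)  # about 70 years, within the 100-year run
```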

Edwards:

You mean, John?

Kasahara:

Z-H-A-N-G and McFarlane. Their scheme was developed for handling deep convection, whereas the Hack scheme was for shallow convection. CCM3 uses both schemes, but in CCM3 the deep convection scheme seemed to dominate over the Hack scheme. One of the reasons the NCAR modeling section felt that CCM3 performs better than CCM2 was that CCM3 calculated a better heat budget and also showed an improved climatology of flow patterns. Now, as far as the mean conditions are concerned, those comparisons were all done in terms of monthly or yearly means. People don't look at comparisons of daily flow patterns, which are very complicated to simulate, but I think people have started looking at the simulation of the Madden-Julian oscillation in the tropics. It is a unique tropical circulation system which propagates eastward from Asia with a period of 40-50 days. Apparently, CCM2 produces the Madden-Julian oscillation in a reasonably realistic fashion, while CCM3 was deficient in that aspect. However, CCM3 showed a very nice energy balance and produced a good mean climate. I don't think that CCM3 was deficient for not producing tropical cyclones; rather, the grid resolution of the CSM, namely T-42, which is about 300 km in distance, is simply inadequate to describe tropical cyclones. Although the overall scale of a tropical cyclone is 1500 km or something like that, the size of its core is very small. So when you use a 300 km mesh, obviously it is not easy to describe smaller weather systems properly. So in some sense the fact that we found nice tropical cyclone-like activity in CCM2 may be — I wouldn't say for a wrong reason, but you could have a model which is better in one aspect but not so good in another aspect. Unless the model has a fairly fine resolution, you may not be able to meet both goals. So my feeling now is that trying to simulate tropical cyclones with the T-42 resolution CCM3 was asking too much.
So my question is what will happen if we increase the resolution of CCM3. Actually, I found that the difference in the performance of CCM2 and CCM3 comes mainly from how the use of the deep and shallow convection parameterizations is partitioned, not from differences in other physics or dynamics. So we increased the grid resolution of CCM2 to T-170, roughly an 80 km mesh, and made forecasts, not with the original Hack cumulus scheme but with an Arakawa-Schubert type scheme. It turns out that the use of the Hack scheme in T-170 CCM2 produced too intense tropical activity. So I am about ready to look at the runs for hurricane and typhoon forecasts. I feel now that if we increase the resolution of CCM3 to T-170, I'm hopeful that CCM3 will produce tropical cyclones. Then the next task is to find out the impact of global warming on tropical cyclone activity. Sooner or later we hope to perform the CSM experiment with a high model resolution. But if not, we simply have to specify the sea surface temperature at the time of global warming and use that data set to drive a higher resolution CCM3. So what I'm trying to do now is to perform T-170 CCM3 long integrations for ten years or so. Actually, that's an extremely expensive proposition, but fortunately the CRIEPI people made a proposal to the Science and Technology Agency in Japan for machine time. So they will have funds to use the NEC SX-4, and right now I am involved in the negotiation between CRIEPI and here to make a long run with the T-170 CCM3.
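
[The grid spacings quoted for the spectral truncations in this discussion (T-42 about 300 km, T-170 about 80 km) can be roughly reproduced from the truncation number alone. The sketch below assumes the conventional quadratic Gaussian grid for a triangular truncation T-N, which needs at least 3N + 1 longitudes; actual model grids use nearby FFT-friendly sizes (e.g. 128 longitudes for T-42, 512 for T-170), so these numbers are approximate.]

```python
# Rough equatorial grid spacing implied by a triangular spectral truncation
# T-N, assuming the quadratic Gaussian grid (at least 3N + 1 longitudes).
EARTH_CIRCUMFERENCE_KM = 40075.0

def equatorial_spacing_km(truncation):
    """Approximate equatorial grid spacing for truncation T-`truncation`."""
    nlon = 3 * truncation + 1  # minimum alias-free number of longitudes
    return EARTH_CIRCUMFERENCE_KM / nlon

dx_t42 = equatorial_spacing_km(42)    # roughly 300 km, as quoted
dx_t170 = equatorial_spacing_km(170)  # roughly 80 km
```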

Edwards:

OK. Well, that brings us up to the present, and for the last question, the one I like to ask people in interviews like this: if you had to choose the two or three things that you think have been your most important contributions to climate science, what would you say?

Kasahara:

Well, you know, looking at what I have done, it is extremely difficult to say which one I really like. I think sometimes when you have many children, people ask which child you like best —

Edwards:

(laughter) Which one is your favorite?

Kasahara:

— and I really feel the same way. I think the reason is that irrespective of how the result came out, and also of the reception of a particular piece of work by the community, you really put your effort into every step. Now, of course, I didn't pursue all of the topics with the same intensity, but for all of the major activities, of which I have at least five or six, I put in pretty much the same intensity. We could talk about which topic was most effective in terms of acceptance, but that's a different story; you know, very often the most cited paper is not necessarily the one that the author likes best, even though the author may feel tremendous pride in some rarely cited paper. So I think the two things do not match.

Edwards:

What is the work that you think has had the most impact in that sense?

Kasahara:

I haven't looked at my citations, so I have to think now. For example, I did a lot of normal mode work which I enjoyed, but because the topic may be somewhat too specialized, or things are a little bit complicated, it has not received as much attention as I would have liked. Instead, some of the simple things I have done turn out to be more cited. So it's not simple — it's very complex. But I enjoyed all of the work I have done and very much enjoyed working here; it's a tremendous organization. NCAR is a very exciting place to work. Well, you know, when I joined NCAR I was thinking about going back to Japan after working a few years here, but as you know, in the '60s the tremendous development of GCMs happened, and NCAR provided us computer power; literally, Warren and I used 70% of the whole machine in one year for the GCM. I think that kind of opportunity kept me going here.

Edwards:

OK. Well, thanks very much for an excellent interview.

Kasahara:

Well, it will be very difficult, I know, to transcribe…
