This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Pawan Bhartia by Steve Norton on 1999 June 11,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
The primary focus of the interview is his work on ozone.
Today is June 11, 1999, about 10:30. My name is Steve Norton. I’ll be interviewing Dr. Bhartia concerning NASA’s detection of the Antarctic ozone hole. And just to confirm what we discussed over the phone and e-mail: the interview is going to be taped and transcribed; once it’s transcribed you’ll have a chance to look at it, make corrections and additions, and then it will be archived in the Niels Bohr Library at the American Center for Physics. Is that okay?
Okay, that’s fine.
I’d just like to begin by getting some background information so far as your education, undergraduate, graduate, and how you got involved in atmospheric sciences.
I got my master’s in Physics in India and I came here in 1967. Got my Ph.D. from Purdue University in Solid State Physics. Then after that I got a job in 1977 with a private company, a local contractor, to work on the ozone problem. At that time there were a lot of data taken from the Nimbus-4 BUV satellite, which was the first ozone satellite that NASA launched, and they were looking for scientists who could work on the data, process it, and produce ozone maps from that data. So that’s what I worked on. I did most of my early work writing the programs to process the satellite data into ozone values. That’s what I worked on for many, many years.
So you didn’t come directly into NASA?
No, I came to NASA in 1991.
What was the name of the company you were working for?
The company’s name changed many times. Originally it was called SAS, Systems and Applied Sciences. Now it’s called Raytheon. But I left the company in 1989 or something.
How did you actually hear about the work that was being done on the ozone?
It was advertised in the Washington Post so I just applied.
What year was that exactly, again?
That was 1977.
When you came to, was it SAS (Systems and Applied Sciences), there was the ozone processing team there? Had it already been formed?
Yes, Al Fleig was in charge of the ozone processing team. It had been established, and they were looking for Ph.D. scientists to work on the data.
So you started working on the algorithms then?
That’s correct. I developed the total ozone algorithm for the BUV experiment that was done in there, and later on the profile algorithm. So I wrote most of the code to process the data early on.
You basically did that by yourself then?
That’s correct. We had consultants. There were two individuals: J.V. Dave from IBM and Carlton Mateer from Canada. They were the consultants to the ozone processing team, and they wrote the initial algorithm before the OPT was set up. They remained consultants for another ten years or so, so they would occasionally come and visit us. But most of the work was done here at Goddard.
I think I had come across a number of Mateer’s papers. He had done a lot of work in the ’60s.
That is exactly right. In fact, there were three individuals who put together this entire program: Donald Heath, J.V. Dave, and Carlton Mateer. In the late 1960s they were the ones who started doing measurements like this from space. Actually, before that, Fred Singer from the University of Maryland had suggested the technique. But the actual instrument was put together by Don Heath, J.V. Dave, and Carlton Mateer.
What was the second person’s name?
J.V. D-a-v-e. It’s not pronounced Dave, it’s [Dahvay].
Didn’t someone by the name of Twomey do some profile work for you guys?
Twomey. He may have done some work on ozone, but primarily he did the early work on remote sensing in general: how you take remote sensing measurements of temperature, ozone, and what have you from a satellite and then convert them into geophysical parameters like temperature distribution or ozone distribution. So I think he worked more on the general theory of the method. He did not, I think, do very much specifically on ozone. He may have written one paper; I’m not quite sure.
What’s the major difference between the algorithms for the profile retrieval and for the total ozone?
Total ozone is a much simpler technique. It’s based on the ground-based technique called the Dobson technique, where you look at the sun to measure the total column ozone. Basically the satellite does a reversal of that; you’re doing it from space. But the basic techniques are very similar, and you don’t have to make too many assumptions about the atmosphere in doing that. It is, I would say, a robust method. Whereas the profiling technique, as people found out right after the weather satellites were launched, is what is called an ill-posed problem, where there is no unique solution and you have to use some mathematical constraints to solve the problem. It still is a difficult problem to solve: how do you get a profile from remote sensing data? But the total ozone problem, I think, is a much more straightforward method.
Two things. Isn’t the total ozone problem, deriving total ozone values, isn’t that an ill-posed problem, too?
All remote sensing problems are ill-posed in some sense. Even the Dobson measurements of the direct sun are in some sense ill-posed. But total ozone is far more robust. You make fewer assumptions to do total ozone than you do for the profile.
And the other thing, too, is when you’re doing the total ozone algorithm aren’t you using profiles in order to derive total ozone values?
That’s what I said. Because it’s an ill-posed problem in a fundamental sense, you have to use some operating assumptions, as they call them. You have to assume something about the profile, something about how ozone is distributed in the atmosphere, in order to even do total ozone. That is also the case for the ground-based Dobson stations: they cannot derive a unique number unless they know something about the profile. But they don’t have to know a lot of details about it. All they need to know is that most of the ozone is around 20 or 25 kilometers. If you were on some other planet where the ozone was near the surface, as is true in the case of Mars, for example, where there is ozone and most of it is near the surface, you would have to use a totally different technique. So you do require some assumptions, some knowledge about roughly how the ozone is distributed in the atmosphere. But you don’t need to know a lot of details about it precisely.
So just for the ozone profile retrieval you need a lot more assumptions.
The ozone profile needs a lot more assumptions. That is correct.
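[Editor's note: the ill-posedness described above can be illustrated numerically. The sketch below is mine, not the OPT code; the weighting-function matrix K, the profile shape, and the Tikhonov constraint are hypothetical stand-ins for the real radiative transfer problem. The point is that broad, overlapping measurement kernels make direct inversion hopeless, while a mathematical constraint yields a bounded solution that still fits the measurements.]

```python
import numpy as np

n = 20
z = np.linspace(0.0, 1.0, n)
# Broad, overlapping kernels: each measurement is a smooth average of the
# profile, which is what makes the inversion ill-posed.
K = np.exp(-((z[:, None] - z[None, :]) ** 2) / 0.1)

# The condition number of K is astronomically large, so noise in the
# measurements would be wildly amplified by a direct inversion.
print("condition number of K:", np.linalg.cond(K))

x_true = np.exp(-((z - 0.5) ** 2) / 0.02)   # an ozone-layer-like bump
y = K @ x_true                               # simulated "radiances"

# Tikhonov regularization: min ||K x - y||^2 + alpha ||x||^2.  This is
# the kind of mathematical constraint the interview refers to.
alpha = 1e-3
x_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

print("max |x_reg|:", np.abs(x_reg).max())
print("fit residual:", np.linalg.norm(K @ x_reg - y))
```

The constrained solution stays bounded and reproduces the measurements closely, whereas the unconstrained problem has no stable solution.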
When were the different algorithms actually completed and ready for implementation?
Well, the algorithm process is basically an iterative process. You start with a simple algorithm; there was the original Dave and Mateer algorithm. Then we modified it and kept on modifying it. We’re still doing it, in fact; after 22 years we’re still in the process of modifying it. As we learn more and more about the atmosphere we can improve the algorithm and ultimately start to see smaller and smaller features, or subtle features, in the atmosphere that you could not see before. So as you improve your algorithm, you improve how you handle things: clouds are a big problem, and how you handle dust, for example, is another issue. The atmosphere is extremely complicated, with a lot of different things going on at the same time. As you start understanding them better and better, you can improve the algorithm and start to see subtle features in the data that you could not see before. So it is an iterative process. It has gone through at least three major revisions since the original Dave and Mateer algorithm: three major revisions and a lot of minor revisions.
Because I’ve seen version 7 or something like that?
Yes, right now we’re using version 7, but only the last three versions were really major revisions. The early versions were more minor revisions. So I would say that versions five, six, and seven are the ones that have been released to the public, or to the science community.
When the data comes down from TOMS then, when it gets sent down from the satellite, what station does it actually go to? Is it the Wallops Island station?
Well, actually the data comes to what they call the Deep Space Network, which is a set of antennas distributed throughout the world. That’s what they use, for example, to collect data from planetary missions, from Jupiter and Saturn and so on. The data comes through the Deep Space Network, which is controlled by JPL (the Jet Propulsion Laboratory); they collect the data as the satellite goes overhead, and they send that piece of data to Wallops Island. Then Wallops Island sends the data to us at Goddard, where it is all put together into one piece.
So back in 1983–’84, how long did it take to get the data from Wallops to — it was collected, you say, by the Deep Space Network and JPL, and then they sent it to Wallops. How long between the time that they collected it from the Deep Space Network did it take to get to Wallops?
Those things happened quite rapidly. I think those were all done electronically; certainly in the ’80s they were done electronically. I don’t think it was sent by tapes. So the data would come in to Goddard perhaps in three days. I’m not absolutely sure, but it didn’t take very long after the data was collected to come to Goddard. But at Goddard the data had to be processed. At that time, in ’84, we were about one to two years behind real time in actually converting the data into a science product like ozone products, because there were several different steps. The computers were slow, and it took quite a bit of time to do the processing.
So the data got collected — in October of 1983 the data gets sent down.
So I assume, say, by the end of October you’ve got the October data down on the ground. And you’re saying that it didn’t take that long for it to actually get to Goddard.
That is correct. Now, I’m not quite sure if it took a day or two days, but something of that order it would take.
Yes, because McPeters seemed to be of the impression that it took quite a bit longer than that.
Not to come to Goddard, but to get the data processed within Goddard took a long time.
So sometime in early November of ’83 you had the October ’83 data.
I believe that is correct. My memory is not very clear on that. It may be that it took more than a day or two, but it was not very long. I don’t believe it was very long.
Would there be anywhere they actually kept records of when the data arrived and stuff like that?
I’m sure there are some records. The person to talk to perhaps would be Al Fleig. He would know quite a bit of that, and he may give you references to people who might know more. Mike Farman is another person you can check with. But the person who would really know has retired; he’s working right now, I think, with Raytheon part time. He used to be in charge of this entire operation: Rob Shapiro. You can certainly talk to him. He’s an easy person to talk to. He probably would remember quite a bit of this information.
Is he still in the area?
He is still in the area. He works here for Raytheon, and if you want I can find his number, or you can get it from one of the Raytheon people here; look at their telephone directory. But Rob Shapiro is the person who was in charge of all of this effort, so he can give you details of how long it took the data to come from the different places.
But he wasn’t in charge of the ozone processing team?
No, he was in charge of the processing of the raw data into a form before the ozone processing team took over. So the transmission from the Deep Space Network to Wallops Island, from Wallops Island to Goddard, and some of the processing within Goddard was his responsibility.
So within a few days—
Let me make it more clear. There were two organizations. One is called NOPS, the Nimbus Operations Processing System, and the other the Ozone Processing Team. NOPS did all of the ugly front-end processing and the OPT did the science processing.
Stolarski had mentioned NOPS yesterday.
Shapiro was in charge of NOPS.
Whereabouts were they located on Goddard?
Most of them were in Building 3. That is the mission operations building; all the data came down to the computers in that building.
So the tapes actually arrived then and NOPS got their hands on it.
The data from October, 1983, I guess was a couple of hundred thousand data points a day?
Yes, these were all generally put onto 6250 bits-per-inch tapes. There used to be these big tapes; they were all put onto those. In fact, some of the older data may have been on 800 BPI, which meant a much higher volume of tapes. I think these 6250 BPI tapes held about two days of data. Something of that order.
So there’s going to be a number of tapes?
Yes, there were a fairly large number of tapes. In the thousands, I think, over the period of…
So NOPS gets these tapes, and then how long does it take them to actually — I mean how much of the details of the operation are you aware of and how long did it take…?
I’m not aware of a whole lot of their operations. I only know vaguely what they did, but I think Rob Shapiro, Mike Farman, or Al Fleig could give you more detail on that. Any of the three of them.
Do you know how long it took them from the time they actually got that to the time —
A couple of months. Something within like three to six months. I think it varied depending upon how busy they were.
How fast could they have gotten it done? I mean if they dropped everything how fast could they have done it? Like a couple of weeks?
Well, there was no — well, I guess they had to process the data from many different sensors on the satellite. I do not know. If they were given priority they probably could have done it in a week, I suppose.
So two or three months then you’d get that data. So did you recall actually when the ozone processing team got the data for October of 1983?
I do not specifically recall that, but I would suspect sometime within three to six months after the data were taken.
It’s possible that we had it in by February, ’84.
So you think anywhere from three to six months?
One thing I wanted to ask about, then, was the TOMS data filters. The total ozone values. This is one thing, I’m sure you know, that gets completely confused in all the accounts: the range that was set between 180 DU and 650 DU.
I think it is useful to explain how you would typically process satellite data, which is true of almost any satellite data. Because you are taking such a large volume of data, and since all the data processing is an automatic system, there is no way human beings can individually look at the data. So you have to put in what we call checks to make sure that garbage data, or bad data, are not going through. So you have to decide when you process the data: there are checks built into the system at every step to make sure that the data are not beyond a certain limit, and if they start to exceed certain limits you put in some kind of flag. You throw the data away, or whatever you do there. So those flags were put in there, and at that time the best scientific knowledge that we had was that the total ozone values in the atmosphere did not go below 180 or above, I think, 600 or something like that. So we put in a check like that. Now, the important thing to realize, which a lot of people do not understand, is that one of the things you look at when you do the processing is how often those checks were being set. In other words, how often the ozone values went below 180 or above 600. You constantly monitor that, because that is one of the diagnostics. In fact, every day you process the data you produce a print-out which shows you very clearly how many times this happens. So any time you start flagging this data you know that something is happening there. So in this particular case (I still distinctly remember, because I was in charge of the processing system), the person who was actually running it called me on the day the data were processed through the system, and he told me that a lot of data had been thrown away because of this check of 180 Dobson units.
One of the problems that we were facing at that time was that the TOMS data were quite large in terms of the volume of data produced, and it really overwhelmed the computers that we had at the time to display it. We used to display the data on printer output rather than on a computer screen. Well, at that time there were some new systems being put together to do image graphics. Actually, Raytheon had a system that they had bought just recently, and the data were put on it, and we were able to see right away that the data were being thrown away primarily in the area over the Antarctic continent.
So just showing up as —
Just showing up as a black blob there, where the data were being thrown away. Another point I wanted to make is that the data were really not thrown away. What is done is they are flagged; you basically identify that the data are not good. You don’t throw the data away. So you can also plot the data if you want to. Typically you don’t do that, but if you have to, you can always go back and say, I want to show this data. So we plotted the data and the values were extremely low. In some cases they went to 150 Dobson units or so. So we knew right away that something strange was happening in 1984 that had not happened in the previous years. But it did not take more than I suppose a week or so to find that out.
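[Editor's note: the range check and flag count described above can be sketched in a few lines. This is a reconstruction of the general technique, not the actual OPT code; the limits are the ones recalled in the interview, and the data values are hypothetical.]

```python
import numpy as np

OZONE_MIN_DU = 180.0   # lower limit recalled in the interview
OZONE_MAX_DU = 650.0   # upper limit recalled in the interview

def range_check(total_ozone_du):
    """Flag (never delete) values outside the plausible window;
    return the flag array and the count, the daily diagnostic."""
    flags = (total_ozone_du < OZONE_MIN_DU) | (total_ozone_du > OZONE_MAX_DU)
    return flags, int(flags.sum())

# A normal day: values clustered around 300 DU, nothing flagged.
normal_day = np.full(1000, 300.0)
_, n_normal = range_check(normal_day)

# An October-1983-like day: a block of very low values over one region.
odd_day = normal_day.copy()
odd_day[:100] = 150.0            # below the 180 DU floor
flags, n_odd = range_check(odd_day)

print(n_normal, n_odd)           # 0 flags vs. 100 flags
```

Because the values are only flagged, not deleted, they can still be plotted afterwards, which is exactly what made the low Antarctic values visible.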
There are questions about the timing there, but I want to get back to who exactly was responsible for setting up this check…?
I was in charge of that.
So you were the one who specifically set it between 180 and 650.
Well, but with the approval of the ozone processing team. Al Fleig was in charge of the ozone processing team, but I was the manager of the contractor team that was really doing the processing. The typical way that would be done is that you would come to the ozone processing team, where there were people from Goddard, and sometimes Mateer and Dave would join. There was also a Nimbus experiment team, which included many people from outside as well. And this would be agreed upon by the members of the team. But the day-to-day running was my responsibility. Overall setting the policy was the responsibility of the ozone processing team.
So basically you developed the algorithm and it has to be approved —
It has to be approved, be presented to them, and they say, “That’s fine. Go ahead and do it this way,” and then we go and do that.
So the algorithm basically just went through and checked the number of flags?
Yes, it produces a printout that tells you how many times it was set during the processing.
And so when you were studying this, I assume you had some sort of idea — well, let me not assume. Did you have knowledge of the fact that there are transmission and coding errors and stuff like that, so you would always have some bad data in there?
Yes, occasionally you will get bad data. Sometimes, for example, during the transmission from the satellite to the ground you will have some bits flipped, so you will sometimes get some totally absurd values, like two thousand Dobson units or minus fifty Dobson units. It would occasionally happen, maybe once out of a thousand times or something. But generally they would be random, so when you make a map or something you’ll find that you’ll have bad bits, if you will, in some places randomly. In this particular case they were all together at one location. So that was quite obvious once we made the map.
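[Editor's note: the random-versus-clustered distinction can be made quantitative. This is a sketch of mine, not the original diagnostics: the coordinates are synthetic, and the crude clustering score (fraction of flagged points with another flagged point nearby) is a hypothetical stand-in for simply looking at the map.]

```python
import numpy as np

def clustering_score(lat, lon, max_deg=5.0):
    """Fraction of flagged points with another flagged point within max_deg
    (crude Euclidean distance in degrees; fine for illustration)."""
    pts = np.column_stack([lat, lon])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore each point's distance to itself
    return float((d.min(axis=1) < max_deg).mean())

rng = np.random.default_rng(1)

# Random bit-flip errors: flagged points scattered anywhere on the globe.
rand_lat = rng.uniform(-90.0, 90.0, 50)
rand_lon = rng.uniform(-180.0, 180.0, 50)

# An Antarctic-like anomaly: flagged points packed into one region.
clus_lat = rng.uniform(-90.0, -70.0, 50)
clus_lon = rng.uniform(-30.0, 30.0, 50)

print(clustering_score(rand_lat, rand_lon))   # low
print(clustering_score(clus_lat, clus_lon))   # high
```

Random telemetry errors give a low score; a geophysical feature like the October anomaly gives a high one, which is what jumped out of the map.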
So when you set this did you have any idea of what number of flags could be considered something that needed to be looked at?
Yes. Generally speaking, what you do in this case is that you create a history and you see what is unusual based on what had happened in the past. There were a lot of other flags in there as well, which would get set more often, but this particular flag was very rare. We almost never saw it. So it was very clear that something strange was happening here.
So basically, when you were setting this, you had some knowledge of previous satellite data transmissions?
That’s right. We had been processing data since 1970. The first instrument was launched in ’70, and we had processed that, and we had two instruments flying, SBUV and TOMS; they both started in 1978, so we had a fairly recent record of what had happened in the past.
Did the algorithms that they were using to process the data from those have a similar data range set on them?
Yes, all of them had — we always had that set.
So then you knew from prior experience that it would only be a few.
Yes. One of the things we do any time we change the algorithm is go back and reprocess every data point back to 1970. That has always been our tradition.
Basically, you knew then from experience that you could only expect a couple per every thousand data points.
That is exactly right.
Do you recall exactly when — was it David Lee who —
Yes, David Lee was the person who was in charge of the processing, the people who do the computer processing at Goddard. I was at that time off site, and he called me on the phone basically to tell me that there was something strange happening.
Do you recall what day it was exactly?
I can find out. I have some written notes, but I don’t have it right now with me. But I can find out for you later on.
So the July 31, 1984, ozone processing team meeting. That’s one of the things I got from McPeters.
Most likely it happened — generally we met every week, so I would suspect that meeting was held within the week or so after this thing was found, when we reported it in the meeting.
So it happened within a week.
Yes, I could guess, but I can check that more exactly.
So basically, when Lee called you he told you that there’s a huge number of data flags all appearing in this one spot.
So what was your first reaction when he told you this?
My first reaction was that this was an error in the — see, to process the data from the satellite — well, let me go back a little bit. The first reaction was to find out whether these problems were randomly distributed over the entire globe or were concentrated some place, because that tells you whether it was an instrumental problem or something more serious.
So when he first called you though, he just said there was a large number of flags?
Right. We didn’t know at that time whether they were distributed over the entire globe. I think pretty soon he looked at that and told me that it was all happening in the Southern Hemisphere, in the polar regions, because that was an easy thing for him to check. Now, when he told me that it was all in the polar regions, my first inclination was that this was caused by what we call in the satellite business a wrong orbit/attitude parameter, which means that as the satellite moves, somebody has to give us precisely where the satellite is located at any given time and what the sun angle is at that position. We had in the past a history of problems with that, with NOPS giving us the wrong locations or the wrong solar zenith angle. If you have the wrong angle of the sun in your processing then you can very easily get very low ozone values, because if you thought that the sun was at a certain angle and it was not, then you can get a lot of very low values. And that will give you low values just in a particular location, where all of the values are very low. So my feeling, because of the history of NOPS, was that NOPS had given us the wrong location, the wrong solar zenith angles, if you will.
Where did NOPS get that information?
They calculated it internally, but they had had some software problems in the past. I thought that maybe somehow they had some kind of software error that was producing the wrong solar zenith angle.
So there’s like the telemetry coming down from the satellite that gets used to calculate that?
No, orbit/attitude is done on the ground. You actually know the position of the spacecraft from radar. There is a facility that sends the information to NASA, and then they can calculate the position of the satellites.
They get the radar ephemeris. NOPS does, and then they use that to calculate?
They calculate that, and based on that you calculate the solar zenith angle using astronomical parameters. So my reaction was that the main reason for it was that they must have just miscalculated the angles. So the thing we had to do in order to establish whether or not that was the case was to look at multiple days of data to see how these things were changing. That may have taken us a couple of weeks (I do not recall exactly how many days it took us), but we had to go and get the data displayed on a color machine. One of the key things we found was that this pattern that we were seeing was rotating. Basically it was not fixed, but was rotating in space with time; day to day it was changing, and that indicated that it could not be a solar zenith angle problem, because why would the pattern rotate in that particular way? It was something more geophysical rather than a problem with NOPS. So I think within only, I suppose, maybe a week or two we had ruled out that it was a NOPS problem.
When you did that you say you went and you actually displayed the data and looked at what was going on.
That’s correct. And we see from day to day how it changes. The pattern changes from day to day.
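[Editor's note: the day-to-day rotation check can be sketched as follows. This is my reconstruction of the reasoning, not the original analysis: if the flagged region were an orbit/attitude artifact it should sit at a fixed position in the processing geometry, whereas a geophysical feature drifts from day to day. The longitudes below are hypothetical.]

```python
import numpy as np

def daily_centroid_lon(flag_lons):
    """Circular-mean longitude (degrees) of one day's flagged points."""
    rad = np.radians(flag_lons)
    return float(np.degrees(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())))

# Hypothetical flagged-region longitudes for four successive days,
# drifting eastward by 20 degrees per day.
days = [np.array([0.0, 10.0, -10.0]) + 20.0 * k for k in range(4)]
centroids = [daily_centroid_lon(d) for d in days]
drift = np.diff(centroids)

print(centroids)   # 0, 20, 40, 60: the pattern moves
print(drift)       # steady non-zero drift: geophysical, not a geometry error
```

A fixed (zero-drift) pattern would have pointed back at the ephemeris; the observed rotation ruled that out.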
When did you actually first display it?
It must have been right after the OPT meeting.
The one on July 30th?
Yes, it must have been a month after that meeting.
That would be the first time that anyone actually physically saw the entire hole?
Physically saw the image, right.
So the other question is, too, you say that you thought that this initially was a problem with NOPS and their calculation of the solar zenith angle.
That is correct.
Had it actually happened before, where they had done this and you had gotten this many data flags?
There were a lot of problems with NOPS only in the very early stages, like in 1978 when the satellite got launched. All the angles were wrong at that time. I don’t recall any significant problems like that after that. Well, there was one other case where there was an error in the angles that they had given us; not the solar zenith angle, but another angle that was wrong. There had been occasional cases like that where some of the angles were found wrong, but I don’t think it happened in the last couple of years prior to this event. Most of it had happened in the early phases of the project.
Now, at the July 31st meeting when this was first mentioned it says here that Bhartia notes that the low values disappear as latitude decreases?
Right. It’s mostly in the Antarctic, and as you go to low latitudes and mid- latitudes they disappear.
And then it says, “He does not believe that the problem could be with ILTs.”
That stands for Image Location Tapes, which is the NOPS buzzword for the orbit and attitude information. ILT basically means the location of the image on the ground.
How did you actually check that to see that wasn’t a problem?
The way I checked that is this: if you have an error in the ILT, or the orbit/attitude, then not only will the ozone be wrong, but the reflectivity that you measure of the Antarctic continent will be wrong. They both should be wrong at the same time. Normally we see that the Antarctic continent is very bright; about ninety percent of the light is reflected off. If you have the wrong ephemeris, the wrong solar zenith angle, you also make that incorrect, so instead of ninety percent reflectivity you would measure a hundred and twenty percent, or sixty percent. So I checked that, and that was not the case. The reflectivity was pretty normal.
So that indicated that wasn’t a problem. You say it probably wasn’t a calibration problem?
Right, because of the same reason: the reflectivities were pretty normal.
So the same reasoning applies to both. Or a problem with the software. I assume that’s referring back to the NOPS problem.
That is correct.
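[Editor's note: a back-of-envelope version of the reflectivity consistency check. This assumes a simplified Lambertian model, my assumption, not the OPT radiative transfer code: the inferred reflectivity scales as cos(true SZA)/cos(assumed SZA), so a wrong solar zenith angle would drag the ~90% Antarctic reflectivity to an implausible value. The angles are illustrative.]

```python
import math

def inferred_reflectivity(true_reflectivity, true_sza_deg, assumed_sza_deg):
    """Reflectivity retrieved when the wrong solar zenith angle is assumed
    (Lambertian surface: radiance ~ R * cos(SZA))."""
    return (true_reflectivity
            * math.cos(math.radians(true_sza_deg))
            / math.cos(math.radians(assumed_sza_deg)))

# Correct ephemeris: 90% reflectivity comes back as 90%.
ok = inferred_reflectivity(0.90, 70.0, 70.0)

# An ephemeris error of ~10 degrees at a high sun angle would drag the
# inferred reflectivity far from 90% -- which was NOT seen in the data.
bad = inferred_reflectivity(0.90, 70.0, 80.0)

print(round(ok, 2), round(bad, 2))
```

Because the measured Antarctic reflectivity stayed near its normal bright value, a solar-zenith-angle or ephemeris error could be ruled out.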
And then there was this issue here. The possibility there may be a forty millibar cloud — I mean a cloud at forty millibars in October.
Right. I don’t remember that that was…
So the forty millibar. You say you don’t recall actually —
I don’t recall saying that, but it’s very interesting, because subsequently, of course, it was found that the clouds had [inaudible]. What I was speculating — see, what you typically do when something like that happens is brainstorming, where you think of all the possible things that could be wrong that could produce something like that. I may have said that if you had a cloud at very high altitude, that cloud would basically shield the ozone below it, so you’re not going to see that ozone. You’re only going to see the ozone above the cloud and not that below the cloud. So if for some reason there was a cloud present at forty millibars, you could get this low value.
But that cloud would actually cover the whole region of where —
It would have to cover the entire region of that Antarctic area.
How reasonable, at that point, given what people knew about the Antarctic, would it have been to expect a cloud like that?
I think it was not reasonable to expect that there would be a cloud that prevalent [inaudible] to really produce this kind of an error. It was purely speculation, I think, on my part.
It’s just something to check just to be sure.
Were there any other checks that had been performed before this July 31st meeting?
I don’t really believe so. I think the reflectivity check was the main check we actually did: looking at whether the reflectivity was too low or too high.
Then at the August 7th meeting, a week later, it was reported that, “Regarding the southern ozone discrepancy with Dobson in October ’83, Bhartia’s spoken with the Langley group operating the SAM II instrument and they reported no forty millibar cloud.” So at this meeting you sort of brought that up, and then you checked it and discovered that wasn’t the case. But also, on the discrepancy with the Dobson in October of ’83, when David Lee actually mentioned this, when did you actually go back and check the Dobson station to see what was going on?
It must have been after the July meeting, because I don’t think that we did that before then. Somebody must have contacted the Dobson people at the south polar Dobson station. I don’t know how that was actually done, but somebody from OPT must have taken the responsibility to do so. I don’t know if the minutes say that or not.
It actually says in the October 1st meeting, which was a couple of months later, “Instrumental problems, clouds had been ruled out. Though TOMS also measures low ozones. Amundsen-Scott station does not [inaudible].”
Somebody must have found out from Amundsen-Scott station what they were measuring at the time. Most likely it was a phone conversation. Because all of the Dobson data is sent regularly, or was being sent regularly, to Toronto. There’s a place in Toronto where they are archived. But there was typically a one- to two-year delay before they would receive the data. So in this particular case, I certainly did not, but somebody must have called the Noel [?] management, which runs the different stations, and found out what the values were. Most likely before — no, that is not the case. I have to take this back. I think that by the time the red books had come out — because we are still talking about older data. It is quite possible that Toronto had already received the data. I do not remember how we found out. But clearly we were given numbers that were pretty normal, and they did not show the low values that were seen by TOMS.
Actually, in the red book that has the ’83 data does it actually indicate that the — because I know that data was bad.
The red book actually published the bad data.
They published it and they didn’t indicate that it was bad?
That is correct. They did publish it and later retracted it, but it was actually officially published in the red book. Now, whether we got that from the red book or we got that by phone I really can’t remember it now.
So you don’t know if some called Tormier or someone?
Yes, I do not remember that at all. If it doesn’t say that, I don’t…
It doesn’t say that, so I may have to check with them. Maybe Fleig or someone. See if they have any recollection. So then it just says you ruled out the latitude possibility. Then it was “study continuous” — it says “options.” It says, “Rule out latitude possibility.” Now, was that referring to the stuff you had already —
Yes, there were two possibilities. One is that the solar zenith angle is wrong from the NOBs. The other one is that the satellite is not pointing downwards, but by some problem has shifted to some angle.
How did you go about checking that? I assume you checked that after?
The satellite has built-in things that tell you where it is pointing, so you can go and check those kinds of things. The other way we can check is by looking at the Antarctic continent, to see whether there’s something wrong. In fact, I remember doing that. As you go off the continent, the brightness of the albedo, or reflectivity, goes down rapidly from the continent to the ocean, so if there was any kind of an error in there, you can tell by looking at the edge of the continent.
So the latitude problem would have shown…
Yes, that’s correct.
Then what is the “study continuous scan for anomalies”?
Yes. SBUV, which is another instrument, the companion instrument on the same satellite, has a mode where you can look at a lot more wavelengths. TOMS has only six, but this instrument is designed to measure from two hundred forty nanometers to four hundred nanometers, which is a large wavelength range. It will give you a lot more information, but it was only used occasionally, once a month or something like that. But it potentially can provide you more information to understand what might be happening from an algorithm point of view. At this point we were still not thinking it was a geophysical effect. We were still trying to rule out every possible algorithm, instrument calibration, or satellite problem.
This is the October meeting. But you had said before, though, that when you actually did the visualization for it that it was actually rotating?
That is correct. I do not know if it mentions any of these things, but we did see a motion of this thing. The shape of the ozone hole at the time was like a teardrop, and it would rotate from day to day, indicating some kind of a wind-driven phenomenon rather than an algorithm [???], because an algorithm cannot produce that.
So when you actually saw that, what was your initial reaction when you saw it rotating?
We thought that it was something caused by winds as a geophysical effect. It is a meteorological effect rather than an effect of the satellite or the algorithm.
So when you were visualizing that, when was that? After the July meeting was it?
It definitely was after the July meeting, but it was probably around August or September, something like that.
So by the October meeting you thought it was a geophysical phenomenon and you were just trying to eliminate other things to make sure?
That is correct.
So then there was the issue of getting data from the Syowa station?
Right. I don’t believe we ever got that, though.
That was just to provide a check of the algorithms?
Because at this point I think we were getting more and more certain that this was not a problem with the satellite, and that it was something real happening geophysically. But we still had this anomaly with Amundsen-Scott, and the anomaly was not that Amundsen-Scott was reporting unusual values. It was that the satellite was reporting unusual values. And that made us very nervous that there was something strange going on. So we wanted another station as a check, and I don’t believe that we got data from any other station, as far as I can remember. Maybe we did get it from Syowa ultimately.
Then there was a meeting on October 23rd, about three weeks after that meeting, and then it says here that, “Krueger showed that TOMS detected low ozone values over the Antarctic in October of 1983, and that the low ozone values were localized within one or two distinct areas. He reported the areas of low ozone in the region have occurred on previous occasions as well.”
I guess the issue here is exactly how did Krueger show that?
I think he was looking at these images we were making at this point, and he was very much involved in looking at how these things were changing. In fact, the movement of the ozone hole with the winds, as I mentioned earlier, was something that he looked at very carefully. Since he was the principal investigator of this instrument (Arlin Krueger was the PI of TOMS), he got involved. At that time he was working very heavily on the Mars mission to measure ozone, but I think he was able to find some time to work on it with David Lee, who was at this time making maps and looking at the data. So he was probably reporting based on that observation.
So he says, “The low ozone values were localized within one or two distinct areas.” Is that pretty much—
It was the Antarctic, basically. It did not mean just one or two pixels; it meant an area.
Or not within specific areas of Antarctic.
No, no. He meant specific areas in the world rather than…
So what is this here, the report that low ozone in the region had occurred on previous occasions?
The ozone in the Antarctic always becomes very low in the Southern Hemisphere spring. It was seen in the Nimbus 4 BUV data, which is from 1970. So we always knew that the Antarctic ozone in the spring is very different from the Arctic ozone in spring. If you look at the springtime in the Arctic, in the Northern Hemisphere, the ozone does not become very low, but the Antarctic always becomes very low in the springtime. It happened in 1970, but never below 180. But it will get down to very, very low values. It was very unusual. So what he was saying is that these low values had occurred before, but had never become as low as this. But there was always a tendency for the values to become low in September and October.
So he’s just referring to the Antarctic’s springtime low. Just not saying that it got this low.
And he always said that was an unusual thing, which we did not understand: why did the ozone become so low in the Antarctic in the spring, but not in the spring in the Northern Hemisphere, in the Arctic? And that, I think, at that time was not well understood, or was not understood at all.
They did know about the polar vortex and stuff like that?
Right. It was understood it was something related to the polar vortex, but the mechanism was not understood. In fact, Dobson may have identified a long time ago that the Antarctic spring was very different from the Arctic spring.
Actually, I was talking to Stolarski, who said it was MacDonald from the British Antarctic Survey, and actually, Dobson did a review of their work.
It was seen a long time before.
Are there any other meetings? These are the minutes I had gotten from McPeters. Are there any other important ones?
I don’t think so. I think you probably have almost everything. I can look it up in my own notes, but I doubt that I will find anything really significant. This basically summarizes everything that we did at the time. The only other thing I can mention is that in the next year — the timing seems to be wrong here. When this problem was identified, that was in 1982 data, not in 1983 data.
Oh, was it 1982?
1982 spring data. So the Amundsen-Scott problem was in 1982 spring data. That’s the one, as I recall. So now I recall it correctly: the 1982 data had been published already in the red book, and we were comparing that 1982 red book data with Amundsen-Scott. We were roughly two years behind in terms of processing at this point.
So then the pictures you were doing of the hole —
Those were 1982. We were still processing the 1982 data.
Okay, so the Amundsen-Scott?
The problem was in 1982. I’m quite sure it was not 1983 because 1983 values actually were very low. And that’s exactly what happened.
Let me just check something, because I knew there was something in this issue of Geophysical Research Letters where Tormier actually had mentioned the problem with the —
With the Dobson data.
With the Dobson data. I just wanted to check and make sure if it was ’82. Because I had been talking to McPeters and he—
It is not 1983. I may be wrong by one year. I’m not quite sure.
It’s in here somewhere. I guess the other question is when did you feel confident that you had eliminated all the possibilities, and that this was really—
I think around October of ’84 we were pretty confident it was not a — because by that time we may have received new data from either Syowa or Amundsen-Scott, and those were actually agreeing better with the TOMS.
In October, right. So we were, at that time, independently pretty confident that what we had was not an error, because we had eliminated every other possibility we could think of. So sometime around October. I would say in that October–November time frame we were quite confident that this was a real phenomenon.
What were your thoughts on actually publishing this at that point?
The problem that we had was that we had no understanding of why this was happening. We knew that there was something happening, but we did not have any understanding of what could be causing something like this. That’s one problem. The other thing I forgot to mention is that this also happened with SBUV, which was also flying at the same time. SBUV was seeing similar low values, and we looked at the profile data from SBUV, and the profile data were showing similar anomalies. So this was confirmed by both instruments. So that was another thing that convinced us that this was not an instrumental problem, because both instruments were showing the same thing.
But you had the profile data from the SBUV?
So you actually knew what level in the stratosphere this was actually — the depletion —
Roughly speaking, but SBUV doesn’t have very high resolution. It roughly told us it was in the lower stratosphere, but nothing more than that. The thing that I personally found very strange was why this was happening for only a month and a half or so: September and October only. Because by December the ozone values would be back up to pretty normal values, and there was almost no change. So if you look at the December, January, February data, or even November actually, those values are pretty normal and do not show any long-term change. So my feeling at that point was that if this was caused by chemistry, by chlorine chemistry, then it should affect all the months, not just two months of the year. And I at that time did not see how it could affect just two months and not the remaining months.
Before Farman’s [?] paper came out, did you have any idea that this might be from chlorine?
We were just guessing at the possibility, because that was one of the hot theories. I could not personally understand how it could be just two months of the year, because I would have thought that the chemistry would take place all the time and not just for two months. So that was, in my view, strong evidence against it being a chemical mechanism, and I thought it more a meteorological phenomenon. Something to do with the polar vortex. As we mentioned earlier, the polar vortex — it doesn’t always get very low. And I somehow thought that maybe the polar vortex was getting stronger or something that was making it lower still. But I could not understand how this could be chemistry, simply because of the fact that it only happened in two months. So I think at that point we decided to submit a paper to the conference, because we did not have a mechanism. All we knew was that there were some low values there, and without having a mechanism you really can’t publish it in a scientific journal. So we decided to let it be known to the community that there was a problem like that happening, but we had no idea why it was happening. So we were trying to take a very low-key approach rather than taking an approach where we say that something really fundamental was happening in the Antarctic.
And you submitted the abstract March 28th of ’85.
Which is the same day Farman’s paper got accepted.
I had no idea that was the case. The first time I found out about Farman’s paper was — No, actually, the paper had gotten published before I went to Czechoslovakia. It had already come out.
So what was your reaction when you —
I was really very pleased, because I was still at this point very nervous that, even though we’d eliminated all possibilities, this may or may not be a true geophysical effect. I think that Joe Farman was just as pleased to know that satellites had confirmed his results too. So I think we were both extremely pleased that we were able to confirm each other.
Going back to the Tormier thing. He actually mentions here in his Geophysical Research Letters paper where he says, “Data previously reported for October to December, 1983 identified as erroneous and uncorrectable.”
Then I probably was mistaken. Maybe 1983, and then the 1984 data came along, which you may have. So I suspect what happened is the 1983 data was actually published in the red book. The 1984 data was given to us, maybe on the phone or something like that, which was normal. That was agreeing with TOMS, but it probably didn’t come in until late ’84. December or something like that. At that point we decided to put it in the conference.
So the data you were processing then in July of 1983?
Was ’83 then. Somehow I remember differently, but that may be.
Back to the Farman thing. What was your reaction when you actually read this? Did you think that this is really something serious?
Well, my impression again was that his explanation did not explain why it would only occur in two months and not in the rest of the time period. So I was very skeptical that the explanation was correct. Because he did not, as far as I remember, explain why it only took place in two months of the year. I knew very well that it did not show any trend at all in the rest of the year. So I was skeptical about the theory that was presented by Joe Farman, but I was happy that he had seen the same thing.
What day did you actually present your paper in Prague in Czechoslovakia?
This is in August.
I know I’ve got the dates of the conference, but I just don’t…
Well, I don’t know the exact date, but somewhere in August prior to the 17th. I would not know anything more precisely than that. Somewhere in the middle of the conference.
How long was the presentation you gave?
I suspect it was only about twenty minutes. It was actually very well attended, and a lot of people were very excited, because most of them had read Farman’s paper and they were wondering whether or not the satellites had confirmed it. They knew the satellites had been operating. So the paper generated a lot of excitement, actually.
This was the first time that the full-color image had been presented?
Right, and the image is basically reproduced here. This was the image that —
Yes, I’ve got a copy of it here. You have no idea where that image disappeared to?
I may actually have the image. I can look for that. It’s quite possible I have that information. I most likely have this image at home, but I’ll have to look for it.
That’s one of the things I’m actually looking for, for our archives: copies of as many documents as I can get. You wouldn’t happen to have a copy of the presentation you gave at the Prague conference? Or a draft or something? I mean, I have a copy of the abstract.
I don’t really believe that I have a copy. I think what happened was that I gave all of my charts to Lynn Callous [?], and he produced some of the key charts. This plate was produced by him and there was another figure here that was produced by him. I’ll look again. I’m not really sure whether I have a copy of that. This figure, for example, came from SBUV.
Do you have a written version or notes or something from the talk?
I don’t have any more information than what is probably in here, but I’ll take another look. I may have these figures themselves, but this was an oral presentation so typically you do not have a whole lot of text material. They do not require any written paper to be submitted. All I would have even if I have it would be just these images. That’s it.
Can you reconstruct the sequence of what you said? I mean the substance of what you said?
Yes, I think basically I started by saying that there was the recent paper by Joe Farman, which reported low ozone values, and that we were going to confirm this, and I basically showed the SBUV data showing that it had gone down, and the shape of the ozone hole that appears in the TOMS maps, and some idea of where in the atmosphere it might be happening. Because at that point nobody knew at what height it was taking place, and SBUV provided an indication of the height at which it was taking place. And another thing it shows very clearly is that the effect was —
These are in figure eleven?
Right. It shows here that most of the effect was in the lower part of the atmosphere and not in the upper part. This was from SBUV: ’79, ’81, ’83, ’84. So this was a clear indication that this was — And the only other point I made was that this is only in the September–October time frame and not in the other months. So that’s basically all I would have said.
Was it Callous? Is that how you pronounce it?
Lynn Callous, yeah.
You think they might have this material lying around that you had given them?
It’s possible. It is a long time and he may or may not have kept it, but you can check with him. I’ll check what I have. I have not checked that particular aspect of it, but I may be able to find something.
Let’s see if there are any other questions. I know I’m going to have more questions about the specifics of the algorithms, but is there anything you can tell me right now about where I might go to find more materials on the algorithms? I mean, I have read some of the technical reports for TOMS and SBUV.
Would you like to know more about the algorithm itself? How it was derived?
The best source for that is the user’s guide that we published for TOMS and others that discuss the algorithm. You can certainly take a look at that. And the paper that I wrote with my colleague, Ken Klenk [?]; it’s a Klenk, Bhartia, et al. paper, which actually describes the algorithm that we were using at the time.
Where was that published?
It was published in JGL, I’m pretty sure.
Is it Klenk, Bhartia?
Klenk, Bhartia, and there were other people there: McPeters and Fleig.
What year did that—
I can give you the entire reference I would have on my computer. I can send it to you by E-mail.
I guess that’s pretty much it for right now then.
Thanks, Dr. Bhartia.