This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Adrian Tuck by Keynyn Brysse on 2009 June 24, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/33573
For multiple citations, "AIP" is the preferred abbreviation for the location.
Topics discussed in this interview by Adrian Tuck include: ozone depletion; the Montreal Protocol; National Aeronautics and Space Administration (NASA); Intergovernmental Panel on Climate Change (IPCC); molecular dynamics perspective; National Oceanic and Atmospheric Administration's (NOAA) Aeronomy Laboratory; Joseph Farman; Jim Lovelock; Sherry Rowland; chlorofluorocarbons (CFCs); Mario Molina; Bob Watson; Brian Thrush; World Meteorological Organization (WMO); stratosphere; Antarctica; polar regions; National Center for Atmospheric Research (NCAR); Royal Meteorological Society; science vs. religion; Nimbus-7 satellite; Arlin Krueger; total ozone mapping spectrometer (TOMS); the Ozone Trends Panel; solar backscatter ultraviolet instrument (SBUV); Airborne Antarctic Ozone Experiment (AAOE); Neil Harris; Dan Albritton; Conway Morris.
This is Keynyn Brysse. I’ll be interviewing Adrian Tuck today. It is Wednesday June 24, 2009.
Which of the written histories of this issue have you read? There are several, and some of them are of pretty dubious quality, but there are two in particular that I think were reasonably carefully done. One is by Ted Parson.
Read that one, yes.
And the other one is by an Australian history student called Maureen Christie.
Yes, I’ve read her book too.
I actually think Maureen Christie did a very good job, and it also has the advantage that it’s written in plain English. Ted Parson’s is kind of constipated prose in my opinion, and I actually don’t agree with one of his central theses, but hey, that’s…
Which one is that?
The way I read it, he didn’t think that the scientific evidence was the main thing that produced the Montreal Protocol and eventual restrictions. He thought it was simply a possibility that there were replacements.
Oh, the technological replacements? [Yes]. The way I read it, he did think that science was important, but only as mediated by the scientific assessments. That, I don’t know, it depends on how you look at it. There wouldn’t be scientific assessments without the science, right?
Yeah, I mean I don’t know when you talked to Bob Watson, but he said in public that — I used to be in the British Met Office, and I was chairman of the UK stratospheric ozone research group before I came over here in 1986, so I’ve seen it from both sides, and Watson had been in this country much longer. He was getting very frustrated at the differences in approach that the different countries had; in particular there was a very discernible difference between the approach in the United States and the approach in Europe. I said to him, “We’re going to have this international head-butting until we have international assessments.” That’s how the WMO assessments started. There’s a downside to assessments. They were obviously essential, and they’re going to be essential in CO2, which is a far more serious problem of course. There is a disadvantage in that the assessments have tended to turn the research community from a fairly heterogeneous, actively disputatious community into something resembling a gigantic machine for producing assessments. Once you’ve got an assessment, and once you’ve delivered an orthodoxy to the politicians, the people who deliver that orthodoxy tend to be embarrassed by normal scientific procedure — which is to say, by someone who thinks they’ve discovered something that’s inconsistent with it. Science is provisional, and it evolves. There have been very detectable instances of that; it’s going to be writ large, with the stakes 3 or 4 orders of magnitude larger, with climate and global change.
Can you give me some instances of that happening? I can think of one where the 1976 U.S. National Academy of Science’s report was delayed for six months because of chlorine nitrate. Is it stuff like that that you’re talking about?
No. My PhD supervisor was the honorary Brit on that report. I had it to review for the quarterly journal, and I thought that it was far too optimistic about the ability to quantify the uncertainty. I can take an example. If you take an ER-2 flight into the vortex, and you do a statistical analysis of the character of the turbulent fluctuations of the chemicals, the first thing you do is plot out the probability distributions: they’re all non-Gaussian. The only time you see a random Gaussian distribution, you’re looking at instrument noise; you’re not looking at atmospheric [???]. The long tails on the skewed distributions are a sign of positive feedback and long-range correlation. I don’t know if I’m talking to a scientist?
I have a BSc in paleontology, nothing to do with this, and I did my PhD in History of Science. I have been teaching myself atmospheric chemistry and reading all the assessments, all the papers that I can find and all the books, but I certainly don’t have all the knowledge that a real chemist working on this stuff does. I know what you mean about the non-Gaussian distributions and you can’t — a lot of the talk about uncertainty and probabilities, the way I understand it, assumes a Gaussian distribution with even tails, and that is not the case.
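[Editorial note: the contrast drawn in this exchange — symmetric Gaussian instrument noise versus skewed, long-tailed fluctuations — can be sketched with synthetic data. The lognormal below is purely an illustrative stand-in, not a claim about the actual tracer statistics.]

```python
# Illustrative sketch with synthetic data: a Gaussian signal (like instrument
# noise) has near-zero skewness, while a long-tailed skewed signal of the kind
# described for tracer fluctuations has strongly positive skewness.
import random
import statistics

random.seed(0)

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

gaussian = [random.gauss(0.0, 1.0) for _ in range(100_000)]
skewed = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]  # long right tail

print(f"Gaussian skewness:  {skewness(gaussian):+.2f}")  # close to zero
print(f"lognormal skewness: {skewness(skewed):+.2f}")    # strongly positive
```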
They emphatically are not. Now you asked for an example. During the run-up in the early 1990s, there was a move to reexamine the possibility of supersonic commercial flight, despite the obvious failure of the Concorde in an economic sense, not a technical sense. The aviation industry became interested in building a big supersonic plane again, so there was a thing called AEAP. It was the NASA — what was it, atmospheric effects of high altitude aviation — it was something like that, I forget the exact name, but it’s all very well documented. There was a series of NASA reports through the 1990s, and they actually funded some of the continuing polar ozone missions. There was very significant funding from that program of the Eshel [?] Mesa thing; it was based in Christchurch, New Zealand in 1994. I can remember a public discussion there where some people — in particular one of the NASA modelists, a guy called Bill Gross who worked at NASA aviation — stood up and said, “I think you’re being far too optimistic about how far we can quantify the uncertainties. If you compare different models, what you get is a take on the random errors of your models; you don’t get the systematic error, and it’s the systematic error that’s important.” This was said in front of the NASA aviation people as well as the research people. You could say this in front of the research people and they couldn’t possibly disagree with it, but to the aviation people it came as news. They’re basically engineers. They didn’t like hearing that the uncertainties couldn’t be quantified, despite the fact that they had spent millions of bucks trying to quantify them. And that’s an example. That gets writ large. You get an orthodoxy built up that’s actually, I think, antithetical to the whole way science works. When things get into the political arena, you can see things even more clearly.
Scientists are trained to survey all the evidence and then come up with a hypothesis that is maximally simple and consistent with all the evidence. If your previous working hypothesis, or even theory, can’t accommodate all the facts, then you have to abandon it. Science is provisional. Most politicians are trained as lawyers, and in any case they operate the same way. They look at the evidence and they pick the bits that support a preconceived line of argument. That is a really fundamental clash. Scientists are trained to ask questions to which they don’t know the answers. Lawyers are trained to ask questions to which they already know the answers. It’s really a fundamental difference. Despite all the popular books on science and despite all the scientific documentaries on TV — which are very good and very watchable; I love watching them, whether it’s cosmology, atomic physics, or evolution — we’re very, very good at producing simplified, beautiful images that convey the products of science and of scientific processes. What we don’t convey — where we’ve totally failed — is the process itself, to politicians, to scientific journalists, to the intermediate communicators who’ve got themselves inserted between the scientists and the politicians. It was 30-odd years ago when I started doing this. It was much simpler. The working scientists talked directly to the politicians. Now there’s a whole professional corps of academics and communicators and civil servants. There are civil servants in Washington who have a wall. Instead of the usual “me-wall” — politicians have photographs of them with presidents and senators — they have little certificates of the molecules that have been banned for which they were responsible. That kind of thing makes everything a lot more cumbersome. It makes the communication a good deal less precise and more messy, but that’s inevitable.
Politics is an emergent property of populations of three or more, and we’re dealing with a population of 300 million, so it’s bound to be messy. There is a failure to appreciate the fact that you can’t treat scientific arguments, scientific evidence, the way you treat legal evidence in a courtroom. It simply isn’t that disposable. The reason it’s not is because of the scientific process, and we haven’t explained how the process leads to the products. That process means that the arguments can’t be treated in the way that legal evidence is treated. But, you know, given the complexity, given the political process, given the sheer amount of noise and comment in the modern media, twisting of the facts and slanting of it is inevitable and there’s nothing that can be done about it; but it will lead to mistaken views.
That’s an interesting take on the assessments. I’ve heard people say that that’s a problem with IPCC, that it has become so rigid in terms of the methodology of the production that they’re worried it’s just about building consensus and not really truly exploring the science and truly expressing the uncertainty.
Oh yeah, and I can give you an example. If you look at the current IPCC, look at Chapter 8, which is the assessment of the models. It goes on and on. That’s the 2007 one, number four. If you go back to the third one from 2001 and look at Chapter 8 — they’ve done a good thing in the sense that the chapters have continuity, so you can compare the progress fairly directly. Anyway, if you look at Chapter 8 in 2001 there are two or three things there, diagrams and comments, which are really very, very telling about modeling inadequacies. One of the myths is that if you take all the atmospheric observations you’ve got of temperature and do the simple-minded thing — instead of just dealing with the global mean surface temperature, which is an almost useless concept actually, you say, “Well, we’re going to do better. We’ll extend it up through the depth of the neutral atmosphere that we’re concerned with,” which is troposphere and stratosphere, so the surface to 50 kilometers. You’ve got all sorts of temperature observations. Of course there are differences; they have different error characteristics, taken by different means — radiosondes up to 20 or 30 kilometers and various other things. But if you do the simple-minded thing — you say, right, we’ll bin [?] all the available observations and make a global mean temperature profile instead of a point at the surface. And then we’ll do the same thing with the models: we’ll take all the models, all the data, and make a global mean temperature profile. There is not one point between 0 and 50 kilometers, in the models, that’s on the warm side. The models are too cold; they have a systematic cold bias from the surface to the top of the atmosphere, and that suggests that there is something fundamentally wrong, because energy doesn’t leave the atmosphere except by the absorption and emission of photons by molecules. Now there are two possibilities.
That disappeared completely in the 2007 version.
Not the fact that there’s this bias, but the reporting of it.
Well, right. The discussion of it has disappeared. Now there are two possibilities: either the modelists have solved it and regarded it as such a trivial success that they haven’t talked about it. Highly unlikely. Or as Charlie Brown would say, “No problem’s so big you can’t run away from it.” I think that’s what has happened.
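[Editorial note: the cold-bias discussion above can be caricatured with invented numbers — none of these values come from the IPCC chapters. Averaging an ensemble shrinks the random differences between models but cannot touch a bias they all share.]

```python
# Illustrative sketch with invented numbers, not IPCC data: an ensemble mean
# averages away each model's individual random error but keeps any systematic
# bias the models share -- e.g. a common cold bias against observations.
import random
import statistics

random.seed(1)

TRUTH = 250.0        # hypothetical "observed" global-mean temperature (K)
SHARED_BIAS = -2.0   # hypothetical cold bias common to every model (K)

# Each model = truth + shared systematic bias + its own random error.
models = [TRUTH + SHARED_BIAS + random.gauss(0.0, 1.5) for _ in range(20)]

ensemble_mean = statistics.fmean(models)
spread = statistics.stdev(models)

# The inter-model spread reflects only the random error; the ensemble-mean
# error stays near the shared bias, not near zero.
print(f"ensemble mean minus truth: {ensemble_mean - TRUTH:+.2f} K")
print(f"inter-model spread:        {spread:.2f} K")
```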
This is fantastic. I’m focusing on the ozone stuff, but Jessica O’Reilly, a post-doc from UCSD who I’m working with, is looking at uncertainty in the IPCC reports, so she will be very interested in hearing this. We’re co-authoring a paper about uncertainty — about how uncertainty has been handled in assessments — and I’m having a really hard time with it. I’m looking at the ozone assessments, mostly the ones that are officially associated with the Montreal Protocol.
Well, the uncertainties, that’s a question that goes very, very deep. The easy thing to say is what I said earlier, that you don’t have Gaussian and symmetrical probability distribution functions, that you always have skewed ones. The reasons for that go very, very deep, and in fact I’ve written a book on it.
It’s called Atmospheric Turbulence; it was published a year ago.
Oh right, I saw that when I was looking online.
Yeah, Oxford University Press wanted me to put that as part of the title, so I have, and there are a couple of chapters on the molecular dynamics perspective. It goes into the reasons why you have skewed PDFs. There are other systematic problems with the models. I mean, if you look at the upper stratosphere, for example — look at the NOHR analysis, the high resolution one, half a degree by half a degree, and look in the middle of the southern winter, round about the end of June, beginning of July, in the southern hemisphere upper stratosphere. The winds reach 200 meters a second. That’s Mach 0.7. The basic assumption that goes into an Eulerian formulation of a numerical model to solve the Navier–Stokes equations of [???] is that the most probable molecular velocity is much larger than any organized fluid velocity, the hydrodynamic velocity. Well, if the hydrodynamic velocity is Mach 0.7, that means that the organized flow velocity is 70% of the most probable molecular velocity, and that is a clear breach of the basic assumptions. It’s almost certainly connected to the other thing about the temperature problem, because the other big problem in the models in the stratosphere is that, as we know from satellite and aircraft data, the polar winter vortex — the thing that plays a huge role in the ozone hole and so on — in the mean is roughly circular, with the edge around where the permanent edge of the ice is in Antarctica; the vortex boundary is somewhere in the lower to mid 60s latitude. That’s near the tropopause at 20 kilometers, in the lower stratosphere. As you go up towards the stratopause at 50 kilometers — if this is the pole, the vortex is shaped like that — the maximum wind, crudely the boundary of the vortex, is at about 35 degrees south at 50 kilometers. So the real vortex is like that. You take those climate models and run them freely.
I’m not talking about operational meteorological models — they’re treated like a Strasbourg goose, you know, constantly fed with data so they can’t go very far off the rails; that’s how those work. That’s really another major deficiency in the climate models. That also was in the 2001 Chapter 8 of the IPCC and wasn’t mentioned in the 2007 version.
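[Editorial note: as a back-of-envelope check on the Mach 0.7 figure quoted above — using an assumed temperature and standard constants, not values from the interview — one can compare a 200 m/s wind with the most probable molecular speed and the sound speed of air.]

```python
# Back-of-envelope check (assumed numbers, not from the interview) of whether
# a 200 m/s winter-stratopause wind approaches the thermal speed of air
# molecules, straining the scale-separation assumption Tuck describes.
import math

R = 8.314       # J/(mol K), gas constant
M = 0.029       # kg/mol, mean molar mass of air
GAMMA = 1.4     # ratio of specific heats for air
T = 260.0       # K, an assumed upper-stratosphere temperature
WIND = 200.0    # m/s, the hydrodynamic velocity quoted in the interview

v_most_probable = math.sqrt(2 * R * T / M)  # most probable molecular speed
c_sound = math.sqrt(GAMMA * R * T / M)      # speed of sound

print(f"most probable molecular speed: {v_most_probable:.0f} m/s")
print(f"speed of sound:                {c_sound:.0f} m/s")
print(f"wind / molecular speed:        {WIND / v_most_probable:.2f}")
print(f"Mach number:                   {WIND / c_sound:.2f}")
```

With these assumptions the wind comes out at roughly half the most probable molecular speed and around Mach 0.6 — the same order as the figures quoted, and either way large enough to strain the scale separation the Eulerian formulation assumes.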
I guess that’s too recent for the ozone assessments? There’s the 2006 coming…
There’s a new ozone assessment thing that’s just turning up. I mean they’re meeting at the Royal Society in London right now.
So this is the kind of stuff that pretty much everybody knows about; it gets discussed at these meetings, but it doesn’t make it into the assessments.
Oh no. There’s an ability and quality distribution among scientists just as there is among any other thing you can think of, and some people — There’s a tendency to treat models as sausage machines that produce sausages because the people who pay for it want sausages. Again you need to look at the process: how do you do it? Are you doing it right? And you have to keep asking questions. Again, another criticism of the assessments is that they discourage people from asking questions that aren’t focused on the particular objectives of the assessment.
That reminds me of a concern that Michael Oppenheimer, my boss, has raised, specifically to do with climate modeling. He’s curious about whether there’s a perception that eventually a model gets to be good enough — meaning you’re not going to keep tinkering with it; you’re just going to black-box it and use it to generate data. The question is, what is good enough? Some new question might come up that requires more tinkering.
There is a famous experimental scientist who worked at the Aeronomy Lab called Arthur Schmeltekopf, widely regarded as an experimental genius, and really the guiding mind behind the instruments built at what was then the Aeronomy Lab. I leaned on him very heavily for those polar ozone missions. He had a thick Texas accent — he was six foot five with a crew cut, an ex-football player — and he said, “Hell, you can’t validate a goddamn model; the best thing you can do is test it.” And that is absolutely correct. There are no two ways about that.
There’s a paper by my other boss, Naomi Oreskes who’s a historian of geology. I don’t remember the title but it’s something about how models can’t be validated.
Yeah, that’s right. I mean, that’s completely correct. Yeah, it’s a long time since I talked to Michael Oppenheimer. I did a bit in the late ‘70s and early ‘80s when all of this was in its infancy, but I haven’t seen him in quite a while. I get very worried about a branch that’s distantly related to assessments, and that’s this fascination with geoengineering. That’s disastrous, potentially disastrous. I mean, we don’t know enough to titrate one pollutant against another. Yet if you take models, of course, you can always write down some equations, and you can always solve them on a computer. The question is how much credibility you can put in it. Models are nowhere near good enough to do it, nowhere near.
Do you think they’ll ever get to a point where they are?
I have no idea. I don’t know. There is a way of looking at it. The best meteorological forecasting model, by common consensus, is the European Centre for Medium-Range Weather Forecasts. They’ve got better computers, they’re running at high resolution, and they’ve had very, very good people working on it. If you feed that with initial data, integrate the equations forward, and treat it as a problem in computational physics, you run out of skill somewhere around five or ten days. Very occasionally, in certain circumstances, you’ll reproduce a local area — like an ocean basin with a blocking MV [?] cycling over it — for maybe 20 or 30 days, but on average you’re running out of useful skill after five to ten days. And those are the very best models. They have high resolution, the fluid mechanical code is really much more clear; they’re running with spatial resolution in fact an order of magnitude better than climate models. Now you run a climate model — but first of all, climate models are not very good at forecasting the weather. They don’t have the resolution, and they’re not set up to ingest the data and homogenize it and try to eliminate all those inconvenient things like gravity waves running around screwing up the solution. Nevertheless, there’s a huge logical gap there. The assumption is that climate models are good enough — that they’ll converge to some sort of future steady state, irrespective of the initial conditions that you start off with. That is one of the things that doesn’t get discussed too much, and it really should be. If you look at all the different climate models, all you’re really finding out about is the random error rather than the systematic error in the models, and it’s the systematic error that matters.
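[Editorial note: the five-to-ten-day skill horizon described above can be caricatured with a toy chaotic system — the logistic map, standing in for a weather model only by analogy. Two almost identical initial states track each other briefly, then diverge completely.]

```python
# Toy analogy, not a weather model: in a chaotic system a tiny
# initial-condition error stays small for a while, then grows until the
# "forecast" is no better than a randomly chosen state -- the analogue of
# a deterministic forecast running out of skill.
def logistic(x, r=3.99):
    """One step of the chaotic logistic map."""
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001   # two nearly identical "initial conditions"
divergence = []
for _ in range(50):
    x, y = logistic(x), logistic(y)
    divergence.append(abs(x - y))

print(f"error after 5 steps:       {divergence[4]:.1e}")   # still tiny
print(f"largest error in 50 steps: {max(divergence):.2f}")  # order one
```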
If you do something that’s mathematically indefensible — but nevertheless people do it — you take a particular result, say what’s it going to look like at 2100 under scenario A1 or some such bloody thing, and you’ll get a probability distribution. At the low end, some solutions will only give you a mean global surface temperature warming of maybe 2 Kelvin. The most probable will be around 3 or 4, but out at the ten percent probability level there are 10, 11, 12 degree solutions. Nobody knows what’s right. Bookmakers drive Jaguars and wear fur coats, betting on their own horse races. I know it’s not a horse race, but there’s a very significant probability. I would argue that any responsible government cannot ignore a ten percent probability of a 10 or 11 degree rise in surface temperature. But the discussion’s not going to be rational. You can tell — I mean, we’ve had eight years of lunacy, basically, the denial of fact. It’s not a matter of opinion, it’s fact. They just denied that it was happening. It’s dreadful! Hopefully that has passed and it’s not going to recur.
That would be nice. Something that we’re struggling with understanding, trying to write a paper about uncertainty and assessments — and something that I’m sure all of the assessment writers are struggling with — is that there is a tension between what I guess I want to call scientific conservatism and maybe social or political conservatism. On the one hand, as scientists, you don’t want to be a scaremonger and you don’t want to emphasize something that only has a ten percent chance of happening; you want to present the probabilities in a realistic way. But on the other hand, to be socially responsible, we need to make people aware of the possibility that a 10- or 12-degree temperature rise could happen. You could make the argument that if you don’t emphasize that over the lesser changes in temperature that could occur, you won’t be listened to.
Yeah, I think some of that is going to happen inevitably if you look at the history of the ozone. When Sherry and Mario published their paper in 1974, their prediction was for a 10 percent decrease at the end of the 21st century in the total column, located at low latitudes and mostly in the upper stratosphere. What we got was 50 to 100 percent reduction in late winter and early spring over Antarctica in the lower stratosphere. So there’s no way that can qualify as a good forecast from a societal point of view; however, scientifically it’s good enough to win a really justified Nobel Prize.
Yeah, well and they predicted there would be a big problem and there was a big problem. Something needed to be done.
Yeah, but it was nothing like — I mean, the magnitude of the problem was something they grossly underestimated, and they’ll be the first to tell you that. Nevertheless, provided the scientific debate is honest and proceeds in the spirit and methodology of science — hypothesis, trial and error, testing, and it stays a hypothesis until it’s adequately tested, when it becomes a theory — things can work. The real worry is: will the assessment process become sufficiently straitened and elephantine that it’ll restrict that? I think it’s a real worry.
Does it really constrain the science itself? I mean the actual research that people are doing when they’re not at the assessment meetings?
Not directly, but indirectly I think it does. The indirect way is that, depending on the agency and to some extent the country, what the assessment does is create a sort of directionality about what’s relevant when it comes to funding grant proposals. People who are going to upset the apple cart, or be heterodox, or be out at 5 or 6 sigma on the skewed distribution — they get chopped off. Even outside the context of socially relevant science there is a big worry, which is that even in the funding of ordinary science the true originality gets chopped off. The really bad stuff gets chopped off at minus 4 or 5 sigma — there aren’t many of those anyway, but the really bad ones. Of course the peer review system filters all that, but it also filters out the true originality. Back about 10 or 12 years ago, I sat in my office over in the Aeronomy Lab there, and in the space of six weeks I had three visitors. Two of them were getting honorary degrees from CU: that was Joe Farman and Jim Lovelock. The third was Sherry Rowland, who was visiting [???]. You know, I’ve known those guys 37 or 38 years now. And we were sitting there nattering about various things, and each one of them said, “The work that I did that is so famous and started off so much research would never get funded now.”
Wow, that’s scary. Going along with that is all of the alternative funding that can be found for stuff that calls itself science but isn’t at all, all of the creationist junk.
You can’t have a serious discussion with anybody who believes that stuff.
That’s something to think about for sure. I understand and I agree with you that uncertainty is not making it into the assessments and maybe it should…
Well it does. They always discuss uncertainty because they have to. The pressure from above, from the political process, is to have numbers inserted, and that’s where the problem is. In a coupled nonlinear system, which the system manifestly is, it is desperately hard to produce error estimates, particularly when it’s systematic error that you’re dealing with. The random error is relatively easy to do, but the systematic error is by definition almost what you don’t know, what you can’t do. We simply don’t know. It’s a big bone of contention, and it will continue to be a big bone of contention and I don’t see any way around it.
That’s what I was going to ask you next — if you have any idea what can be done about that. There is that tension: politicians need information firm enough to make decisions on, which really has to include some kind of numbers; but on the other hand, to be honest to the data, there’s all this uncertainty that you want to present.
Yeah, there is a perspective on that that you can get from the history of the ozone assessments. Let me think before I start talking. Yeah, I mean, however much experimentalists and observationalists might not like it, the models are the only mechanism for providing a currency at the science–politics interface. How you view that currency is really important, and it continues to be a very debased currency. I guess that’s just a colloquial way of saying what went on earlier, but with the CFCs thing the observations lagged: after the Molina and Rowland paper in Nature in 1974, things started to wind up pretty quickly. You can wind up a model much more quickly than you can a good observation program. The models already existed because of the Concorde and the SST issue. You can trace this tension between what we know from direct interpretation of observations using known physical and chemical principles, as contrasted to model predictions. Of course, in the 1970s one-dimensional models were the currency, and two-dimensional models were a bit expensive to run, but once you knew what you really wanted to do you could run them for a bit. Of course that’s all moved on; nobody uses one- and two-dimensional models now. Nevertheless, from 1974 to 1986 we went through 12 years where there were DuPont, skeptical politicians, and skeptical scientists — who were justifiably skeptical — who could dismiss the model thing, saying the models aren’t good enough. I published a paper saying that one- and two-dimensional models would never get there, that they were basically useless for quantitative forecasting. As soon as we had observations — Joe Farman’s paper — what that gave us was a very real luxury that hopefully we’re not going to get with global warming. We had a 50 to 100 percent signal that was testable, and because Watson had funded Ed Danielsen to do the investigation of how the air had got dried [??]
in the stratosphere, all we had to do was add Jim Anderson’s ClO instrument to that payload in 1987, and there we were: we were set up to test the mechanism, the PSCs and the chemical balance. As soon as that was done, things started to happen in an avalanche. I remember (I don’t know how much of this you want to hear; some of it is anecdotal), but there was a meeting at Boulder in March of 1986 after Joe Farman’s paper came out. At it, Susan Solomon talked about her idea on the heterogeneous chemistry, and Brian Toon hadn’t gotten that far, but he had gotten the idea that nitric acid would be incorporated into ice crystals. He thought maybe HCl would too. Of course, if HCl had done what nitric acid had done, we would have had a de-chlorinated vortex as well as a de-nitrified vortex, so we wouldn’t have lost the ozone. That was a pretty heated meeting. I lost a certain amount of respect for some people after it. That idea was clearly Susan’s, and she came under vitriolic attack from two individuals: Mike McElroy and Paul Crutzen. They had very different reasons for saying that it couldn’t possibly work. When they went away from that meeting, within a few weeks or months, they submitted papers to Nature saying the same thing. This far down the road I’m willing to say this: I reviewed all those papers for Nature, and I thought that was unethical behavior. The scientific community doesn’t need to start acting like that. Of course, it’s easy to forget the sort of fevered atmosphere there was from ’85 to ’87. At that meeting, Art Schmeltekopf stood up and said, “Hell, all you need to do is put Jim’s ClO instrument on the ER-2 and we’ve got a test.” He said, “We’ve got this UV-visible spectrometer that we’ve been looking at NO2 and ozone with, and it should do OClO as well.” And it did, so that was how Susan went down in 1986 on NOZE I.
The missions that were planned for the ER-2 were to look at the tropical cumulus from Darwin in January and February, and there was a vote and all the PIs said, “Yeah, we’ll put Jim’s ClO instrument on and go down to Punta Arenas in August and September,” and that’s how that happened. I remember, we did the test flights at Ames in August, and I and two or three of my engineers and a meteorologist from the British Met Office went down to Punta Arenas four weeks before the planes got there to set the thing up and make sure everything worked. August 12th was the day the planes were supposed to leave NASA Ames, and I got a phone call down in the hotel in Punta Arenas, and it was Bob Watson and he said, “We’ve got a problem. The State Department has given the Chilean embassy an answer to a question they asked. They said, ‘Who’s liable if the DC-8 and the ER-2 collide over the middle of Punta Arenas?’ and they said, ‘You are.’” Obviously this wasn’t acceptable to the Chileans. So Bob said, “It might be called off. They might stop the planes from going.” Well, the guy who was in charge of the high altitude missions branch of NASA Ames was a guy called Jim Shervanou [?]. He’s dead now, but as a young man he had flown over the Soviet Union in U-2s, and he was not impressed by bureaucrats who do things like that. He had some pretty good contacts amongst the national TV stations in the Bay Area, so he called them up and he said, “Do you know the planes are all leaving for Punta Arenas to explore the ozone hole tomorrow morning? If you want to come on the base with your cameras you can.” So all the TV crews were there, the planes were in the hangar, and these guys from the State Department phoned him up and they said, “You’ve got to cancel it.” And he said, “Well, we’ve got national TV here. They’re waiting to see NASA’s airplanes go down to explore the ozone hole.” He said, “You can explain to them why I’m not going to go.” They backed down. That’s how it actually happened.
That’s fantastic. Wow, that’s exciting! I’m really interested in the heterogeneous chemistry question, because it seems to be something that people sort of had in the back of their minds, but either thought it probably wasn’t important in ozone depletion, or if it could possibly be important it was really hard to test. So they just sort of didn’t think about it until suddenly the Antarctic ozone hole is discovered, and then it pops up again. I know I need to ask Susan about this directly, but I’m really interested in how that came up. Even before going to Antarctica, people like Susan had theories that heterogeneous chemistry was the cause.
Well, anybody who has worked in the laboratory on gas-phase chemical kinetics, and that includes Bob Watson (who did his PhD for my supervisor’s first research student, so my academic father is his academic grandfather) and a whole load of people, Sherry Rowland, Mario Molina, who all worked on that kind of thing — all of those people know in their bones that you always have to worry about surface reactions. Of course the atmosphere has got a very different surface-to-volume ratio than a chemical reactor. And everybody has always known that in principle the rate at which, say, an oxygen atom or a chlorine atom carrying a chemical kinetic chain in the stratosphere would collide with the ambient aerosol was about the same as the rate at which it would collide with its partner in the chain carrying the rate-determining step. In principle, everybody knew it, but nobody discussed it because nobody thought anything interesting would happen on sulfuric acid aerosol.
Okay, so people knew that the sulfuric acid aerosols were in the stratosphere, but they didn’t think that reactions would be happening on them, or happening fast enough to matter?
Yeah, well what happened was — it’s a long and tortuous history. There was a meeting at Feldafing in Germany, in Bavaria, in 1984. This was organized by Watson to try and have a discussion of where we were. It was meant to be the prelude to the first of the blue books, report number 16, the massive three volumes. Sherry Rowland reported some results at that meeting, where he said, “You know, HCl and chlorine nitrate don’t of course react in the gas phase, but when we passed them down a quartz flow tube, they both disappeared and we got chlorine out, so obviously they were reacting on the quartz surface.” He said, “I can’t see that this is of any relevance to the stratosphere because there are no quartz particles in the stratosphere.” All right, so that was 1984. But it did trigger a hunt for heterogeneous mechanisms that would get the chlorine out of the HCl and chlorine nitrate and into the reactive forms. But it was a very low level, nobody’s-interested-in-this, long-shot kind of thing. Of course that all changed when Joe Farman published his paper. I don’t know if you know the history of Joe’s thing?
A little bit, I talked to him a bit.
Joe Farman, my former PhD supervisor — I did my PhD for Brian Thrush from 1965 to 1968 at Cambridge. After I had been at UCSD, I went back to the UK and got a job in the Met Office. My job was to put photochemistry in the meteorological models to model the effect of Concorde. We wound up doing it for nuclear weapon testing in ’61 and ’62. Again, that’s an observational thing. However much the models predicted that 2,000 Concordes would wipe out so much ozone, the fact is that there was so much nitric oxide from those nuclear weapons in ’61 and ’62 that there should have been a massive reduction in the ozone, but there wasn’t; it increased. It was probably just a coincidence, but there certainly was no evidence for a massive reduction of the size that the models were predicting. There again, right from the very early stage — that goes back to ’73 — there’s this tension between what you deduce observationally and what the models predict. That led to some very well-grounded skepticism about the models. Now, from the point of view of the heterogeneous chemistry, Joe Farman and my supervisor Brian Thrush: there was a scheme in the UK called CASE awards, Collaborative Awards in Science and Engineering. What it was, was an academic could jointly supervise a PhD student at his institution with somebody nonacademic. Joe Farman had all this very carefully collected and calibrated ozone data from Antarctica that he refused to put in what he regarded as a refuse tip, the red box, the WMO’s, and quite rightly.
The Ozone Trends Panel?
No, no, the WMO always had this thing called the red box, this was before computer archiving, this red box that was a record of all the ozone observations around the world. Joe knew that a lot of those were very, very bad and he wasn’t about to put what he regarded as his high-quality data in with all that rubbish. That was why he was the only person who really had the data from Antarctica. He had noticed something, which was that there was what looked like a photochemical decay through late summer and into autumn. It looked like a photochemical decay. It didn’t look like the dynamical turbulent break-up of the vortex; it was the reverse, the decay of the radiatively driven polar anticyclone. He noticed that there was this photochemical decay, and not only that, but when the darkness first started to return you got very short periods of darkness to start with, not at the pole but at the peripheral stations like Halley Bay. It goes from 100% sunlight, and eventually in midwinter it gets to 100% darkness, but you get this period where you get tiny little periods of darkness. There was a kink. There were two exponential decays. He wanted to investigate this because he rightly thought it might have to do with NOx chemistry, nothing to do with chlorine. They had this PhD student, a guy called Mike Sarniscus [?], who submitted a PhD in 1981. Because I was the Met Office’s photochemist and I was starting observational programs as well as modeling, and because I had worked for Brian Thrush as a research student, I was the external examiner for Mike Sarniscus. He did a good defense, he had done some modeling, and he came to the conclusion that it had to be photochemistry, but you also had to have downward sinking motion to account for the observations in addition to the chemistry. 
He did a pretty good job of defending it, and as is traditional when somebody defends a PhD, when you know that they’re going to get it, you test to destruction: “Let’s see how much this kid really knows.” So the other external examiners started asking questions, and I said, “Well, you know this is summer and autumn; what happens in winter and spring?” And he panicked, his eyes rolled wildly around, and he looked over to Joe, and Joe started coughing and he muttered something about it still being under investigation. But it turns out, of course, that they had seen the ozone hole already at that stage. 1977 is arguably the first year where the October minimum went down below the historical mean, and certainly by 1979, ’80, ’81, it was very, very evident. So Joe knew about it in ’81, and you know the story about how he wrote to Goddard?
Yeah, I know the story about how the NASA satellites detected it but dismissed it because they were programmed to disregard anything under 180 I think?
Yeah, that’s right. After Joe’s paper came out, Watson went down to Goddard and said, “Haven’t you looked at a map of where your error flags are [?]?” And he said, “No.” And he said, “Well, let’s have a look,” and of course they were all over Antarctica. [Chuckles] Anyway, so Joe knew about it for four or five years. I mean he went through cycles of changing out the instruments, cross comparison, cross calibration, and he asked Derek Miller, who was the resident expert on Dobson spectrophotometers at the Met Office, “I don’t know whether to believe this! Should I publish it?” and Derek said, “You’ve got to publish it.” I got to review it, and I said the chemistry is off the wall but the observations are a revolution. Then, of course, as soon as that was published it reignited the interest in how you can get the chlorine out of the HCl and chlorine nitrate and into the reactive forms. The very first measurements that were done on sulfuric acid droplets had been done in — well, there had been speculation about it in 1975. Dieter Ehhalt…
Oh is that Cadle, Crutzen, and Ehhalt… the model?
Yes. There had been some speculation about it. And actually, while I was still at the Met Office, John Austin worked for me. He’s at Princeton, I don’t know if you know him; he’s just finished his term as editor-in-chief of JGR Atmospheres. But he was a young scientist working for me at the Met Office at the time. We were trying to do chemistry along trajectories, to get away from all the mess of Eulerian models and get something that preserved the three-dimensional, time-dependent nature of the problem, and which could be compared against any given observation: you could run the chemistry along a trajectory, starting the trajectory back from where the observation was. Anyway, we were doing this to widespread skepticism, and we got the LIMS data from the Jim Russell and John Gille LIMS experiment on Nimbus 7, and we had the nitric acid, and John discovered that there was huge production of nitric acid in the polar vortex in the dark.
Did you have the polar vortex in your climate models?
Oh yeah, I mean the trajectories went around and around and around like that. Of course there’s no explicit dissipation in a Lagrangian model — though you can never escape dissipation entirely. I was talking to Susan about it at this Feldafing meeting, and we said we’d gotten this result and we thought perhaps it happened heterogeneously on particles. So there’s a paper in 1986: Austin, Garcia, Russell, Solomon, and Tuck. Jim Russell was the PI of LIMS, Rolando Garcia and Susan Solomon were running their model, and John Austin and I were doing this at the Met Office. So we got our heads together and said, “Oh, there’s a story here.” So we published this paper, and we were damn nearly there. The only thing that was missing was applying it to HCl and chlorine nitrate. It naturally led to Susan having the idea that in the end was on the right track. David Golden had a research student, Maggie Tolbert, and they did these experiments on whether sulfuric acid droplets in the lab get the chlorine out of the HCl and chlorine nitrate. And they got a null result. It didn’t. All the focus went onto the nitric acid trihydrate, and later, when the experiments were redone, they got much bigger recombination rates just on liquid sulfuric acid aerosol droplets. People have different views of it, but it’s fair to say that there’s no agreed consensus that processing above the 195 K threshold temperature really does anything.
That’s just what I was going to ask.
Some people believe that it does and others believe it doesn’t. The people who believe that it does tend to be the laboratory people and the modelers. The atmospheric observation people don’t believe it. David Fahey doesn’t believe it. I don’t believe it. It’s highly controversial. If you go through the papers, there’s no specific observational evidence that anything happens above that 195 K temperature. There is generalized modeling from Eulerian models, but they spread everything around so you can’t tell what’s causing what.
Right, and there was a Molina and Molina paper in 1987, maybe I’m thinking of the wrong paper, but there was one where Molina was involved in this, where they were testing some sort of reaction with HCl or something, and they thought that the heterogeneous reaction didn’t happen, but they were only testing it a few degrees below freezing.
Yeah, that’s right. There’s a load of people who think that something happens between 195 and some number like 201 or 202 K. I don’t think there’s any evidence of that at all. In fact, I think the evidence goes right the other way.
That it happens at any temperature?
No, I don’t think it happens at those warmer temperatures unless you have a lot of water. If you’ve got a lot of water vapor — I mean the key to this catalysis is how much water there is available on the surface. The more water there is, the more efficiently it catalyzes the thing. Now, when AASE II was done out of Bangor in Maine in ’91 and ’92, there’s a paper that Eric Keim [?] and Dave Fahey and their bunch wrote. They found a layer — when the plane took off from Maine there was a layer of air that had elevated ClO. Of course it was at the time of the volcanic eruption, so there was a lot more aerosol in the mid-latitude stratosphere than there ever is normally. It’s mostly sulfuric acid aerosol. It’s absolutely clear that there’s elevated ClO and low NOx, which are the signatures of PSC processing, in this chunk of air. But it was just above the tropopause, and it had about 20 parts per million of water — not 3 or 4 parts per million of water. There’s a condition on it: you do get some processing, but you have to have more water there than typical water mixing ratios. There’s a paper by Darrin Toohey [?] as recently as 2007, when we did some B-57 flights out of Houston and accidentally came across this layer over Wyoming where the ClO was elevated — not by very much, maybe only 10 parts per trillion, but it was sure as hell elevated compared to the background. The best explanation for that is a cold, relatively moist event processing right at the very, very bottom of the stratosphere, but you have to have the extra water. It won’t work with 3 or 4 parts per million.
I’m really interested in the different things I hear about what people did and didn’t know. Like Rowland said he knew that there were no particles in the stratosphere, so even though he knew about heterogeneous chemistry, he didn’t think there was anything for it to happen on.
Well he should have read the literature a bit more carefully because particles have been known in the stratosphere since the first episodes of nuclear weapon testing.
Right, and Crutzen told me about the Junge layer, which is named after…
Right, his predecessor. But he didn’t think it mattered much because he thought it was lower than where most ozone depletion was happening and that there just weren’t enough sulfate particles for it to really matter, and then there were all these laboratory tests that seemed to show that either heterogeneous reactions weren’t happening, or if they were happening they were happening on laboratory surfaces, which may not mean there are surfaces in the stratosphere for stuff to happen on.
That’s the “there’s no quartz in the stratosphere” thing.
Exactly. And then the polar stratospheric clouds, Joe Farman knew about them since the 1950s because he had been studying Antarctica that long. They weren’t called PSCs then; I think they were mother of pearl clouds or whatever, but then other people didn’t know that they were there so they didn’t think that there were particles in Antarctica.
It doesn’t matter scientifically, but there are two historical names for clouds in the stratosphere in the Polar regions and they’re very different beasts. One is iridescent clouds, and those are the classic PSCs that are mixed. They’ve got nitric acid and water and they condense at temperatures 5 to 8 degrees warmer than ice. Then there are mother of pearl or nacreous clouds, and they are rather higher up and they are just water. You need frost point to get them.
Which one is type I and which one is type II?
Type II is the mother of pearl, nacreous one, the water ice clouds, and the Type I’s are the mixed-phase nitric acid and water. Again, controversially, the sulfuric acid plays some rather unknown role as a condensation nucleus. Something that has happened subsequently is with Dan Murphy in the Aeronomy Lab, who — when I retired my program split into two. David Fahey became head of part of it, and then Dan Murphy became head of part of it. One of the instruments that was built was a thing called PALMS, Particle Analysis by Laser Mass Spectrometry. It was funded by NASA and it was built by Dan Murphy at the Aeronomy Lab. The idea was to get the chemical composition of the particles. Now it has never flown in the vortex, which is what it was designed to do, to fly in the vortex and get the chemical composition of the PSCs. Things moved on, and what became important was the general composition of particles in the lower stratosphere and upper troposphere. That led to the discovery that 50-60% of the material in the upper troposphere was organic. The other thing is that 50 or 60% of all the particles in the stratosphere have meteoritic material. Actually, on the same flight on which Darrin Toohey discovered the ClO over Wyoming, in April 1998, the B-57 out of Houston, there was a big chunk of vortex air peeling off and the plane flew through it. And, you know, the ozone was up and the tracers were down. But we actually got particle analysis, and you can estimate how much that air had been diluted since it left the vortex. The conclusion you come to is that basically 100% of the particles in the vortex are meteoritic. The reason is that micrometeorites come into the top of the atmosphere and burn up in the mesosphere, up at 100 or even 200 kilometers. The circulation of air in the mesosphere has been known from nuclear weapon tests in the 1960s. They exploded one or two nuclear weapons at 400 kilometers to see what would happen. 
What happens is that all the stuff up there in the ionosphere sinks down in the winter vortex; it is sucked down. So all these little micrometeorites, they impact in the mesosphere between 50 and 80 kilometers, and they burn up. They just vaporize. All the metal, carbon, stone, everything vaporizes. Of course the air is very thin up there, and they immediately re-condense to form a very uniform, sort of 20-nanometer-sized population of particles. Basically meteorites ablate and evaporate, and then they re-condense and form these 20-nanometer particles that have negligible fall rates in the stratosphere under gravity because they’re so small. They only fall very, very slowly. Up in the mesosphere there’s almost no air, so they fall like bricks until they get into the top of the stratospheric vortex, and then they come down and go out. In the meantime, all the sulfuric acid condenses on them, so the sulfuric acid aerosol is really condensed sulfuric acid vapor that’s nucleated around meteoritic material. As for what that does chemically, stratospheric ozone chemistry is sort of… it hasn’t been abandoned, but it’s very definitely had the wheels taken off it. That would have been a very interesting and hot question, but it isn’t now. If you put in a proposal to say, “I want to put up an airborne mass spectrometer that will chemically characterize the aerosols in the PSCs,” you aren’t going to get any support.
Why is that? The perception is that the ozone problem has been solved? [Yes]. We don’t need to study it anymore?
It’s on a care and maintenance basis. There’s still a stratospheric research effort, but it’s funded at a low level, at a base funding, and there’s none of the $10 or $20 million transfers we got to do aircraft missions.
Right. That’s kind of alarming in a way, because you’d think the example of the Antarctic ozone hole would demonstrate that there is the possibility that we don’t know everything.
Well, however convenient it is as a boundary, the tropopause is not a physical barrier. It’s becoming increasingly obvious that the lowest 4 or 5 kilometers of the stratosphere have a very significant role to play in climate change. Just the water content: when you go up through the troposphere, the water doesn’t suddenly go from 50 parts per million to 5 at the tropopause; there’s a transition spread over about 3 to 5 kilometers. There’s enough water there that it dominates the water content of the stratosphere. There’s far more water there simply because of the number density, but it’s also significant radiatively. It means that you cannot treat the tropopause radiatively as the top of the atmosphere. The water content of the very lowest stratosphere matters for the radiative balance, and hence climate. You do get cirrus clouds formed above the tropopause, and cirrus clouds, as we all know, are very important. There is nothing magical about the division of mass between the stratosphere and the troposphere; it’s the balance between dynamical and radiative and convective processes, and it happens to be where it is. It can most certainly vary, and I think the bottom-line message is that the stratosphere is irreducibly part of the atmosphere that matters.
I’m interested also in the changing metrics that have been proposed to deal with ozone depletion, like the ozone depletion potential, the chlorine loading potential, and the new EESC. Were you involved with those at all?
I was involved in sneering at them as scientifically inadequate and a sop to the politicians. I’ve got a very low opinion of their merit because they bury some important parts of the science. If you scale everything to F-11, you’re making an assumption that it’s all linear, and it isn’t.
Right. It’s like the model uncertainty again.
I mean politicians need to think in terms of weight and mass, metric tons and stuff like that, and not molecule for molecule. There’s a paper that was published in JGR Atmospheres in 1992. It was a collaboration between NCAR and the Aeronomy Lab. Basically Susan Solomon used her model, and I put in a few ideas. It was basically looking at what’s the effect of polar ozone loss on the conventional calculations of ozone depletion potential. The reason that it was so effective was because — you know what the self-healing effect is. Under the conventional gas-phase Molina and Rowland mechanism, if you put some chlorine up there and it destroys some ozone, the UV can penetrate down a bit further through the lost ozone. But it produces more ozone immediately below it, and that’s called the self-healing effect. Now of course, when you have loss in the polar vortex that’s basically driven by very low-angle sunlight in early spring at very low altitudes, that ozone loss is happening below the ozone maximum over the rest of the globe, so it comes down and out and it undercuts the self-healing effect. That’s one of the reasons that the polar ozone loss is not confined to the polar vortex. It is not a contained removal. It spreads out to mid-latitudes. That’s how it happens; it undercuts the self-healing effect, and that sort of meant that things like ozone depletion potentials had to be recalculated, because where we were flying the ER-2, right in the midst of the polar ozone loss, the F-11 had got to almost zero, one or two parts per trillion. There was still quite a lot of F-12 and some of the other stuff left. It screwed up all the ODP calculations. I understand the sort of political and engineering necessity for things like this, but you shouldn’t take them too damn seriously, and certainly not accord them much scientific weight.
So is that why the CLP started to be used? Because ODP suddenly got inaccurate or hard to calculate?
Well, it was partly that, and partly the fact that some of the halons and various other things were coming online. There was also a lingering question about, well, we’ve put [???] now on all the molecules that are allowed to be manufactured as refrigerants, so that they would mostly be wiped out in the troposphere. And that’s true, but we come back to where [???] and the tropical cumulonimbus short-circuiting all that: you can put air from the surface into the lower stratosphere in an hour or two in a deep thunderstorm. That’s not enough time for the OH to wipe out these things, so some of the chlorine replacements (there are halogens, not just chlorine, from the replacements and from the halons and from methyl bromide) again get transported up into the stratosphere. Now I don’t think it’s a very big effect, but if India and China start using replacements on the same scale as was being done in North America and Western Europe, then it could be a problem.
So these are things that are usually thought to be at least partially removed in the troposphere, but under special circumstances they can get up to the stratosphere.
Yeah, but it’s only a small fraction of the total.
And how about the EESC?
I got the impression, and I’ve mostly been told by people that I’m nuts, but this is how I read the paper: the 1995 paper by Solomon and others that first proposed the EESC. To me it looks like they’re putting it out there because it allows conversion into GWP. It’s a nice handy metric for chlorine loading, but it also presents it in a way that you can get GWP from it, which is increasingly important.
Yeah, yeah. And it’s a convenience for modelers and the people who use the model results. That’s also true if the things get replaced. There are a lot of people at the science-politics interface who are not scientists and who can’t understand anything more complicated than simple arithmetic.
Right, you can’t get a straightforward conversion to GWP from CLP though, or ODP. [No]. Okay, good, I’m not crazy!
But, you know, any properly formulated model is going to have the things represented properly scientifically anyway.
Do you feel the same way about this focus on pre-1980 levels that’s showing up on all the new graphs? The 2006 scientific assessment panel report seems to present all of its figures in terms of a gray bar that shows when chlorine and bromine levels are going to get back to 1980 levels.
There’s this whole thing of how does the countervailing effect of CO2 increases, and its amelioration of ozone loss, work? I’m going to be immodest here. I published that in Nature in 1978. It was only with a one-dimensional model, but it clearly had the mechanism right; it had the CO2 right. And John Austin and I, and Keith Groves, followed that up with some full-length papers in 1980 in the Quarterly Journal of the Royal Meteorological Society. That was 1978 where we looked at CO2 and ozone, and then in 1979 we had a paper in Nature that included the CFCs and the chlorine chemistry. We came to the conclusion (this of course is way pre-ozone hole or anything like that) that if you took the conventional Molina and Rowland gas-phase chemistry, the CO2 increases that were projected would compensate the ozone loss out to about 2030. It was 10 or 12 years ahead of its time and it got widely ignored, but we buried it as a review article in Nature. In 1990 (I had left the Met Office by this time and John Austin was doing three-dimensional modeling), he and Neal Butchart [?] and Keith Shine had a paper saying that when the CO2 is doubled you could have an ozone hole in the Arctic. See, the CO2 up in the stratosphere, in the conventional gas phase, cools the upper stratosphere, and that’s happening — the evidence is very clear. Doubling it will drop the temperature about 15 degrees up there; it drops the temperature a lot less in the lower stratosphere, but it does still drop it. Once you know about the polar vortex and PSCs: you drop the temperature and you increase the PSC prevalence, so you make the ozone hole worse. It gets to be a much more complicated thing.
What year did you say that paper was?
About 1990, 1991, somewhere around there. Then Drew Shindell and the people at GISS had a paper in ’95, ’96, ’97, somewhere around there, where again they made a big song and dance about it. Of course with three-dimensional models and 20 years down the road they could do a much better job, but the basic fact of the matter is that with CO2 increases — of course the CO2 cools the upper stratosphere, and the ozone production reaction, O + O2 + M, has a negative temperature dependence. So CO2 radiatively cooling the upper stratosphere increases the ozone production. That we published in 1978. That’s not going to go away, and the polar ozone just makes that coupling much stronger, because the effect the temperature has on the PSCs is highly non-linear.
That non-linearity that pops up in everything, that is a big problem with the uncertainty.
Yeah it is. People are used to thinking linearly, and the atmosphere is not linear. Just look at the equations.
Yeah, I guess I already asked you if you thought there was a better way to convey this stuff to the politicians, because that’s one of the big problems.
I think that what the scientific community needs to do is make a concerted effort to explain not the products of science — the public have seen magnificent, colorful movies of evolution at work, we’ve seen supernovae being born, the Horsehead Nebula, Saturn’s rings, every damn thing you can possibly think of, colored movies of the ozone hole. But we haven’t explained the process, why we’ve converged on this result, and why all science is provisional. Things that are observationally and scientifically based, where you’ve got a theory that’s been observationally tested, cannot be used like evidence in a legal argument. It is not optional.
Yeah, it can be challenged and it has been challenged, and that’s precisely why…
You can’t only bite off bits of it. You have to take the whole damn thing. In David Mermin’s phrase, “Science is not a thread that can be cut with a pair of scissors. It’s a cross-woven tapestry that’s very robust.”
I agree with that. I don’t suppose you could ever convince creation scientists that…
Oh, don’t get me on that. Veronica and I and a protein chemist at Cambridge called Chris Dobson and Barney Ellison [?] here, we had an accidental idea. It came out of Dan Murphy’s results with the — this is connected with the polar ozone hole indirectly. Dan Murphy built that PALMS instrument, and it was designed to observe the composition of PSCs, and it never got used for that; but put on the B-57 instead of the ER-2, we did all the tropical tropopause stuff out of Houston, and the really big surprise result was that 50-60% of the aerosol was organic. We were looking at this and thought, “Well, it’s so damn cold up there, minus 80, that organics aren’t soluble in water at minus 80 Celsius; how the hell can there be 50 or 60%?” We came up with the idea that they have to be surfactants: an organic coating around the outside and a soluble core. So, a surfactant molecule has a hydrophobic wax-like end and a polar end; you know how soaps work. What we said is that you’ve got the sulfuric acid aerosol. The polar heads are all going to be on the inside and the wax is going to be like a skin on the outside; it’s hydrophobic. Then OH and HO2 will attack this chemically and start to transform it to something soluble. We had a paper that was eventually published in JGR in 1999, I think, and it’s gotten a lot of notice. Chris Dobson is a leading protein chemist, I mean he’s one of the — he’ll probably get a Nobel Prize for what he’s done. He’s Master of St John’s College at Cambridge now. He and Veronica were assistant professors at Harvard in the Chemistry Department when they were young scientists. He’s kept in touch with Veronica, and he’d gotten some prize out at Stanford and he dropped in to see us, stayed with us overnight and had breakfast, and the preprint was there and he said, “Oh, that’s interesting. 
These things are the same size as the bacteria.” So, that led to a paper in PNAS a year later with Chris involved as well as the three of us saying, “Maybe this is how life originated.”
Right, I didn’t read them, but I came across these titles as I was Googling to prepare for our meeting. It looked really interesting.
Well, that led us into torrential abuse from creationist nuts. I mean God almighty, there’s no possibility of rational debate with those people whatsoever.
I had an epiphany about that one time when some Jehovah’s Witnesses came to my door, and I stupidly answered the door, and he was talking to me about theories about the universe and I had read this sort of semi-crackpot book called The Physics of Immortality by a physicist who has this theory that basically makes the Big Bang and the future of the universe correspond almost entirely to Christian faith. So I was telling the guy about that…
What in 4004 B.C. at 9:30 in the morning on the 23rd of October?
No, not that, not precisely, but that the point — You know, the singularity point before the Big Bang is for all intents and purposes God. You know, it’s God, everything, and that the big crunch will be like heaven and every being that ever existed or ever could exist will be in there. Most people don’t take it seriously, but I mentioned it to this guy thinking that he would love it; I thought he would jump right on it, scientific proof of his religious beliefs, and he wasn’t interested at all. I was dumbfounded. He said, “Well, that’s very interesting, but it’s a book written by a human being, and human beings are fallible. But the Bible was written by God. It’s the only reliable source, so that’s what I follow.”
Well my reply to that is, “Well, the Pope’s infallible and he was wrong about Galileo wasn’t he? And it took his successors 350 years to apologize.”
Yeah, but that’s what made me realize that they just won’t listen to scientists because they’ve decided that they don’t have to listen to the evidence.
Well they don’t understand the process.
Exactly, like you said.
If you go to Caltech and JPL to talk to any of those guys, you can really embarrass them over this. I detect a Canadian accent, right?
There’s a Canadian called Hugh Ross who is an astronomer. He did a bachelor’s degree in physics at UBC in Vancouver, then he did a Ph.D. in Astronomy at the University of Michigan, then he went to be a post-doc at JPL at Caltech and he’s still there. He’s a NASA employee, and he has a website, and somebody said, “You might be interested in this, Adrian!” So I said, all right, I’m always prepared to listen to one lunatic once. His argument was that the ozone layer was an example of intelligent design by God to protect us from the UV.
How is the ozone hole protecting us? Because it’s not spread all over…
No, no, the ozone layer. But this guy is a creationist. He thinks that it was all created in 4004 BC at 9:30 in the morning on the 23rd of October. He’s absolutely bloody nuts. He talks about Darwin as the antichrist and so on, and he’s a PhD astronomer at JPL, Caltech!
Yeah, that frightens me.
I don’t come across these guys nearly as much as I used to, but I never lose an opportunity to say, “How’s your friendly local creationist with his PhD in astronomy?” It must be a real embarrassment.
There was a paleontologist, I don’t remember the guy’s name or where he went to school, but he’s a Christian fundamentalist who got a PhD in Geology and then promptly went out and started working for some creationist science establishment using his scientific knowledge the wrong way.
Yeah, that’s that bloody museum in Kentucky that’s got human beings and dinosaurs cavorting around in the same landscape? [Yes.] Oh God. Intelligent, even rational debate is impossible with those people.
No, I completely agree. I’m not sure what else I was going to ask you. So, you were involved in the AAOE; were you involved in the NOZE experiments too?
No, that was right when I left the Met Office. The meeting in March ’86, I was still at the Met Office. I left the Met Office in July and came to the Aeronomy Lab. There was a meeting in August that Watson called, a regional scientific advisory committee that he had set up on this, and the committee had Mike McElroy, Jerry Mahlman from GFDL, Brian Toon, and one or two other people. Susan was on it and Arthur Schmeltekopf. The NOZE expedition was going to leave in two weeks’ time to go down. That was in August of ’86. And we also set up and discussed the payload for AAOE. NOZE went down and first of all showed that there was high OClO and not much NO2, and that was what they could do, but, crucially, they didn’t really have a handle on the ClO. After the fact they did, because Phil Solomon and Bob de Zafra had the microwave radiometer from the ground, but the ClO transition they were looking at was right underneath a vibrationally hot band from ozone. They didn’t really have the confidence to believe the results from NOZE I. Even after NOZE II they didn’t really, but once they knew that Jim Anderson was seeing a part per billion and more, that gave them a lot more confidence in their retrievals. There’s always been some tension about that because the NOZE people think that the NASA publicity machine and the aircraft mission took some of the credit away that they deserved. I don’t think that’s true. If you read the papers and the articles, if you read the AAOE mission booklet (that’s actually quite hard to get a hold of now), but if you read it we actually say there’s enhanced OClO and low NO2. The other thing that the microwave people had as a result, but they didn’t think it was important, was a negative result: they couldn’t detect N2O. That was because there’s so much downward motion that the vortex was full of air from much higher up in the stratosphere, where the N2O is depleted through the photochemical sink. Subsequently, it all made a hell of a lot of sense.
I think if you look at the AAOE special issues that were published in 1989, D9 and D14: the NOZE II expedition agreed to publish all their papers in the same issues as all the aircraft people. That was agreed explicitly to avoid a lot of argument and jealousy about who got there first and so on and so forth. Collectively it’s a much more powerful statement than either would have been individually. That’s my take on that story, but there’s no doubt that some of the NOZE people — Susan to some extent, but mainly Phil Solomon and Bob de Zafra — they really think that the credit was taken away from them. My take on it is that they didn’t have the guts to publish it the first time around, and once they knew about the aircraft results then they went ahead and published the full story. That’s my take, but I couldn’t prove it.
One of the first things that you said was about a person talking about scientific assessments, and whether or not they have an influence. I’ve got a related question: in general people tend to say that assessments don’t present novel scientific results, they just sort of collate what’s already out there. I sort of agree with that and I sort of don’t. I mean I don’t think they’re just throwing together results that have already been published and that’s it; I think there’s a lot more to it. Whether you’d call it original scientific claims I’m not sure, so there’s the general question: what do you think assessments do that scientific papers don’t?
I think it varied; it depended on the assessment. The first ones in 1986, there’s a tiny little paragraph in the back of one of the chapters that acknowledges the existence of the ozone hole, because they happened in parallel. I can remember the meeting at Les Diablerets, and they were going to say certain things and I said, “Look, I happen to know that there’s a paper coming out that will really embarrass the hell out of all of you if you go ahead and say that!” That period in late 1985, early 1986, there was a lot of really frenetic activity. I was chairman of Chapter 5, which was nominally the stratosphere-troposphere exchange thing. Together with Arlin Krueger, we were trying to show that the TOMS data showed really very explicit structures of where jet streams were and tropopause folds, and how you got air out of the stratosphere and into the troposphere. Of course TOMS became of great interest at that time — this was April 1986 — for two reasons: one was that Joe Farman’s paper was about to come out, and Watson knew the flag map story by this time, but he had never shown any interest in the structure of the stratosphere other than to say it’s got to be included. All of a sudden, because it was TOMS, now NASA was going through all this — I only found this out in September 1987 in Punta Arenas, when I could put the full story together. I knew part of it. Arlin Krueger said, “Look, it’s really important that you turn up and present a very good case for continuing TOMS, because Nimbus 7 has got a budget problem, and as an economy measure they want to close down some of the instruments on Nimbus 7. The current thinking is that nothing useful can be gotten out of ozone column measurements, so the things that measure profiles are going to be given priority and they’re going to close down TOMS.”
I said to Watson, “Don’t close down TOMS, because there’s a paper coming out in Nature in two months’ time that is going to make TOMS very, very important.” Watson is a quick study, and he has extremely good judgment in addition to being quick, and of course TOMS wasn’t shut down. Later, as part of these economy measures, after the ozone mission was a huge success out of Punta Arenas, you know the ER-2 and DC-8 really answered all the questions, Marty Knutson, who was the head of NASA Dryden — The planes were all at Moffett Field at NASA Ames at the time, and he was head of Dryden and head of the whole airborne thing at NASA Ames. We were sitting in a bar in Punta Arenas in September, and we knew what the story was, and he said, “This is really great, but what you probably don’t know is that this has saved the airplanes.” I said, “What do you mean, saved the airplanes?” He said, “Well, back a year and a half ago NASA was saying, ‘We want satellites that give global coverage. The aircraft are just an embarrassment. There’s nothing useful they can do and we want to shut down the whole aircraft operation.’” The suggestion that Jim Anderson’s ClO combined with Ed Danielsen’s tropical cumulonimbus payload could solve all the questions not only turned out to be right, but it saved the airplanes, and it gave them another 15 or 16 years. Actually it’s going to go on to the Global Hawk. We had the B-57. You know, so there was a real turning point there. You’ve got to give Watson credit. He was forced by budgetary and political realities to consider all the options, but he came to the right decisions on both TOMS and the airplanes and he came to them very quickly. But it could have gone the other way, and for the money that was spent on those aircraft missions and TOMS, NASA could have launched a lot of ozonesondes from McMurdo. But that’s all they could have done, and that wouldn’t have solved anything.
Nope, that’s really exciting! Were you involved with the Ozone Trends Panel at all? I mention it because that’s one assessment you could argue had novel scientific contributions.
[Sigh] You’re talking about the Silver books right, the one with the silver cover?
No, they’re red.
Well, yeah. There are two if you look very carefully. Now, the Ozone Trends Panel, that report was supposed to follow the blue books, the number 16. If you look at the numbering in the reports and the dates, they don’t entirely align. The reason was, again, the ozone hole came out and of course it threw the Ozone Trends Panel for a huge loop. They kept re-analyzing the TOMS data, and they couldn’t really get a coherent story. NOAA’s calibration of the SBUV was a national disgrace. And I have to agree with Harold Johnston. I was a NOAA employee. It was a national disgrace the way that was done. Don Heath from Goddard didn’t cover himself in glory either with the TOMS calibration and so on.
When he made claims that he was detecting global ozone depletion, was that with the TOMS data or was that with SBUV?
That was SBUV.
But he was in charge of TOMS?
No, but he was involved with it. I mean TOMS was basically son of SBUV. There were so many protests about Don Heath’s obsession with sunspots (this is my outside interpretation) that they gave it to Arlin Krueger. Arlin Krueger did a good job. He was thorough, he had to put up with a lot of very unfair flak, and he did a good job. In the end, TOMS certainly demonstrated in a dramatic fashion that the ozone hole was real in a way that you never would have done with ozone column measurements. As to getting quantitative trends out of it, that’s still controversial. Rich Stolarski has worked on that for 20 years and there are still question marks over it, simply because it’s a damn difficult thing to do, particularly when you’ve got a series of instruments that don’t always overlap, and in particular when you try to put different instruments together, whether you’re talking about the NOAA temperature trends or the total ozone trends from SBUV. The systematic differences are enough that you can’t extract a reliable trend. I was not on the Ozone Trends Panel. The Aeronomy Lab representative on that was Art Schmeltekopf, because he had experimental and physical insight that very few people have, and Watson really, really trusted him for good reason. Art was the honest let-it-all-hang-out scientist; he was never constrained by political considerations. He always said exactly what he thought. That approach led to the Ozone Trends Panel report being delayed and delayed and delayed. In the end they had to include what, under your terms of reference, would be original science, simply because it had been delayed so much. It was being done under a gag order, basically. That wasn’t what it was called, but it really was a gag order. People said, “You mustn’t publish this and you mustn’t talk about it until the report comes out,” because it was politically sensitive. All that meant that when it did come out it contained work that hadn’t been published in the open literature.
So you’re right. I was only involved with it for that one year, ’87, and that was simply because I was the project scientist of AAOE. We did that ozone intercomparison in the AAOE special issue because if we couldn’t get the ozone instruments to agree to better than 10%, nobody’s going to take [inaudible]. There are a lot of other things that are a lot more difficult to measure.
Do you know when the Ozone Trends Panel report was supposed to come out? I know it came out in March of ’88, I think.
Well, that may be the date on it, but that’s not when it actually came out.
That’s when the executive summary was issued supposedly, which I still can’t get a hold of because it’s not in…
The report was not physically available on the date that’s on the cover. There are some scientific journals that do that as a matter of course because they’re amateur operations, they’re not big professional operations, and they can’t always get things done to a timetable. You really should talk to someone who was on the panel.
I have talked to Neil Harris and Bob Watson.
Watson was in charge of it, but I do know that it was delayed and not just once. I may be right, I may be wrong; it needs checking. I think that the date that’s on the cover is not the date that it came out.
I think I read that somewhere else too. So when you talk about the silver book, you mean the 1989 science assessment panel report? [Yes.] So they were supposed to have something to do with each other?
What actually happened: because the AAOE mission was so visible and it was so political (or the visibility was so political; the subject wasn’t political), we had an agreement that we were going to publish in the two special issues of AAOE and we were not going to publish in Nature, Geophysical Research Letters, any of the fast-publication things. We all published together. We were not going to have people scrambling for publicity, jostling each other out of the way in front of the microphones and cameras. We all agreed to it. It was a protocol, it was in the mission booklet, and everybody agreed to it. Some people got extremely frustrated about that, but it’s what we decided to do and it’s what we did. When we were down there, originally I was going to do the flight planning for the ER-2 and take the decisions about when to do the flights, and act as the aircraft scientist on the DC-8. The workload turned out to be too much, so Watson became the aircraft scientist on the DC-8. He was in charge of executing the flight plans and getting the plane orientated for the instruments and stuff. There was one flight where we were going to be able to measure OClO from the DC-8 inside the vortex. That was the night of the full moon, September the 8th and 9th. Because near the pole the plane flies so much faster in longitude than the earth rotates, we had 15 moonrises and moonsets; Brian Toon planned it out. Watson wasn’t there for that flight, so I was the aircraft scientist. The reason he wasn’t there was that the Montreal Protocol meeting was going on in Montreal at the same time as AAOE. He disappeared for a week. He flew up from Punta Arenas to Montreal. One thing you can believe is that Bob Watson flew from Punta Arenas to Montreal and said absolutely nothing to anybody about what we knew, because by that time we knew the story. Or you can believe that he talked to one or two people he really trusted, so that the process knew what was happening.
Knew that it was definitively happening through anthropogenic chlorine.
I don’t know which of those two is right, but anybody who’s ever spoken to Bob Watson knows that he’s not only characterized by intelligence, he’s rather loquacious. When he gives a one-hour talk, he’ll typically have 80 viewgraphs. Not only that, the soundtrack stays ahead of the movie. Anybody who believes that he flew 8,000 miles for a week in Montreal and said nothing doesn’t know Bob. But that’s all hearsay, and certainly nothing public leaked out. In another two weeks, Dan Albritton, who was the boss of the Aeronomy Lab and my boss, flew down to Punta Arenas to be there for writing the final thing. Bob Jones came over from the Met Office because the Met Office had done all the meteorology, and that was basically what made it possible: the fact that we had some Met Office forecasters who had done time in the Falklands conflict and knew how to do the forecasting down there. Ron Williams, the chief pilot, had his private estimate of how many ER-2 flights we’d get, and I had my private estimate, and then there’s what we actually did. Ron’s estimate in advance was three flights. I thought we might get six because I thought we might get one in each cycle, and that way we’d be able to get three flights in three days, and we actually got twelve, and that was because the forecasting was so good. Yeah, it was pretty windy down there; we had quite a few close calls. But it worked, and in retrospect something like that comes along once in a lifetime, that sort of thing. Most scientists work on things that are scientifically exciting and get them all wound up within the scientific community, but to do something that was that visible and that public, and executed that well, is very rare, very rare. I mean everything worked. Christ, we were lucky! Subsequent experience shows just how lucky we were.
Well and the results are still talked about as the “smoking gun”.
Yep. There’s no two ways about that. It was unequivocal — you couldn’t really debate it. Even Fred Singer couldn’t debate it, speaking of idiots.
And that’s powerful [laughter]. That might be all the questions that I had. I agree with you about getting the public and policymakers exposed to the process of science. In fact, I’d like to think that’s part of what I’m trying to do. I’m very interested in the mass extinction debates. That’s what I did my master’s thesis on and I want to go back to it someday. I feel like there’s a bit of that going on there. Not necessarily with the public, but even among different scientists.
So you know Simon Conway Morris then?
I do. I’ve interviewed him a couple of times.
Veronica and I, there was a conference in Verona on the role of water and the origin of life. We got invited to go, and Simon Conway Morris was one of the organizers of this thing. There’s a book coming out in November, actually. Anyway, we wrote this chapter about the origin of life and so on. Conway Morris edited our thing and he was at the conference. He’s a very impressive individual. He said to Veronica, “Veronica, you’ve almost convinced me that this is how it happened.” We’ve tried to push it into statistical mechanics via maximization of entropy production, which gives it a geophysical inevitability. Anyway, I’m just reading his book Life’s Solution. I’ve just finished the first eight chapters; that’s what I was reading when you came.
I like it, just for purely emotional reasons, not because of scientific evidence. I don’t wish to agree with him that there are no other planets with life in the universe, but other than that I enjoy his arguments.
Yeah, I think he goes astray in pushing too hard toward where he wants to get to. But he writes extremely well; he’s very intelligent, he knows how to marshal an argument. Large numbers in both space and time. I think what he calls convergent evolution you can express statistical mechanically by saying, again, it’s this argument that you don’t have random motion. Anything that’s an anisotropy imposes organization, and scale invariance is an expression of the non-equilibrium statistical mechanical position where you maximize entropy production, and the organization originates in the boundary conditions. Anything that’s an anisotropy: on this planet, gravity, rotation, and the solar beam. You cannot possibly have randomness, but the system does its best and it maximizes the entropy production, because if there’s a preference for a particular scale, that’s an expression of organization, and we don’t have it. The organization we do get is the result of gravity and rotation and the solar beam. That’s the argument, anyway.
It’s really interesting. At least one chapter of my PhD thesis was on Conway Morris versus Gould, about contingency and convergence. It’s an interesting subject.
Well, Conway Morris is at St John’s College and Chris Dobson is now the Master of St John’s College. Veronica and I were at dinner with Chris a month ago, and yeah, it’s an interesting place.
Well thank you very much for talking to me.
Sure, glad to help.