This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Robert Kirshner by Ursula Pavlish on 2007 August 3, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/32418
For multiple citations, "AIP" is the preferred abbreviation for the location.
Robert Kirshner (of the High-z Team) is Clowes Professor of Science at Harvard University and the author of The Extravagant Universe: Exploding Stars, Dark Energy, and the Accelerating Cosmos, a book that deals directly with the topic of this interview and includes the story of supernova cosmology with a first-hand account of Kirshner’s personal contribution. As part of his graduate work at Caltech, Kirshner discovered supernovae at Palomar in 1971, studied SN 1972e, the brightest Type Ia supernova in 35 years, and invented the expanding photosphere method for finding distances to Type IIs. His work covers various aspects of extragalactic astronomy and observational cosmology, including galaxy surveys to measure galaxy clustering, the value of Omega_Matter, and the luminosity function of galaxies, in addition to the co-discovery of the accelerating universe and Dark Energy. Many of the key players on the High-z Team were, or had been, his students and postdocs at Harvard. His students include Brian Schmidt, Adam Riess, Saurabh Jha, and R. Chris Smith, and his postdocs include Bruno Leibundgut, Peter Garnavich, and (later) Tom Matheson. Peter Challis is a long-time Harvard staff member supported by Kirshner’s grants. He describes the conceptual and observational developments that the High-z Team used on the path to submitting the first publication with evidence for cosmic acceleration.
The interview will be in the AIP archives? That is good. They keep asking for money. I would not want to put them in a compromising position by giving them any. [laughs]
[laughs] It is August 3rd, 2007. I am here in Professor Robert Kirshner’s office in the Perkin Laboratory at the Center for Astrophysics in Cambridge, Massachusetts. I have ten short questions for you, Professor Kirshner. Now, I should say, for the record, that Professor Kirshner has written a book about the discovery of the accelerating universe called The Extravagant Universe: Exploding Stars, Dark Energy, and the Accelerating Cosmos. That is background material, and material to be consulted.
Highly recommended to all readers. [laughs]
[laughs] First, I would like to ask you a two-sided question. I would like to ask you to share what the milestones or the turning points are in the discovery of the accelerating universe; first, in general, and then for the High-z Team specifically. The goal would be that any story should include these important milestones.
Well, it is a long story because it goes all the way back to the discovery that supernovae exist: the idea that there are exploding stars almost as bright as a galaxy. When Hubble discovered that the universe was big by looking at the Cepheid variable stars (this was in the late 1920s), he commented that there were some novae, new stars, that were a fair fraction of the brightness of the fuzzy thing, of the nebula, or the galaxy (we would say today) that they were located in, which were quite puzzling. The distance scale he was proposing, in which the distances to the nearby galaxies were millions of light years, rather than thousands, implied their luminosities must be a million times bigger than for ordinary novae seen in the Milky Way Galaxy. So right away, people realized that there were very bright novae in other galaxies. In the 1930s, people began to pursue the nature of the bright novae in galaxies. Fritz Zwicky, to start with, at Caltech, began the systematic study of the supernovae. Zwicky perfected methods for discovering them by looking during the dark of the moon every month, which is the natural cycle for this sort of thing and the same way people have found supernovae in recent years (though the LBL group seems to think they invented this). Walter Baade, who was at the Mount Wilson Observatory, made photometric measurements of the new stars. They began to see, in the 1930s, that the supernovae they found, which coincidentally all happened to be the kind that today we call Type Ia supernovae, resembled one another quite closely. We know now there is more than one type: different types have different origins and different luminosities. But by pure chance, the objects that Zwicky found in the thirties and that he and Baade studied all were of Type Ia, the thermonuclear type.
They already took spectra of them?
Minkowski, also on the Mount Wilson Observatory staff, took spectra. They published a few papers and then this early work was written up much later in a compendium, the ‘Stars and Stellar Systems’ volume (Aller & McLaughlin volume VIII, p. 367 (1965)). Zwicky’s review article summarized what had been done up till then, and a lot of it was from the thirties. [laughs] So you would have to start with the first idea: the supernovae are extremely bright. A second important point, that Baade and others began to suspect, was that supernovae have a narrow distribution in their intrinsic brightness — because that’s what makes them useful as distance measurement tools. There was a very early article by Olin Wilson, also at the Mt. Wilson Observatory, a very young man at the time, who pointed out that if the expansion of the universe was real there would also be time dilation. The light curve of a supernova, the time it takes to get bright and get dim, seemed to have a very uniform shape according to Baade, and Wilson pointed out that this shape would stretch out at high redshift if the redshift were truly due to cosmic expansion and not to “tired light”. So these two things, that the supernovae might be standard candles and that they might provide a direct test of the expansion hypothesis, were both pointed out very early. Things simmered along in a slow way [laughs]; the observational side of the supernova business did not make much progress. Through the late 1960s, people had made pretty good measurements of a couple of dozen supernovae. There was a good review written by Charles Kowal, who worked at Palomar, doing the things that Zwicky had done (by that time Zwicky had retired). Wal Sargent had an NSF grant to support Charlie as they continued Zwicky’s work. Charlie’s article (Astronomical Journal 73, 1021 (1969)) looked at the redshifts of supernovae and at their apparent brightness. He found that the relation between redshift and brightness was very narrow.
Well, not very narrow, but pretty narrow [laughs]. This was evidence that Type Ia supernovae (just called Type I then) were potentially good standard candles. Kowal went on to say that supernovae would someday be good to ten percent in distance. And, he said that if you could find supernovae that were distant enough, you would see the curvature in the Hubble diagram, which everybody expected as a result of cosmic deceleration. Prophecy! I would say that 1969 paper is the beginning of the idea that supernovae could be an important way to measure cosmic distances, and that you would see the signature of cosmic deceleration.
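[The time-dilation test Wilson proposed can be stated compactly. The notation below is a schematic sketch for the reader, not Wilson's own formulation.]

```latex
% Time dilation of supernova light curves (schematic notation).
% An interval lasting \Delta t_{rest} in the supernova's frame is observed
% stretched by a factor (1+z) if redshifts arise from true expansion:
\Delta t_{\mathrm{obs}} = (1+z)\,\Delta t_{\mathrm{rest}}
% A ``tired light'' universe predicts no such stretching, so comparing
% light-curve widths at low and high redshift tests the expansion hypothesis.
```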
Would you separate the history of these different lines from the history of the cosmological constant?
Yes. The cosmological constant is a highbrow thing. It goes back to Einstein. It is a conceptual thing. But making measurements of the relation between apparent brightness and redshift, two observable things, is simple. If you had objects that were of exactly the same brightness, they would lie on the line predicted by one (or more) of the theoretical pictures. Allan Sandage wrote an extremely influential article in the Astrophysical Journal in 1961 (“The Ability of the 200-inch Telescope to Discriminate Between Selected World Models,” ApJ 133, 355) that set out the methods and the theory. One thing about that article, which I did not remember until much later, was that it works out the solution for a cosmological constant — which is exponential expansion — and explains how that would show up in Hubble diagrams. I think it is fair to say that nobody took that seriously in the 1960s, the 1970s, or the 1980s, although there were periodic eruptions of the cosmological constant when people were having trouble understanding cosmological data. The empirical tool for measuring redshifts versus distances was pretty much on a separate track from the conceptual work. The supernovae were certainly not the favorite candidate for measuring the history of cosmic expansion in the 1970s. Sandage assumed that the supergiant galaxies in clusters of galaxies would turn out to be the standard candles. When I was a graduate student, cD galaxies were going to be the path to measuring cosmic deceleration. Wrong on both counts! [laughs] You want landmarks for the supernova program. The next thing is: there had always been outlying objects in the Type I class, the kind that had no hydrogen. Even though there was a good theoretical idea that SN I might be exploding white dwarfs, there were some funny objects mixed in with the rest. In 1983, 1984, and 1985, we got a bunch of these that were pretty bright. I worked on these.
So did Craig Wheeler at the University of Texas, Alex Filippenko (who was just a kid), and David Branch. The usual supernova people. We showed that these objects were not the same as the other Type I supernovae. These became the Type Ib supernovae. They are interesting in their own right, but they are not thermonuclear explosions. They are core collapses. They do not have hydrogen, just as the Type Ia does not have hydrogen, but they are completely different inside and they are found in different places. By getting the Type Ib interlopers out of the class of Type I supernovae, you were left with a much more homogeneous group of objects.
The Type II had always been separate?
Yes. That dates from the 1940s. Minkowski saw the first of the Type IIs and he said, “Hey, this is not the same. It is a supernova all right because it is bright.” But it had hydrogen lines, so it was something distinct from the objects Baade and Zwicky had studied up to then. People understood that from the 1940s. This business of the Ib’s was new. There had been objects we called ‘type I-peculiar’, (I had studied some of those) but it was not clear what was going on until the 1980s. I would say sorting out the Type Ib is the next important milestone. It is subtle because it is not even about Type Ia’s — it is about Type Ib’s [laughs]. But it is important because it improved the homogeneity of the class, and homogeneity is the key thing in getting precise distances.
Now, when this group was working on supernovae, though, they were not thinking of them as distance indicators. This is a later question, but a theory in the history of science that I just cannot stop thinking about, because I am so excited about it, is this idea of ‘scientific objects.’ In short: how does an object become scientific while scientists are looking at it, analyzing it, making representations of it, and taking data? About supernovae, it seems rather obvious that they can be looked at as objects or they can be used as tools.
Right. There is a little autobiographical stuff in my book: in 1971 I had Charlie Kowal’s job of searching for the supernovae at Palomar for a month during the summer while he was on vacation. I took the photographs at Palomar. I used the 18-inch Schmidt, the telescope that Zwicky had built, and the 48-inch Schmidt, where I found a couple of supernovae. I was very proud of that. Then, in 1972, Charlie found the brightest supernova since 1937. That was unusual. SN 1972e became a big part of my thesis. And it was a Type Ia supernova. I had been very interested in supernovae, as what you call, “objects” [laughs], because they are important. They are important in the transformation of the elements. A star starts out being hydrogen and helium, but when it explodes as a supernova it produces half a solar mass of iron. The transformation of the chemistry of the universe takes place through supernovae. We have known this for a long time and now we want to know the details: which elements, and which supernovae? It turns out you make a lot of iron in Type Ia and a lot of oxygen and other elements in the Type IIs. It is all very interesting. To answer your question a little more directly: what I had been working on was properties of supernovae, but also supernova remnants, and supernovae of type II. Those were directly connected to cosmological problems. In 1975, John Kwan and I invented a method for finding the distances to Type II supernovae. This was a new way to use supernovae for cosmology — to find the Hubble constant, which is closely linked to the age of the Universe. I was very interested in both things: the supernovae in themselves, and the supernovae as tools for cosmology. Our way of using the Type II supernovae was to get their distances without reference to any other distance: the expanding photosphere method. This was a way to determine the Hubble constant. 
Bob Wagoner (at Stanford) worked out how you could do cosmology with this method: if you could find SN II at big enough distances, you would see the expected effects of deceleration. Ramesh Narayan (then at Arizona) thought a little more precisely about what the measurements would tell you about cosmology. So, I was really interested in the problem of cosmology, too. But the Type II have not yet become the great tool for doing cosmology. They are not quite as bright as the SN Ia, you need a lot more information to make the distance measurement, and there was some theoretical uncertainty. This method has not become as important a distance measurement tool as I thought it might. Not yet.
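[The logic of the expanding photosphere method can be sketched as follows. This is a simplified schematic of the idea, not the detailed treatment in the Kirshner and Kwan paper.]

```latex
% Expanding photosphere method (simplified sketch).
% The photospheric radius of a Type II supernova grows nearly freely,
R(t) \approx v_{\mathrm{ph}}\,(t - t_0),
% with v_{ph} read from the Doppler widths of spectral lines.
% Treating the photosphere as an approximate blackbody of temperature T,
% the observed flux f_\nu gives the angular radius:
\theta \approx \sqrt{\frac{f_\nu}{\pi B_\nu(T)}},
% and the distance follows with no reference to any other distance indicator:
D = \frac{R}{\theta} \approx \frac{v_{\mathrm{ph}}\,(t - t_0)}{\theta}.
```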
That work was simultaneous with your work on Type Ia’s?
Right. Both things are in my thesis at Caltech from 1974. And Brian Schmidt’s thesis with me, in the early 1990’s, was exactly on that subject. It was on using Type II supernovae to measure cosmic distances.
So it would be unfair, and incorrect, to say that the High-z team were people who were interested in supernovae for themselves and then decided that they could also do cosmology, and that the SCP were only interested in them as tools rather than in themselves.
The second part, about the SCP, may be true [laughs], I think it is true. But the first part is too simple because I had done galaxy surveys to measure galaxy clustering (in 1981 we found the first evidence for cosmic voids), and to figure out what Omega_Matter was, you know, the density of matter, and estimates of dark matter density. I had worked on the luminosity function of galaxies. I had done many different things in extragalactic astronomy, what you could call observational cosmology. It would be true to say that the High-z team included people who had done a lot with supernovae, but it was clear that the SN Ia might be important in cosmology, and this was a strong motivation for many of us on the High-z team starting long before the SCP got started. The next milestone, I think, is the thing that Mark Phillips did. Mark Phillips was on the staff at Cerro Tololo. I remember there was a meeting about supernovae held at Santa Cruz in the late 1980s. Mark and I were there along with the usual crowd. He said to me: “Would it be worthwhile to do a search for supernovae in Chile?” And I said: “Yes, but you have to be really sure to follow them up, because just finding them and doing the statistics has not been all that productive. But studying the individual objects is really good.” Observing conditions in Chile, you may know, are very good. So Mark, and a bunch of other people: Nick Suntzeff, Mario Hamuy, the whole gang, got together and used the Curtis Schmidt telescope at CTIO. The University of Chile people were willing to do the incredibly tedious work (the same as I had done that one summer) of searching for the supernovae on photographic plates, by eye. They took the plates with the Schmidt telescope, they developed them on the mountain. They put them on the bus, the plates went down to Santiago, and the people there would set them up on a blink comparator — a gadget that lets you see the same spot on one picture and then the other.
It is a microscope that shows you the same little tiny field on two different plates taken a month apart and it blinks back and forth every second. If there is a new object, it appears to flash.
That is what you used too?
Yes, yes. This is an old-fashioned, manual, analog way of finding supernovae.
By this time people must have been doing this at other telescopes as well?
Would you say at half a dozen telescopes?
No. At just a few. It is a slender field, in a way, and it takes persistence to succeed. Zwicky had inspired a team of supernova searchers, but that was fading away. So they set up the Calan-Tololo Search. Cerro Calan is the name of the observatory at the University of Chile. And, they did another thing, which was brilliant, I thought. Because they searched a large enough area, they were quite sure they could find supernovae every month, so they scheduled follow-up observing even though they had not yet found the supernovae. It was quite courageous of them, I thought. [laughs] They could do it because they did not need time on the big telescopes and their rate of discovery was high enough to make it a good bet. You know, there’s a four-meter telescope in Chile. They didn’t need that. There was a one and a half meter telescope in Chile, and they used some of that time. There is a one-meter telescope. That is what they really needed to do the photometric follow-up, with just a little data from the bigger telescopes.
This was in the early 1990s.
This was in the early 1990s, the Calan-Tololo Search. They got that going, which was quite good, and they perfected the technique of search and follow-up. They did it with the rhythm of the moon, the way that Zwicky had done thirty years before, which was very effective. So that was a landmark, I would say. The Calan-Tololo Search set the standard. Although it was a throwback, too. It used photographic plates and the human eye for the search, whereas the thing that was really new about their program was that they used CCDs (Charge Coupled Devices), the solid state silicon detectors, for measuring the brightness of supernovae. This is important, because you can do a lot better photometry with CCDs. Electronic detectors are much better. The amount of electrical signal you get is directly proportional to the amount of light. With photographic plates the amount of blackening has some complicated relation to the amount of light.
They took another image after they counted the supernovae?
That is correct. After they discovered a supernova, they carried out pre-arranged CCD photometric measurements of the light curves using relatively small telescopes. After the supernova faded away, they took another image to make excellent subtractions of the galaxy light. That was a big technological step and there was an important result that came from it. Another milestone, I guess. Mark Phillips looked at the supernovae for which they had some data and for which they also had good ways to estimate the distances, not just from the redshifts but from other properties of the galaxy. He noticed that there were Type Ia supernovae that were intrinsically bright, and there were supernovae that were intrinsically dim, but the ones that were intrinsically bright had different light curves from the ones that were intrinsically dim. Now, people had looked at this before. There was Pskovskii, a Russian guy. There was an Italian group that had looked at this. They had gotten contradictory results because the old photographic data were not so good, because measuring the brightness of a little dot in a big fuzzy bright galaxy is difficult. If you have a CCD you can subtract the galaxy light in a computer. That was a big step forward. The technological step was the CCDs. The intellectual step was that Mark noticed the bright ones had broad light curves and the dim ones had sharp light curves, ones that went up and down more rapidly. A Type Ia supernova gets bright in about three weeks, reaches a maximum, and then falls off. Mark invented a way of talking about this, which was to measure how much the supernova dimmed in the first fifteen days after maximum light. The jargon became ‘delta m_15’: how much did the magnitude, ‘little m’, change within fifteen days. Mark showed there was a good correlation between ‘delta m_15,’ the rate of decline after maximum, and the intrinsic brightness of the supernovae.
This was important for the cosmological program because some SN Ia are significantly brighter than others. As astronomers built up the sample, in Chile and also from my group at the CfA, we found that the range in brightness, between the brightest and the dimmest SN Ia was bigger than a factor of three. Well, if you want to measure distances precisely from the apparent brightness, a factor of three in brightness is not good. You could do a lot better after you applied a correction based on the light curve shape. You can find out from looking at how fast the supernova went up and down: are you looking at a hundred watt bulb, a seventy-five watt bulb, or a fifty watt bulb? Good to know if you’re going to judge distance from apparent brightness. That eventually led to distances that were better than 10%. Getting that started was the contribution of Mark.
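[The decline-rate correction described here can be written schematically. The linear form and the coefficients a and b below are illustrative placeholders, not the published fit.]

```latex
% Phillips decline-rate parameter:
\Delta m_{15} \equiv m(t_{\max} + 15\ \mathrm{days}) - m(t_{\max})
% The peak absolute magnitude correlates with it, roughly linearly:
M_{\max} \approx a + b\,\Delta m_{15}
% (slow decliners, small \Delta m_{15}, are intrinsically brighter).
% The quoted factor of three in luminosity corresponds to a spread of
\Delta M = 2.5\,\log_{10} 3 \approx 1.2\ \mathrm{mag}.
```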
How about the Danish team in the 1980s?
Right. I was completely unaware of what the Danish team was doing until they did it. It is a funny story. I put it in The Extravagant Universe. Brian Marsden, who ran the International Astronomical Union Circulars, the reports of supernovae, was in charge of giving them alphabetical names — issuing their license plates and circulating the news around the world. He came to me one day and said, “I’ve got this report from these Danish fellows at ESO [European Southern Observatory] and they found a twenty-first magnitude supernova.” I said, “Oh great, who is going to study that? Unless they have already arranged for somebody to do it, no one is going to take this on because it is too faint. Too much trouble!” And he said, “No, no, no. Read what they said. They arranged with people in Australia to do the follow-up,” which turned out to be Richard Ellis and Warrick Couch at the Anglo-Australian 4 meter.
Just to interrupt, I’m sorry, but would this be a separate milestone?
I don’t know. This is sort of a funny thread. You would have to call it the beginning of deliberately doing cosmology with Type Ia supernovae, so sure, the work of the Danish team is definitely a milestone.
Yes. Because if you read their articles, in the ‘ESO Messenger,’ the house journal of the European Southern Observatory, they said very clearly that they wanted to measure cosmic deceleration by making a Hubble diagram for Type Ia supernovae. They searched every month during the dark of the moon. Then they came back and took images again after the full moon, just like Zwicky. These were digital images with a CCD. They did the image registration. They did the subtraction. They did all the stuff that everybody reinvented later. But they did not find very many supernovae. They found one, which they reported in Nature, 1988U. By the time they wrote this up, I was completely aware of their work. I was the referee. I wrote the ‘News and Views’ (you know how the magazine sometimes has little editorials in front) that went with the article. I was admiring, though a little skeptical that you could get the cosmology out reliably.
Why is this the beginning of cosmology with supernovae rather than something previous?
Because that was their purpose. Their purpose was to find supernovae at a sufficient distance that the acceleration or deceleration would have a measurable effect. To put it in perspective: at a redshift of a half, which is where people were hoping to get (SN 1988U was at a redshift of three-tenths but they were hoping to get to one half), if the universe had enough density so that it was flat, Omega of one, and you compared that to Omega of zero, the difference in brightness would be about twenty-five percent. Once the scatter among the supernovae was reduced to the ten percent range by using the light curve shapes, that began to seem very interesting. But the Danes did their work before we knew that the supernovae really could be standardized to that caliber.
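[The numbers quoted here can be put in magnitudes as a back-of-envelope check; the arithmetic below is the editor's illustration of the stated percentages.]

```latex
% A ~25% brightness difference between Omega = 1 and Omega = 0 at z ~ 0.5:
\Delta m = 2.5\,\log_{10}(1.25) \approx 0.24\ \mathrm{mag}
% A 10% error in distance corresponds to
\Delta m = 5\,\log_{10}(1.10) \approx 0.21\ \mathrm{mag},
% so once light-curve-shape corrections brought the per-object scatter down,
% the cosmological signal became comparable to it and reachable with a
% modest sample of high-redshift supernovae.
```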
But they did not find enough supernovae to do that.
They did not find enough to beat that down with brute force statistics. If you find a hundred objects, even if you have a big scatter, you can find what the mean is. But they only found one that they reported. In my opinion, the only reason they didn’t revolutionize the field is that the technology was not quite there: The CCDs in the late 1980s were too small. The telescope they were using was too small. So, even though they had all the right ideas, they did not succeed. If they had been just a little bit luckier or their equipment had been just a little bit better, so that instead of finding one supernova a year, they found one every month or two, they would have eventually discovered the accelerating universe. But they did not. I think there is an idea here about technological readiness. In the book I talk about Leif Ericson, how the Vikings discovered North America, but so what? It did not develop into the history of North America. What the Danes (close enough to Vikings for me) did was completely competent, and they invented all the techniques used later, but there were not enough results, so it was not very well connected to what came later. All the Danes quit doing it, and went off to different subjects. Richard Ellis eventually joined the SCP. Several years ago, I went to give a talk in Denmark, this was after we had gotten our results, but long enough after theirs of 1988 that I had forgotten. It was only when I got on the train to the summer meeting of the Danish Physical Society that I remembered the whole story. All the guys who had done this work were on that train. I changed my talk, too. I put in much more about the Danish work! Anyway, that is the Danish development. Then, there was Mark Phillips with the beginning of the light curve shape stuff that made it much more practical to get a meaningful result without a giant sample. Richard Ellis, who is now part of the SCP, was part of that pioneering Danish team.
Richard invited me to Durham where he was Professor (this was before he went to Cambridge and then to Caltech), to give the Grubb-Parsons lecture. I talked about supernovae, and I talked about all this stuff. My talk is in “The Observatory”, one of those slightly obscure journals, as the Grubb-Parsons lecture. It shows that all of this stuff was in the air at the end of the 1980’s. Anyway, the big thing was Mark showing that you really could get the scatter in the supernovae down. We realized right away, Mark knew this, we all knew this, that even with this method there was a very serious problem that had to be solved before you could do cosmology. That was to account for dimming by dust. Dimming by dust looks just like dimming by cosmology, so you need to be able to tell the difference. Mark had a master’s student from The University of Chile, Paulina Lira. She had the light curves from the Calan-Tololo search. She found that when they sorted out the ones that did not have very much dust, either because they were in elliptical galaxies or because they were far from the center of a spiral galaxy, in places where you did not think there was much dust, all those SN Ia reached the same color about forty days after maximum light. If they all had the same intrinsic color forty days after maximum, then when you measured the color for a supernova at that time it would either be that color, which means no reddening, or it would be redder than that, because the interstellar dust absorbs blue light more than red light. This was a master’s thesis. It was not based on a huge amount of data. But it looked pretty good. This was in about 1992, 1993. I thought: this is important. We have to solve this problem before we can do any cosmology with Type Ia supernovae. At this point, Adam Riess was a graduate student with me. Brian Schmidt was finishing up his Ph.D. 
with me, trying to measure the Hubble constant with expanding photospheres of Type II supernovae (we published a value of 73 km/sec/Mpc!) We were doing cosmology with supernovae. Bill Press was here at Harvard in the Astronomy Department. Bill is quite a brilliant fellow, especially in inventing numerical methods for analyzing data; he wrote the fantastic book Numerical Recipes. Bill had an idea about how you could take the light curve data and do something well-defined mathematically with it so that you could tell which ones were brighter and which ones were dimmer with no guessing or witchcraft. We knew you had to measure two things to get accurate distances to supernovae from their light curves: the light curve shape to find out the intrinsic brightness, and the reddening to see how much the distant supernova was dimmed by dust. If you were really smart, you would do them simultaneously. That is, you would use the light curves measured in many colors to figure out both how bright the supernova was and how much of its light had been affected by dust. To make a long story short, Adam solved that problem. Bill Press suggested, and Adam developed, what we called the Multicolor Light Curve Shape Method. I kept them from doing completely crazy things, but I didn’t have the mathematical ideas.
Yes, MLCS. Now at the same time, the Chile guys had a slightly clunkier way, but they also understood exactly what was going on with the reddening. I liked our method because it also generated estimates of the uncertainty in the distance. That was a really important step. Adam and Bill and I had the low-redshift samples from Calan-Tololo and from the CfA — we used our own telescopes in Arizona to measure supernova light curves for Type Ia and to get the data that we needed.
Did the data come first or did the MLCS come first?
The data. We used some of the Calan-Tololo data. They were very nice to us. They let us use some of their data because they were ahead of us on that, but they are really good people. We also got data of our own at the CfA. Then about the same time, this would be around 1993-1994, we began to solve this problem. That was Adam’s thesis.
I know that was an award-winning thesis, throughout the nation. Was that important enough to merit a milestone on the order of the others?
Yes, I think this multicolor lightcurve shape method does merit designation as a milestone. But the Chile guys had an equivalent thing. They also understood that you had to do this.
So the reddening as its own milestone.
Right. This is really an important distinction between what we were doing and what Saul’s group was doing. Saul’s group gave no indication of understanding this problem. That was really surprising to me. For example, in the first paper that they wrote, which was on supernova 1992bi, they had made the measurements in only one filter. If you measure through one filter, you cannot in principle measure the reddening, the absorption by dust. You need at least two filters for a color, astronomers would say. And three is a lot better! In their first publication the SCP showed they did not understand you absolutely had to measure dust absorption before you could say anything about cosmic deceleration. You had to distinguish between the dimming due to cosmic expansion and the dimming due to dust. A universe that is accelerating makes distant objects look dimmer. You had to distinguish between that and the alternative: that distant supernovae had more dust than the nearby supernovae you were comparing them to. Those effects would look the same unless you measured the colors. I thought that was a very important point. We understood that and they did not.
This was a recent understanding in the community of supernova experts?
Oh, yes. This was in 1992, 1993, 1994. By 1995 we knew how to do this. That is when we really got going on the High-Z program. Brian had finished his thesis with me and was then a postdoc here at the CfA. Adam was working on his thesis here with me. We were talking about this all the time. Peter Garnavich was my postdoc here. Bruno Leibundgut, who had worked on formulating the template for supernova light curve shapes, had finished up as a postdoc with me and was at Berkeley at that point. Peter Challis had been working with me for some years. Saurabh Jha was a beginning student. A lot of the High-Z team was my research group.
This was around the time when you took that first spectrum, when you took that spectrum for the Perlmutter group?
Oh, yes. We knew what they were doing. Saul had a very difficult problem getting enough observing time. For example, that first paper not only had measurements in just one filter, they had no spectrum of the thing they called a type Ia supernova. They had a spectrum for the galaxy taken later, so they knew the redshift, but they did not know for sure that that object was a Type Ia. You could ask any big telescope astronomer of the era — Wal Sargent would be a good example — and they would tell you stories of being called up in the middle of the night by Saul. He had the difficult job of persuading you that you would rather do an observation of interest to him than the thing you had come to the telescope to do. People who were rich enough in telescope time would do it because the science was so interesting. Not all the reports of SCP discoveries were reliable, and there were some that were wrong or poorly measured, but Saul kept calling people up and asking for help. I did it. In 1994 I took a spectrum at the MMT. We reduced the spectrum and it was sent to them. Maybe there was some misunderstanding about it. But I thought the terms of the exchange were completely clear. Usually, if there is a publication of your data, you would be a part of that, or at least you would have a chance to turn down being part of the resulting paper. You could say: well I did so little, never mind. But I remember being quite surprised, I have told this story before or it is in the book.
It is in your book.
I went to a presentation from the SCP, and they showed a supernova spectrum and said, “Here is our data.” I thought: Hey, that is my data!
My impression was that there was some mix up about publishing it in the IAU or something; that you would have liked to have it out right away and they kept it quiet.
That might also have been true. But there was a promise to include us in the real publication of the scientific result, whenever that was. It hasn’t happened yet, and the data are from 1994. Our attitude was: you find these things, you immediately announce them, and maybe someone will help you by observing it. If they do, you give them a chance to be part of the scientific result. Their view, I think, was: this is our private object and we wish to control all subsequent observations made of it. Their attitude toward publication, or notification, or asking for community help, was very different. That, I think, comes from not being acculturated to the astronomical world.
That is interesting.
How private is a scientific object, or how public? So you would say that, in the astronomical community, if you find an object in the sky with a limited lifetime, like a supernova, then the appropriate action would be to call other supernova astronomers.
Right! If you see a comet you do not say, “This is my private comet, I am not telling anybody.” We are much more used to finding things, telling everybody, cooperating on getting the data together, taking data from many sources, and in some amicable way putting it together. Their approach was quite different. It was much more controlling. I sent them the spectrum and then they apparently thought that since they had discovered the object, this was their spectrum. Anyway, there were some misunderstandings.
Now, you also were on the review committee for the Center for Particle Astrophysics?
Yes, I was on the external advisory committee, or whatever it was called.
Was that at the same time as when Mike Turner was on it?
Mike Turner was on the LBL Physics external review committee. I was on the Center for Particle Astrophysics external advisory group. Both of these groups were looking at the progress of the supernova program at LBL. It is possible that Michael joined the CfPA group after I left it. I think this must have been before that.
Before, ok. I am a beginning History of Science student, and I do not understand the way review committees work. But it seems important that you gave the group approval, gave them advice, and also learned what they were doing. Was this an important back and forth?
The CfPA director Bernard Sadoulet was a reformed particle physicist working underground on Dark Matter detectors, so he was not very familiar with the astronomical stuff. He had to decide how to allocate his resources and to evaluate whether the supernova program was being run well enough that it should be continued, or stopped.
Would you have given them advice to recruit more astronomers to their group, or something like that?
They had Alexei Filippenko at The University of California at Berkeley, who is outstanding. But there was not a good fit between Alex, as a person, and Saul as a person. There was a personality clash. The expectations they had of each other were different. You can talk to them about that. They were supposed to be collaborating, but they were not enjoying it. It was in part the clash of cultures. The astronomical way was based on community and sharing data and information.
Sharing within the astronomical community?
Yes. You would get the data, and then work out how to cooperate. Alex and I worked on many, many nearby objects where he would observe one aspect and I would do another. We are on a lot of papers together from the early 90’s. We are also on competing pairs of papers — one by his group and one by mine. That was another way to get the results out. He would do the light curve and I would do the spectra, or the other way around. Sometimes we would put the data together and sometimes we published separately. We cooperated on space telescope observations, and in many other ways before the High-Z team formed. And after. This seemed normal to us. Where are we here? We have the light curve shape stuff, and the multicolor light curve shape stuff.
That was the fourth milestone.
I think that was very important.
We had a lot of technique. We knew how to study the spectra. We knew how to use the multicolor light curves. Then we could use them for distances. We had shown that we could decrease the scatter down to the ten percent level, as prophesied in 1968. [laughs] And you could write on the back of an envelope: suppose that you could measure the distances to these things to a precision of ten percent each, and the difference between a decelerating cosmology and a coasting cosmology amounted to twenty-five percent. Then it was not too crazy that with just 10 objects, you could make this measurement in a convincing way. We knew that. This is, of course, what Saul had been talking about, but he had shown no sign of having mastered the business of the reddening by dust. That is what finally drove us to compete: we had developed these techniques, we had been working on them for the nearby supernovae. We knew we could do it. And, the other guys, who had been talking a lot about measuring cosmology, did not seem to have an elementary understanding that you had to measure through two filters, or any idea of what to do, once they had made those measurements. And they had no nearby sample — they were depending on us.
Your function being on the review board did not include giving them advice?
Oh, yes, it certainly did. I told them: you have got to measure in more than one filter. If you do not measure colors, you cannot do the cosmology. I told them that as plainly as I could. But after a certain point, when you tell someone the same thing over two or three years, you sound like a broken record, you sound like a crank, you sound like what Gerson called the naysayer who “pooh-poohed our work.” I said, “You cannot do cosmology if you do not make the measurements in the proper way.” And, they said: “it is just the cranky old astronomers saying we can’t do this measurement.” It was not that. I was trying to help. But there came a point when it seemed like the only thing to do was to quit that committee. So, if you really want to be industrious, you could find out what year that was. But, anyway, it was 1995 or 1996. At that point, in 1994 and into 1995 Brian finished his thesis and became a postdoc. He went down to Chile and they gave him a bunch of Type II’s from the Calan/Tololo project that they were not going to work on further, since the Chile group was concentrating on getting a low redshift sample of Type Ia supernovae for cosmology. Brian and Nick Suntzeff got together and said: you know, we can do this problem better than the LBL guys, based on what the people in Chile had done with Calan-Tololo and based on what we were doing at the CfA, using some of their data and some of our own too. The problem was, we had done nothing on the problem of finding distant supernovae. All of this was the analysis after the objects were found: measure the light curves, measure the spectra. We had done that. That we were good at. But we had done nothing on finding distant supernovae. If you ask the SCP what was their great technical achievement, I think they would say it was that they wrote the software to be able to find distant supernovae. I said to Brian, “These guys have been working on this software for five years.
How can you say that we can catch up with them?” He said it would take “about a month.” Part of that was youthful arrogance, of course. But part of it was also a different attitude about community. For astronomers, software is a community project, so there was a lot of stuff that was public, stuff that people had written at the National Observatories or that Paul Schechter wrote at Carnegie. There were things out there you could use to orient images, then subtract images. There was a whole set of utilities for dealing with images available in the astronomical community. You did not have to reinvent the wheel. There was a whole junkyard full of steering wheels, bicycle tires, flat tires and other kinds of wheels lying around. I do not mean to diminish what Brian did either. First of all there were some quite creative ideas in his software. And it was a complicated piece of software. It would break all the time. Anyway, we are only on the first question!
Something I thought of while listening to the story: the way you tell it, the milestones come not at the beginnings of things, but when the results happen. One could say: this team started at this point, that team started at that point. But the way the story is going now it seems that the beginnings are not as important as the endings.
That is right. The results, the published results, are what really matter. “I woke up in 1968 and had a dream of doing cosmology” that is one thing. But “I found a supernova at redshift 0.5 in 1995”; that is another. So, to get to this, Brian wrote that software.
Is the software a new milestone?
The formation of the High-z team, the writing of the software, and the finding of the first supernova by us, all happened in 1995. It was by the skin of our teeth. We put the ESO proposal in late so we got it for the next semester instead of the one we intended but that worked out OK. We were very lucky. By 1995 we had a supernova, SN1995K at z = 0.479, the highest yet. We published it in 1996. It had a two-color light curve, and we had a spectrum that showed it was a SN Ia. Bruno used it to show that we saw the time dilation effect predicted back in the 1930’s by Olin Wilson. At that time, the SCP had published exactly one object: the one for which they had no spectrum and no colors. We were in the game, and, based on the publications, ahead of them in 1996 despite their 5-year lead and much more generous funding. Now, they also had a handful of SN Ia they were working on, I think they had seven. They would go to meetings and show the data. They claimed over and over that it showed the universe was decelerating. Decelerating! They published a paper in the middle of 1997 in the Astrophysical Journal that said: based on our seven supernovae, we believe we see evidence for cosmic deceleration, a universe that is consistent with Omega_Matter of one. Omega_Matter of 0.88 was their best estimate. And they claimed this ruled out Omega_Lambda. Wow! [laughs] I thought: they have beaten us. We got started late and we are not done. They have beaten us. I thought: That’s too bad for us, but let’s see what result we get. By 1996 we had a few supernovae. Into 1997 we had a few more. So we were going to get a result.
If the number five milestone is the formation of the High-z team and its finding of its first supernova, should not the SCP get a milestone like that?
Yes, the SCP is an essential part of the story. They picked up where the Danes left off. But their first cosmological result published in 1997 was wrong, wrong, wrong. I told Adam that the punishment for being wrong should be as big as the reward for being right! He said “I am getting a reward?”
That as a milestone?
I do not know what you would call it. Millstone? Pothole? They never refer to it. They skip over it. That is the way anyone on their team would later tell the story. They would say: well, we developed the techniques and the High-Z team copied them. This is not technically correct — we copied the techniques all right — from Zwicky and the Danes and Calan/Tololo and NOAO’s IRAF and lots of other places. But the SCP usually don’t talk about the 1997 results. They published a result in a refereed paper, where they claimed that the supernovae were showing that the universe was slowing down. That was in July of 1997. Now, the pace of the story picks up. It is 1997. Both teams had gotten some HST time. The SCP had measurements of one supernova with HST from 1996. We had three. And both teams wrote those results up and talked about it right at the beginning of 1998.
That is mostly based on the Hubble data?
It is kind of peculiar. In the case of the Supernova Cosmology Project, you should check with them, it is the same data set from which they got an Omega of 0.88.
Right, minus one, plus Hubble.
Minus one and then plus one, I think that is right. They threw one out. You’ll have to ask them what the criterion was, I am not sure, plus one object from the Space Telescope. And then, with almost the same data set, they presented a completely different result in their Nature paper from the first of January, 1998. We knew that they had a paper coming. I did not know ahead of time what it said. This is when I was on leave at Santa Barbara. Gerson Goldhaber came to give a talk. This is in the book. He has since written up a little memoir, maybe he gave it to you, where he says, “At Santa Barbara I showed this result.” I did not remember that. Now, I am sure he showed it. But I was so curious about what the difference was between the new sample and the old sample that had ruled out Lambda that I have no recollection of the histogram Gerson showed. The way he tells the story, he says, “I showed them the result and that is how they learned about cosmic acceleration.” What is interesting to me is how little I learned at that talk. We had a plot at that time, but I do not remember showing it to him, which had five objects, maybe, and they were all on the accelerating side of the Hubble diagram. Now, whether I showed that to him or not, I could not swear. I do not know. Probably not. I doubt if he asked about our work. The SCP viewpoint was that we were far behind them and they had nothing to learn from us. This was not accurate. Anyway, the published record is the following: their paper at the beginning of 1998, the Nature paper, has almost the same objects as the July 97 paper, only a different result. And, the paper by Peter Garnavich for our group came out in the ApJ Letters in February of 1998, which of course had been submitted many months earlier, had just a couple of supernovae, but including 3 beautiful space telescope observations. Both groups concluded that the data were consistent with a universe that would expand forever. Well, that is good. 
That is different than Omega equals one as the SCP had claimed in July. It was consistent with a low Omega_Matter. If someone told you that there was some other reason to believe that the total Omega was one (and of course people believed that based on the theoretical idea of inflation) you’d need something else to make up the difference. Today we’d call that the dark energy. The experimental evidence from the cosmic microwave background (CMB) that eventually showed Omega_total was one, was not yet very secure, but brave people did make this “subtraction” argument. It goes like this: If the total Omega is one and what you measure from clusters is Omega_matter of 0.3 then there must be something else that is 0.7. This is the argument that Jerry Ostriker and Paul Steinhardt, Mike Turner and Lawrence Krauss, made separately in 1996, 1997. They just knew in their hearts that Omega_Total was one, so if you kept finding that Omega_Matter was 0.3, there had to be something else. This was not a new argument. And that is not the same as finding cosmic acceleration. This is the point on which Saul and I have had some back and forth, the ‘what did he say at the January, 1998 AAS’. What I think he said, was: we have got the data but we do not know what to do about the reddening. It could be that we live in a universe that is accelerating. Or it could be that there is two-tenths of a magnitude of absorption that is making distant supernovae fainter. He said “we do not know which one it is.” If you read the newspaper accounts from January 1998, Saul emphasized the acceleration to some and the possible systematic errors to others.
Their poster does include the different colors, right? It includes histograms of different colors but that does not mean it is accounting for the reddening?
Yes and No. What Saul says is “anybody would look at that poster and see that we had evidence for cosmic acceleration.” But if you read the words on the poster, it says that the “systematic effect”, by which they mean the reddening, could be as big as the cosmological effect and that they hoped to do a better job on that soon. So they did not stick their necks out and say: we know the amount of reddening, and after we take that into account there is cosmic acceleration. They did not say that in the poster. You can read it. You cannot have it both ways. You cannot both use the subjunctive and say: if there is no reddening then we would see cosmic acceleration, and then later say, we announced cosmic acceleration. That is not the same thing. Really, if you were to boil down this discussion that Saul and I are having, it is about that. They certainly said: if there is no reddening, then there is acceleration. But they didn’t know how to handle the reddening. (Some of this is summarized at my website: http://cfa-www.harvard.edu/~rkirshner/whowhatwhen/Thoughts.htm)
Would they have been able to make measurements about reddening?
They had all the data they needed. They did not yet know how to use it. They had not yet developed the equivalent of MLCS because they did not have their own low-redshift sample, as we did. I think that is the key point. The first paper from SCP that does account for reddening was published in 2003. What they showed in their 1999 paper was that the colors of our nearby supernovae and of their distant supernovae have the same distribution. That suggested they have about the same amount of reddening. But the SCP did not have a method for determining the dimming by dust to individual supernovae. We knew what to do but we did not have the data.
The Hubble Space Telescope observations began in 1996?
In 1996 I think. You can check. That is a whole story on its own.
It seems that there is a slight misunderstanding there as well. They put in a proposal for HST time.
There is a complete misunderstanding, absolutely. Anybody can propose. The Director of the Space Telescope Science Institute assigns the time. Yes, the SCP group feels they owned that idea, even though it had been articulated by Gustav Tammann in 1979. In astronomy, we are used to competition where there might be two groups that have similar ideas. The teams submit their proposals and the observatory has to decide. Give one time. Give neither time. Give both time. We were perfectly happy to lose. That is, we were perfectly happy to compete and have a fair outcome. But the SCP seemed to feel that they owned this idea.
I guess my question was that they thought that the director gave you their proposal.
No. Not to my knowledge.
You independently put in your own proposal.
The director encouraged us to propose. The STScI Director, Bob Williams, had been director of Cerro Tololo. He had been Mark Phillips’ boss, and Nick Suntzeff’s boss. He wanted to know if the proposal from SCP was sound. I said: you can make these measurements without the space telescope. At redshift 0.5, they are not that difficult and they can be done from the ground, and so you should not give any time for this program. How about that? Bob Williams said, “That’s all right, I think that the Space Telescope should be involved in the most important science. So even if it is true that you can do it from the ground, I want the space telescope to do it because this is really great stuff.” So it is true that the director of the Space Telescope Science Institute encouraged us. And it is true that I stood up at a meeting and said, “you do not need Space Telescope to make these measurements,” which was technically correct at redshift of 0.5. I am always on the wrong side of these things. Every time they ask for advice I give them the wrong answer. [laughs] Should you do the Hubble Deep Field? No, a waste of time! [laughs] I am remembering the circumstances better now. There was a meeting at the Space Telescope and Nino Panagia got up and said the SCP had asked for director’s discretionary time. The Director is perfectly capable of deciding if he wants to give it to people. He’d usually get independent advice from competent people. He asked Mark and me and others if this was a sensible proposal and he also encouraged us to apply for director’s discretionary time. I think that’s it. Check with others. The STScI director sets the science program, and he invited us to apply, too. I think that’s right. Yes, that made the SCP very mad. Bob Cahn who was the head of Physics at LBL called me up and yelled at me. He seemed to think they owned the idea of using HST. He didn’t know this had all been discussed in 1979 by Gustav Tammann.
So milestones are the results.
Yes, the results.
So the sixth one was their announcement of deceleration?
Well, I guess so. You are keeping count. The January 1998 stuff was like a progress report. It was not a milestone and it was not an “announcement of deceleration.” But, look what we learned. We learned that the SCP had forty supernovae that they did not know what to do with because they were paralyzed by the reddening question. They had no nearby sample of their own. We knew they were not stupid people and that they would, sooner or later, figure this out. On the other hand, we knew exactly what to do. We had measured the reddening to each of our nearby objects and to each of the distant ones we had. We only had ten good ones and a couple that were not very developed. But that was enough if you had the MLCS. All the nearby objects were from our sample. In December of 1997, Adam called me up on the phone. He said, “This is really crazy. The mass keeps coming out negative.” I said, “You have forgotten something. You have forgotten to divide by the square root of pi or something. This cannot be right. Do it again.” Adam was doing the heavy lifting and Brian did an independent analysis. He was by that time in Australia. And by early January, right after the AAS, we were terrified. We saw that the other guys had this big data set but hadn’t announced a result. We knew that they were going to figure out what to do about the reddening. We had gotten our data analyzed and it was still coming out with these values of negative mass, or, acceleration. Well, there is a long account in my book including the email where I said, “In your heart you know this is wrong.” Really, it just seemed like a terrible result, a horrifying thing. Brian said that later. So, in January, we really worked hard. We got the data analysis done. We got a paper written. 
So when Alex Filippenko and Adam went to the Marina Del Rey Dark Matter meeting in the third week of February in 1998, we agreed it was OK to have them announce that we had definite evidence for an accelerating universe using supernovae. So, Alex did. That is reported in Science, the New York Times, and Dennis Overbye’s book. That was the moment when somebody announced that the data for supernovae showed that the universe was accelerating. I would put that as a big milestone. The announcement of the discovery by the High-Z Team in February 1998. Then, we promptly submitted that paper to the Astronomical Journal, in early March. That paper came out in September of 1998.
Were you at the Chicago straw poll that was held when they called a workshop one weekend in the spring? Do you remember what that was like?
Oh, yes. Bill Press was the master of ceremonies to help us get started. His attitude was that neither of the teams had enough data to back up their claim. But, we made the case. I spoke for our team. We had been working like crazy to submit 3 papers by this time. Rocky Kolb probably has a good memory of all that, because he was trying to be cautious. On the other hand, this was so exciting, and it fit together so well with what was needed to have a flat universe, that the theoretical acceptance of this result was extraordinarily quick and easy. Most people seemed convinced. But a poll doesn’t determine what’s true. Because of that subtraction argument and getting the age of the universe right, it fit with a lot. Plus the Cosmic Microwave Background (CMB) stuff was getting better too, pointing toward a total Omega of one.
In spring or summer?
I do not remember exactly when those results came out. People were working really hard to beat the COBE satellite.
That interestingly overlaps with one of my advisors’ books called How Experiments End in which he at first discusses how different philosophers might answer the question, how do experiments end. Someone like Thomas Kuhn who wrote in The Structure of Scientific Revolutions that experiments end when there is a gestalt switch, between the duck and the rabbit, when the experiment fits what the theorists would like to see. Or, the social constructivists would say that the experiment ends when it fits the interest of theorists, which is again different from the Kuhnian conception. Peter Galison says, no, that is not what it is. The book’s goal is to figure out through case studies, when does the experiment end? What would you say in answer to this question?
First of all, the experiment is not over. That is what I would say. The measurement of cosmic acceleration through supernovae is not done. Yet. The sample size was so small that it was clear what we would gain in the precision of the measurement just by having more supernovae. We were not yet up against any systematic error or other variation, provided that we understood the dust well enough, which, for these purposes, our team did. It is only now that the number of high redshift supernovae is getting to be a few hundred, and the number of low redshift supernovae is still paltry, only about a hundred — but by next year it will be four hundred — that we will have done the measurement. We will have a sample that is big enough so that the errors will all be due to things that we do not understand. We are now coming to the end of the easy part where the errors are due to the sample size. Up until now, if you get nine times as much data, you’ll have three times as good an answer. And basically, that is what we have been doing since 1998. But it is coming to an end.
Do you see that it is going to happen that not only the supernovae as cosmological indicators but also the understanding of them for themselves, so as scientific objects, will improve with this greater sample size? Or is that not the case?
That is the curious thing. We had a meeting at Santa Barbara last year that was focused on the question: how do type Ia supernovae happen? And the short form of the answer is: we do not know. There is a nice article that was written in Science Magazine, by Tom Siegfried summarizing the Santa Barbara conference. The idea that the supernova is an exploding white dwarf seems right. There are very detailed calculations about how a white dwarf might explode. But on the astronomical part of how you get a white dwarf so it is ready to explode, and what history of a star would lead it to that brink of disaster, there are two schools of thought. One is that it is in a binary system and it has a normal star as a neighbor that transfers mass onto the white dwarf. The other is that it is in a binary system, but there are two white dwarfs, they merge and explode. You would think that we could tell the difference. I would say the very curious thing is that the supernovae as “objects in themselves” is where the mystery still is. We do not know which stars become the supernovae that we see. It is almost certainly the case that there are two paths. There is a long path to make Type Ia’s that might take three billion years that accounts for the Type Ia’s we see in elliptical galaxies. Then almost certainly some Type Ia’s come from stars that have had a fairly short, recent evolution, less than one billion years from when they were hydrogen burning stars to the explosion.
Did the work of the two teams have the spin-off result, which was to improve the understanding of supernovae as scientific objects, or do you think that that was really separate?
That is coming now. Our team has always been made of people who love supernovae.
But was it not also in the 1980s?
Yes, there were big steps forward in the 1980s: the classification, the light curve shape.
The 1990s were not really a time for that?
I would say the 1990s brought the explanation of the relation between light curve shape and luminosity.
The Phillips relation?
Yes. Have people really understood that? I would say, yes and no. If you had a real theory, even a phenomenological theory, you would know whether the simplest thing, which is the amount of radioactive nickel that is made by the explosion, determines the luminosity and the light curve shape. That is one thing determining two things. That would be good. [laughs] I think that is probably true. But the definite evidence is not really there. There is a lot more to do on the supernovae as astronomical things. How do they explode? What do they look like before they explode? Who are their neighbors? What are the effects? All that. That is part of why the meeting in Santa Barbara was so much fun. If they really are white dwarfs, then we should talk to people who study white dwarfs. So we did. These are people I have never met. I have been the president of the American Astronomical Society. Gone to meetings for decades. They are from another part of the forest. It is a different set of people. I just do not know them. The white dwarf people: sounds like a small group. [laughs]
They are very good. They have been doing surveys on how many binary white dwarfs there are. If you take all the white dwarfs that we know, how many supernovae would they produce in a hundred years? And the answer is: none. Whereas the rate for Type Ia supernovae is about one in a hundred years. So this is not good.
That is very helpful.
If you ask: where is the big uncertainty right now? Of course the cosmology is fantastically important and everyone cares about it. But, the big uncertainty is in supernovae as objects, as you call them, as astronomical things. And understanding how they are connected to star formation and the evolution of galaxies, all that stuff. Putting them in their context: What are their rates? What were their rates in the past, and what will the rates be in the future? Oh, this is good. If we understood the astronomical stuff we could answer those questions.
When you call them astronomical things, are you familiar with the book from the history of biology called Toward a History of Epistemic Things, by Hans-Jörg Rheinberger?
No. I do not speak your language.
The language you use is overlapping with that, though. That is really neat. One last thing, may I ask you: had you considered a nonzero Lambda before 1998?
Yes. It is very interesting. You know, there was a very good review, written for the Annual Review of Astronomy and Astrophysics, about Lambda.
It is not “The Cosmological Constant is Back” by Turner and Krauss is it?
No. This is a paper by Bill Press, by not Michael Turner but by Ed Turner, and by Sean Carroll who was a graduate student at Harvard who was just finishing up.
Now at Caltech?
Yes. You can find it. It is on the cosmological constant. It talks about: what is the astronomical evidence and what is the theoretical story?
You read that and many people were aware of that paper?
Oh, everybody knew about that paper. Sean Carroll was right here and Bill was right here, and there is not one word about supernovae in it, because nobody thought that was going to be the method. People were talking about gravitational lenses. That was one time. The other time I remember reading about the cosmological constant was when quasars had just been discovered, and there were people who were not taken seriously, the Barnothys (this is fodder for the social construction people), they seemed like outsiders.
Were they the steady staters?
I do not know, I was so young then. I may have this garbled. But they said that the quasars are not what you think. I think they said quasars were gravitationally lensed galaxies. Well, it turns out that there are gravitationally lensed galaxies, but they are not the quasars. There was also a whole discussion about whether the cosmological constant could make the universe loiter and account for the clustering of quasar redshifts near z = 2. This was a red herring with the cosmological constant at its center. [answers a phone call]
You started talking about a paper by Jim Felten…
Yes, there was a very long paper by Jim Felten working out cosmological histories for different cosmological constants. It seemed like an odd thing to do, but Jim has always been a contrarian, and follows his own interests even when the mainstream is flowing in another direction. I guess he gets the last laugh — we are all doing the same thing numerically every day when we try to fit the supernova data. So this was a subject that never died. But it was disreputable. You always felt a little slimy.
It was disreputable?
You always felt people were using the cosmological constant to solve a problem that they should have gotten right the correct way, whatever that was, but they were somehow sneaking out of it by putting in the cosmological constant, in which nobody believed. But they would say: what if?
Do you think that has something to do with the renaming of Lambda as Dark Energy?
No. That is more subtle. The cosmological constant is just one possibility. It is constant. The other models change over time. The quintessence models, and then a very big variety of other things that have more specific origins, comprise a much bigger class. The cosmological constant is just one possibility. Calling it Dark Energy helped us avoid being simple-minded.
In 1998 did you think of it as Lambda?
Was there a turning point when you started to think of it as Dark Energy?
Mike [Turner] was good at promoting names for things. I think the reason we call it Dark Energy is that Mike got us to do that. It is good in the sense that it sounds like a tug of war between Dark Energy and Dark Matter. That is good drama. But of course, the thing that makes the universe accelerate is not its energy, but the pressure, the negative pressure. We should call that “tension.” And it is not dark, it is clear. It does not absorb light. [laughs] It does not emit light. It is just clear tension. Sean Carroll pointed out that would be the correct name, but we do not use it.
Yes. [laughs] Formally, that would be more correct. There is more to the story, I guess. We wrote three papers in the spring of 1998.
The Riess, the Garnavich, and the Schmidt.
Right. The Garnavich paper terrified me. This was trying to find out the equation of state for the dark energy. Is it the cosmological constant or something else? This is the first paper that did this with our data. There is also a paper by Martin White that came out earlier. He did not have our good data, but he had the idea even better, being a real theorist. But Garnavich said: I am going to do this. I said: Oooh, I do not know. This is where you feel like you are getting off your turf and into the other guy’s turf. I was worried that we would make some elementary mistake and everyone would laugh at us. So, we recruited Sean Carroll who had just written his review article, and had been a graduate student here, and Brian Schmidt’s officemate, somebody we really knew, to be part of this. I wanted somebody who knew better than me, to say: yes, this is okay. I was very uneasy that we were going beyond our competence. Garnavich was fearless. Saurabh Jha? Fearless. The kids said: we know what we are doing. I said: Okay, but I do not know what you are doing and that is not good. I wanted to be reassured by somebody who had the theoretical competence that we were not doing something completely stupid. I was very proud of that paper, after having been convinced that we should submit it. Because it is the first of the papers on constraining the properties of dark energy, and that is a huge industry now. There are hundreds of people talking about how to do this. Really, that Garnavich paper is the first one.
Is that a milestone in itself?
It is for me. You may have more of an outsider’s perspective on it. There was the Martin White paper, and there was our paper. Now there are a million papers. Well, a thousand. Some of the most useful put everything together: Max Tegmark, using the Cosmic Microwave Background, the galaxy clustering, and the supernova data all together, made this new picture of dark energy seem real. That kind of exercise came very quickly because the CMB results started to get good just after the supernovae showed there was acceleration.
Would you count that as included in this story, the CMB as a confirmation?
I think so. Suppose we had all the supernova measurements that we have today and we had no measurements of the CMB. Would we believe that the universe was three tenths Dark Matter and seven-tenths Dark Energy? I do not think so.
That is a bigger question that I do not really know. That is a good question for a historian, right? It is hard to say.
I have two questions.
I am in no hurry.
Was this a discovery in the conventional sense?
I think so. I think this was something where we made measurements. We knew what the subject was. We even knew what the question was. But we had gotten it backwards. We thought that we were going to measure cosmic deceleration. And we did not. We measured cosmic acceleration. There is no other measurement, as far as I know. The CMB does not show that there is cosmic acceleration. The galaxy clustering does not show you that there is cosmic acceleration. This is, as far as I know, the only measurement that shows that the acceleration actually happened. And it shows when it happened. Sometime between redshift one and now. Of course, putting the supernova data together with all these different measurements makes a coherent picture with a physical theory. But the actual measurement of acceleration — just the phenomenon, is there acceleration? There is no other way to do it that I am aware of. I think it is a big deal. It is a qualitative fact about the universe that is completely upside down from what was in every textbook in 1990. A very advanced book would say something about the cosmological constant as a possibility. But, look in any elementary book and there is no discussion of it. It is all about: is Omega_Matter equal to one, and is space flat? Of course all those books are being revised now [laughs]. Nobody has started from scratch and written about a universe dominated by Dark Energy. I’m doing that.
A formal way of asking this question would be: how does the resurrection of the cosmological constant change our view of the universe? A more informal way is: when did you start teaching acceleration versus deceleration to the students who just take Astronomy 101?
Right, which I do teach. This idea of diving into Einstein’s dumpster and picking up his discarded ideas, that is really interesting.
Even though we are not thinking of it in the same way that he did. We are really thinking of it in terms of vacuum energy. Well, I suppose we could go back and look. I have my course notes from different years. But I have basically thrown away that whole section and I have a whole new section, featuring our fine work and requiring my book to be read. [laughs] What I have discovered is a lot of students buy it used. There is a huge market for used books. Anyway, I changed my views as the microwave background results got better. That made this whole picture of a Lambda cold dark matter cosmology seem more certain.
You mean in 2000?
There were the balloon ones: Boomerang, when was that?
I think that was in 2000.
It was at the point where the scale of the fluctuations of the microwave background had been measured, which is a good measure of total Omega. I always believed that Omega_Matter was 0.3 because I had done clustering measurements. I had studied galaxy clusters. I knew about all that stuff. Brian and I had also measured the Hubble constant, and I thought we knew the age of the universe. We could get that constraint on acceleration or deceleration. So, it was somewhere around that time. Of course, the art of putting all of these things together got a lot better too. Max Tegmark, who is now at MIT, became the great artist of convergence, putting all these things together. [laughs] You have to be careful, because forcing agreement is not fair. Well, it is fair from the outside. But from the inside, people who were working on each of the methods should never say: Oh, it must be right because it fits with the other stuff. You should look very hard to make sure that you have done all of the internal tests.