Alex Kim

Interviewed by
Ursula Pavlish
Location
Lawrence Berkeley Laboratory
Usage Information and Disclaimer

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.

Preferred citation

In footnotes or endnotes please cite AIP interviews like this:

Interview of Alex Kim by Ursula Pavlish on 2007 July 31, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/34519

For multiple citations, "AIP" is the preferred abbreviation for the location. 

Abstract

Graduate work at U. C. Berkeley, starting in 1991; joined the SCP in 1992, when it was called the High-z Search. On the discovery of the group’s first supernova, SN 1992bi, at the Isaac Newton Telescope, and the concomitant paper. Kim collaborated with Ivan Small and Matthew Kim to write, in IDL, the supernova search, analysis, and slice-plot display software. That software has since been converted to a C++ version in use now, for instance with the Hubble Space Telescope. A second batch of supernovae, approximately five, was found at the INT. Kim binned the first spectrum, taken by Robert Kirshner for the group, and found in it the first supernova spectral footprint. After that, the group began using the better CTIO 4-meter telescope. Some observing was done at Kitt Peak. Kim explains the taking of supernova photometry and the fitting of it to established light curves. Kim modified the SN-MINOW light curve fitting software. He also wrote the code to produce the Omega_Matter versus Omega_Lambda plots. He spent time in France starting in 1997, after graduating with his PhD in 1996. Kim’s attitude toward Lambda, thoughts on the process of discovery, and a few comical stories.

Transcript

Pavlish:

It is July 31, 2007. I am here at Lawrence Berkeley National Laboratory to interview Alex Kim, who was a coauthor on the Supernova Cosmology Project’s paper which stated that the universe is accelerating.

Kim:

Even before it was called the SCP it was called the High-z search. I came to graduate school at Berkeley in 1991. They gave all the students a brochure about the research that was going on. I saw this little paragraph about trying to measure the weight of the universe. It sounded very interesting to me. I had no idea what kind of physics I wanted to go into. The professor in charge of that was Rich Muller. I came up to talk to him. He talked about various supernova projects that were going on. They were building a telescope to do a nearby supernova search. There was a nearby supernova search; Bob Tripp was starting a near-infrared supernova search. There was a high redshift search going on. I had no idea, at the time I was talking to Rich, that they had found zero high redshift supernovae. The nearby supernova searches were quite successful, but they were on hold, and I believe that this new telescope was meant to continue that work. At the beginning of 1992 I started attending group meetings. That is when there was a search going on that was run at the INT. You have probably heard that there were some searches done at the Anglo-Australian Observatory, which were not successful. This is my understanding of what was happening. We had made an agreement with the INT to provide a camera, or at least a chip for a camera. In turn we would get time to do our supernova searches. As far as I understand, we never delivered that camera. One of the fun stories about that is that a CCD was stolen out of Gerson’s car at a certain point. I believe this was the chip that was meant to be on that camera. Ask Gerson about that. I think insurance covered that. Unfortunately, I don’t think the camera ever worked. But, fortunately for us, the INT was a great site. During that run, our very first high redshift supernova was discovered.

Pavlish:

That is in the Canary Islands?

Kim:

That was at the Canary Islands.

Pavlish:

INT is short for?

Kim:

Isaac Newton Telescope. At that time, I was just starting out. The graduate student who was pushing that through was Heidi Newberg. She should have good perspectives as to the early days of the project. Also, when I started attending the meetings, that is when Ariel Goobar came.

Pavlish:

The meetings were held up here?

Kim:

That is actually kind of a funny story. On the fifth floor, in a corner room where Greg Aldering is now, that is where the meetings were held. That is where there were a number of computer scientists. Also, Heidi’s office was in there. It was not even a sit-down meeting. We would just mill around and talk. Even though Rich was the nominal head of the group, Carl Pennypacker was the person who was practically in charge of the program. He was the one who led the program. Of course, when you have the meeting in a little office like that, where people are behind their desks, people are working more than paying attention. That really got Ariel upset; he felt that this was not professional. We ended up having meetings in a real room. I cannot remember which room that was. My memory of that time is that Saul was a term scientist (he may have been a postdoc, but probably a term scientist), and he was in charge of the software. The big project that he was in charge of was converting VISTA to UNIX. I do not know if you are familiar with this, but VISTA was the standard astronomical data analysis software for Lick Observatory. It ran on VMS machines. We were starting to go toward the UNIX machines. There was no UNIX version of that. So, one of Saul’s big responsibilities was to make the conversion. That is another thing that never worked. We ended up using IDL as our main software instead of sticking with VISTA.

Pavlish:

This was the software to find supernovae?

Kim:

This was actually the software to do nearly everything. We would collect the raw images from the telescope, but then we would have to process them to make them science quality: taking out biases in the data, or renormalizing them so that they all had consistent fluxes, because each pixel may have a slightly different throughput.
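
[Editor’s note: a minimal sketch, in Python/NumPy rather than the group’s IDL, of the cleaning step Kim describes — bias subtraction and flat-field renormalization. The function name and the fake frames are illustrative, not the SCP pipeline.]

```python
import numpy as np

def clean_image(raw, bias, flat):
    """Turn a raw CCD frame into a science-quality image.

    bias: additive electronics offset to subtract.
    flat: frame recording each pixel's relative throughput; dividing
          by it renormalizes pixels to consistent fluxes.
    """
    flat_norm = flat / np.median(flat)  # typical pixel left unchanged
    return (raw - bias) / flat_norm

# Illustrative fake data: constant bias, few-percent pixel-to-pixel throughput.
rng = np.random.default_rng(0)
bias = np.full((100, 100), 300.0)
flat = 1.0 + 0.03 * rng.standard_normal((100, 100))
raw = bias + 50.0 * flat + rng.normal(0.0, 5.0, (100, 100))
print(clean_image(raw, bias, flat).mean())  # ~50, the sky level
```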

Pavlish:

You wouldn’t say there was one set of software to find the supernovae, and one set to get spectra?

Kim:

The supernova search really is decoupled from the cosmology analysis. We do not have to be very careful when we discover supernovae, at least in terms of the photometry. We can get it right to ten percent or so. All we had to be sure of was that there was this little blip going off that was not there before. When we are doing the cosmology analysis, we have to be much more careful, because these numbers really affect the final answers. Although one could build light curves and do the cosmology analysis with the search software, we do not do that. It is a separate software package. Also, the search software is meant to be fast and pretty robust. It does not use the most optimal algorithms possible. Whereas, if you are very careful, you can study the supernova once you know where it is in the image, and you can do an optimal extraction of the information from that.

Pavlish:

And that is separate software?

Kim:

That is separate software.

Pavlish:

Is that when you do the spectroscopic analysis?

Kim:

The spectroscopic analysis is yet another step.

Pavlish:

The basic software that Saul Perlmutter was working on for example, is the software to just find the spot in the sky?

Kim:

Really, at that point, he was working on this VISTA implementation. But in terms of the software that we actually used to do the discovery of the supernovae of the SCP, it was really Ivan Small, Matthew Kim, and myself who wrote that.

Pavlish:

Looking from a simplistic standpoint, can we put the software into three bins?

Kim:

I would say that there is a first bin for the imaging component. The first step is something that is standard, something that almost any astronomer going to a telescope would apply. It is not specific to supernovae. That is the cleaning of the images. There are a number of canned software programs that do that. I believe that VISTA included something like that. We wrote our own version in IDL. That is just cleaning the images. The second part is doing the supernova search, which involves taking old images, taking new images, transforming them so that you can directly compare them, subtracting them, looking for new blips on the subtraction, and gathering statistics about those blips. Those statistics were the most time-consuming part of writing that software, because you do get a lot of background. At least, early on we got a lot of background at the INT. Even though you saw a blip, you were not completely convinced that this was a real supernova. In those days, we actually made a printout of every single candidate that we had, in which we had a picture of where the supernova was, before and after. Also, it had some statistics like the shape of the supernova; did it look like it had moved; the distance from the host galaxy.
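
[Editor’s note: a toy version of the subtraction search Kim outlines, in Python with NumPy/SciPy. It assumes the old and new images have already been transformed onto the same pixel grid and flux scale, which in practice is much of the work; the threshold and statistics are illustrative.]

```python
import numpy as np
from scipy import ndimage

def find_candidates(ref, new, nsigma=5.0):
    """Subtract a reference image from a new image and list bright new blips."""
    diff = new - ref
    # Robust noise estimate from the median absolute deviation.
    noise = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    labels, n = ndimage.label(diff > nsigma * noise)
    candidates = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        # Simple statistics like those on the group's printouts:
        # position, total flux, and a crude size measure.
        candidates.append({
            "x": float(xs.mean()),
            "y": float(ys.mean()),
            "flux": float(diff[ys, xs].sum()),
            "npix": int(xs.size),
        })
    return candidates
```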

Pavlish:

Did you write the software for that?

Kim:

Yes.

Pavlish:

Gerson Goldhaber showed me that.

Kim:

The way that worked, I think that Ivan wrote a lot of the basic tools. Then, Matthew and I wrote the things that were on top of those tools that allowed us to put together in one GUI all this information and also make the printouts.

Pavlish:

It looked like in some of the earlier ones you had the topography of light from the supernovae.

Kim:

Yes. We made little contour plots.

Pavlish:

Later, it looked like you started showing three-dimensional plots.

Kim:

Surface plots. Well, one plot we used a lot in the old days was a two-dimensional projection of the mountain plot. What we would do is plot rows separately, and on each row we would have the flux of the reference image, the flux of the new image, and then the difference. These are what we called slice plots. Matthew and I were writing all this code. Once in a while Saul would come by and say, “I want this.” One of the things he wanted was those slice plots. It was irritating to implement, but it turned out to be very useful for us. It allowed us to see whether or not there was a good subtraction.

Pavlish:

Those mountains that I saw, they do not look like slices.

Kim:

If Gerson has those things we can go and see if we can find them. If I see one, I can say, that is what I mean. I believe that the surface plots came after that. This was the first software that we used to discover the first large number of supernovae. The first supernova was not discovered using this software. SN 1992bi, I think it was, was done before that; Ivan was there at the time, along with Heidi. When we started doing searches where we found more than one supernova, so the second round of searches at the INT, we started using this new software. Subsequently, I could not tell you when specifically, Robert Knop rewrote a lot of this software into C++. It is some version of the C++ code that is being used today, in the HST cluster search, for example.

Pavlish:

Those visualizations, the way you showed the supernovae with the lines around them, is that a standard method in science, or were you thinking of something else when you made that?

Kim:

I was not aware of any precedent in doing that. In terms of doing subtractions like this, I am trying to think if I know of who was doing this before we were.

Pavlish:

It is almost like a visual subtraction, right?

Kim:

The people who were doing supernova searches: there were the searches being done here, the infrared search led by Bob Tripp, and the nearby supernova search; they were doing subtractions. They were looking at nearby galaxies, so I do not know if they had the same kinds of problems as we have, where the galaxy is so compact that the shape of the galaxy really matters when you are subtracting out the supernova. I believe that the people at CTIO just flashed images up and they actually saw: oh, here is a new spot where there was not one. We may have been the first people who did that, but I could not swear that was the case.

Pavlish:

Do you remember how much time passed between the first and the second data takes? I am interested in every time you took data.

Kim:

I could not tell you off the top of my head, I am sorry. But, we have the data sitting around some place. All the follow-ups that we have are labeled by what day the thing was observed. I am sure we have somewhere, unless somebody burned them, our observer logs. We had binders full of lists of all the observations that were taken. Gerson may be the keeper of that information as well.

Pavlish:

At this point you were midway through your thesis? Done with your thesis?

Kim:

I had not even gotten very far. When I was still just beginning my thesis, in my first or second year here, I was told by Heidi that there was going to be a new regime. Saul was going to be taking over from Rich and Carl. I heard the rumors about what was going on, but ultimately I think it was a good thing. Rich was very interested in climate and climate change. Carl was also working on ‘Hands-On Universe.’ Saul was really able to put full-time commitment into the project. Also, I think that Rich and Carl had alienated people in the community. So, it was better to have a cleaner slate with Saul irritating people [laughs] instead of Rich and Carl, who had already irritated people. When I say irritated people: the biggest problem that we had with our first searches is that we did not have any spectroscopic follow-up. At the beginning, it was just an exploratory program. People were very skeptical whether we would be able to churn out the supernovae. So, we really had a very minimal amount of time allocated to us to do a real proper supernova cosmology analysis. We had a minimal amount of time just to find the supernovae. When we had our second batch of supernovae from our second INT run, we did not have sufficient spectroscopic time. I do not know how much spectroscopic time we had, if we had any. Saul had to go and call people at observatories and say, “Can you please take a spectrum of this and this and this.” I think he irritated people that way. But, it was something he had to do in order to be able to do this. I believe that the very first spectrum that we have of a supernova was obtained this way, by Bob Kirshner or one of his underlings. I remember I got this spectrum from Bob, and him saying, “Here is a spectrum. There is no supernova here, just some ratty stuff.” But then I binned the data, and by binning, the supernova spectrum actually came out.

Pavlish:

What do you mean by binning?

Kim:

The spectrometer has quite high resolution. That means that each little datum represents a very small breadth in wavelength. What ends up happening, is that each element of the spectrum can be quite noisy. However, for a supernova the spectral features are broad. A whole bunch of bins can be combined together to reduce the noise. But still you can see the broad spectral features underneath that.
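
[Editor’s note: a minimal sketch of the binning Kim describes, in Python/NumPy; the wavelength grid, feature width, and noise level are invented for illustration.]

```python
import numpy as np

def rebin_spectrum(flux, width):
    """Average adjacent spectral elements in blocks of `width`.

    Noise per binned point drops by roughly sqrt(width), while broad
    supernova features survive the averaging.
    """
    n = flux.size // width * width            # drop the ragged end
    return flux[:n].reshape(-1, width).mean(axis=1)

# A broad absorption feature buried in pixel-scale noise.
rng = np.random.default_rng(1)
wave = np.linspace(6000.0, 7000.0, 2000)      # wavelength grid, angstroms
feature = 1.0 - 0.3 * np.exp(-((wave - 6500.0) / 80.0) ** 2)
noisy = feature + rng.normal(0.0, 0.5, wave.size)
binned = rebin_spectrum(noisy, 40)            # noise down ~6x; dip now visible
```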

Pavlish:

So you did that. You received the fax?

Kim:

Yes, I did that. He gave me the data. I got an email from Saul, but the provenance of that spectrum was the CfA. That was the very first supernova spectrum that we had for one of our objects. It was taken by somebody who was kind enough to point his telescope at one of our objects. We were at that point, at the INT, getting a couple. I think our big successful one was the run where we got five supernovae, if I remember this correctly. That was our second run. We had another couple of runs where we were not as lucky. I think we got a couple more. The next real advance, for us, was starting to use the CTIO 4-meter, where they had a much larger field of view, and it was a 4-meter telescope as opposed to the INT.

Pavlish:

Step back a bit. When you did this binning was that a standard procedure? Or was it going in the face of authority?

Kim:

It was something like the latter, though I cannot say that it was profound. [laughs] Filtering of data is quite a standard thing. I do not know why Bob did not do it. I would say that, probably anybody who thought a little bit about it would have done it.

Pavlish:

Did you do it right away when you got it?

Kim:

Yes, sure. It was almost immediate, yes.

Pavlish:

You told him right away?

Kim:

I think we did tell him that if you actually smooth this, you can see something come out. Maybe the reason why it was so easy for me was because of IDL. IDL has all these ready functions in it, like ‘smooth’ or ‘convolve.’ For me, just having the tools sitting there allowed me to do it quite easily.

Pavlish:

That was an important step.

Kim:

I guess it was an important step. Although, I do not think it was profound.

Pavlish:

You were at the CTIO.

Kim:

Yes, we were at the CTIO. At the beginning of the searches, like at the INT, the first INT search that I was directly involved with, the searches were quite a stressful experience. I do not know if you know how the mechanism of this works. The data is taken and then transferred electronically to the computers here. The data was analyzed here. Almost every time that you observe there is something a little bit different about the camera. There is something catastrophic happening. You have to baby-sit the software and you have to dynamically fix things, because things would crash. It was really almost an all-night affair. When we were doing runs, the first couple of runs, we basically had to do 100%, well, 24/7 monitoring of the software and then fixing of the software. It was last-minute, seat-of-the-pants kind of running.

Pavlish:

Those plots of the contours and things, you would make those at the observatories?

Kim:

No, we produced them here. We would get the data, and then we would run our subtraction software on them. We may have cleaned the images at the telescope; I think we did that at least at the CTIO. We would actually run the search software, and it may or may not work. The quality of the data would cause problems if the seeing was very different from the reference images. But by the time we got to CTIO, things really ran very smoothly. A search, at least for the people who were at Berkeley, was kind of boring. You would get the search running, it would run by itself, the subtractions would trickle out, you would look at them, and there would be almost no candidates on there, and you would proceed. There was not the stress that makes things exciting, because everything worked so well. To give you an anecdote about us monitoring the data: I think this was a search at Kitt Peak. We also used the Kitt Peak 4-meter to do some searches. Saul was at the observatory. We were here looking at the data. Before we would do these runs, we would come up with all these sheets about, given the telescope, given what we were observing, what we would expect the signal to noise to be. We were doing checks of data quality to make sure that things were running well. We were collecting data that looked wrong. I remember Saul calling us and saying that you have to double-check to make sure that your predictions, or the analysis that you are doing, is correct, because the numbers all look wrong. It turned out that they were observing in the wrong filter. They were observing in the B-band. Things could go wrong during observing.

Pavlish:

The software depended on what filter you were using?

Kim:

Well, what you actually see depends on what filter you are using. And in particular, we are observing through the atmosphere, and the atmosphere has a different brightness depending on what filter you are looking in. The supernova has a different brightness depending on what filter you are looking in. What you would expect to see is very different depending on the filter. For high redshift supernovae, since most of the flux is redshifted, our early searches were run pretty much exclusively in the R-band. The R-band is the sweet spot for getting supernovae around a redshift of 0.5. Another thing we got from the CTIO searches is that we got a little more time to do follow-up. We were able to get a few more colors for that supernova set. For the first seven supernovae, in the paper with the first seven, there are almost no colors for those objects. There may be an I-band point for two of them, just one data point. We were not allocated time to get that kind of complete data.
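
[Editor’s note: the arithmetic behind that sweet spot, as a one-line check; the rest-frame wavelength is a typical B-band value, chosen for illustration.]

```python
# Observed wavelength = rest wavelength * (1 + z).
rest_b, z = 4400.0, 0.5        # angstroms; redshift
print(rest_b * (1 + z))        # 6600.0 A, near the center of the R-band
```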

Pavlish:

Was it a big process to put proposals in all the time in order to get telescope time?

Kim:

Yes. It is part of what you do. It was not burdensome, although it did become burdensome as time went on. We were starting to accumulate a lot of data. There was a lot of science work that we wanted to do on the data. Every once in a while there would be a kind of emergency, where we would have to write a proposal to collect even more data. One, you would be distracted from the work that you were doing with the data in hand, to work on this proposal. Then, number two, you are collecting all this new data when you have not even finished digesting the old data. We, I think, got backlogged.

Pavlish:

That is really interesting from a historical point of view. In fact, I saw a binder from 1997 that Gerson Goldhaber kept, and none of that data would have been in your ‘42’ paper. You are giving me a lot of interesting information, and I do not want to bore you with philosophical questions, but my advisor has a book, ‘How Experiments End.’ It is about new discoveries and probes, and how it is decided that a result is convincing. Maybe you could talk a little more along these lines.

Kim:

I do not remember at what phase of proposal writing it was. There was a certain point in the mid-1990s when I said, “Let us not propose for any telescope time for a year, so that we can take on all the data that we have, that we are sitting on; we are not doing very much with it, but meanwhile we keep applying for time.” I was overridden.

Pavlish:

At a group meeting?

Kim:

Group meetings. This is something that I had talked about for a bit. In astronomy, my impression is that inertia is very powerful in terms of getting telescope time. At the very beginning of the supernova search it was very, very difficult to get observing time. The only way that we could get a significant amount of observing time was by, say, promising to build a camera for a telescope. A TAC (time allocation committee) is not going to give you time to do spectroscopic follow-up of supernovae if you have not established that you can even find a supernova. It really took a long time for us to be able to consistently get time at a telescope. I think that we did not want to lose our grip on that, our place in that.

Pavlish:

The consequence is that your papers were on data that was from pretty far back. Is that not also a function of the supernova itself, whose light curve you have to follow?

Kim:

There is the time lag between getting the supernovae and the paper. I think that is a combination of several things. At least early on, for us, there was no competition. I have to admit, it was quite leisurely. I did not feel any pressure, like, we have to churn this stuff out. There was that, first. Number two, which is probably the most dominant factor, is that the people here at LBL come from a tradition of extreme rigor, of being very careful about things. I never talked to Saul about this, but it is my guess that he may have been shy about publishing data because of this 1987A pulsar business. For a paper to come out of the group, for the group to say, “Yes, we want to submit this,” is a long, arduous process. I guess, scientifically, that is good. But professionally that has hurt a lot of people in our group. For example, there is a data set of moderately nearby supernovae that were discovered in the late 1990s. I think, if my memory is correct, three or four Postdocs have worked on that data set and it still has not been published. It is a data set that goes beyond the lifetime, or the gumption, of a single Postdoc’s stay here. A lot of the Postdocs took the data to a certain level that a lot of people would have said, “This is good enough.” It was not good enough for us. That may have to do with the sociology of the people here coming from a particle physics background rather than an astronomy background.

Pavlish:

The particle physics community is known for that, that they take a long time to publish?

Kim:

I think it is a standard that was established, maybe here at LBL, that in order to say, “I have discovered a new particle,” your confidence in your measurement has to be five sigma. Whereas, in cosmology you only work with one sigma. You know what I mean by sigmas?

Pavlish:

Standard deviation.

Kim:

Yes. The third thing is that, at least for the first several papers, the first couple, the refereeing process was catastrophically difficult. Especially for the first paper; I think we spent a lot of time with the very first referee, who I believe was Bob Kirshner. I do not know if he ever said that it was.

Pavlish:

When you say first paper, what does that refer to?

Kim:

The very first supernova, SN 1992bi, the one guy by itself. That took an inordinate amount of time. In the end we got a new referee, and it took some time for the new referee to actually get that paper out. At least in the early days, there was a lot of difficulty in getting the papers out. That was another time lag. I would honestly say that the biggest factor in the time lags is the dynamics of the group: how the group works, how projects are assigned, and then, once the work is done, getting sign-off from everybody in the group. It is a quite arduous process.

Pavlish:

Specifically, getting sign-off, does that require verbal assent, or written assent?

Kim:

What happens is, you get assent any way you want to. The way it typically works is that you post a paper someplace; you distribute it and say, “I would like comments on this paper because I would like to submit it.” These days there is maybe even a publications committee, although back in the day there was not. What happens is that the day after the thing is due, Saul reads it, and the paper has all these red marks. Saul is usually the one with the most red marks; he has the greatest number of criticisms. Well-founded criticisms. You go back, rewrite the paper, you give it to Saul, and he gives a new, independent set of criticisms of the paper, which were in the paper before but which he did not notice before. So, really, when the paper gets the approval from the group, it is Saul almost always, and now Greg as well, who are the tall poles, the most stringent people who analyze and look through the paper. In principle the paper is distributed to everyone who is on the author list. A deadline is given. If you do not hear from somebody, you assume that they are okay with it. You do not have to actively get a sign-off. You are just waiting for the “I disapprove.”

Pavlish:

Getting back to the mid-1990s, where did we leave off?

Kim:

We were at CTIO. At that point we were pretty much a well-oiled machine. It was at CTIO that the other group started working as well.

Pavlish:

You would see them there, or you knew that they were working on the same project?

Kim:

We would see them there. Mark Phillips and Nick Suntzeff were both at CTIO. Sometimes our times would abut. I think, basically, what happened is that they split their allocation of time between the two groups. I saw Brian there once. You do run into people there.

Pavlish:

You did observing as well as data analysis.

Kim:

Oh yes. I would regularly go to Chile for the observations.

Pavlish:

Once you had time at the CTIO, was that the place where you found supernovae?

Kim:

CTIO was really where we were very successful. The first handful were at the INT. We did do searches at Kitt Peak. I do recollect going to Kitt Peak and not taking any data because the weather was bad. I am not sure that we actually found anything at Kitt Peak. If we did, it may have been only one object. There were some random searches that were done. One supernova was discovered at the Keck telescope. That one was called Albinoni. We called it Albinoni after the composer.

Pavlish:

That would have been in 1999?

Kim:

I think that was earlier on, maybe in 1997. You can ask Greg. I think that Greg was involved in that.

Pavlish:

You are probably right. I just thought that you all started naming the supernovae after classical music composers after the 1999 ‘42’ paper. It may be that the data was taken in 1997 and analyzed later.

Kim:

There was a delay between collecting those data and actually putting the paper out. The search was working well. We were now to the point of trying to do the cosmology analysis. For the seven supernova paper, I wrote the cosmology fitter. Maybe I should step back. Once you have the search, you have all the data in hand; now you want to do the cosmology analysis. The first step is extracting the photometry from the images. You generally have a lot of images. You have deep reference images. That is another reason why you have to wait some time after the supernova explosion to actually do anything: you want a deep image, where most of the supernova light is gone, to serve as a reference. You take these reference images. You combine them to make them very deep.

Pavlish:

Deep, meaning that you have taken them for a longer time?

Kim:

Yes. You have better signal to noise on the image. And, generally, you also want to have a point spread function that matches your original images so that you do not have to do that much convolution of the images. That caused headaches also.
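
[Editor’s note: a minimal sketch, in Python/SciPy, of the PSF matching Kim alludes to, assuming Gaussian seeing so that the matching kernel is itself a Gaussian; real pipelines solve for the convolution kernel rather than assuming its form.]

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_psf(sharp_image, sigma_sharp, sigma_blurry):
    """Degrade the better-seeing image so its PSF matches the worse one.

    For Gaussian PSFs, convolving with a Gaussian of width
    sqrt(sigma_blurry**2 - sigma_sharp**2) does this exactly.
    Requires sigma_blurry >= sigma_sharp.
    """
    kernel_sigma = np.sqrt(sigma_blurry**2 - sigma_sharp**2)
    return gaussian_filter(sharp_image, kernel_sigma)
```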

Pavlish:

You mean Fourier transforms when you talk about convolution?

Kim:

What was the convolution algorithm? It may have been done in Fourier space, I could not swear to that.

We were trying to use data from many telescopes. We were trying to look for supernovae at Kitt Peak, but using data from the INT as reference images. That was a horrendous nightmare. It kind of worked. We tried, and we made the software to do it. But minimizing the number of observatories doing the work is very important. That is something we ended up getting with the CTIO time. We not only got time to do the search, but also time to do a couple of points of follow-up. In terms of the strategy of how to observe and work around the moon, I cannot say that I witnessed or knew how that idea arose. I think it was just intuitively obvious, perhaps. We are constrained in that at the telescopes there are certain instruments that are on the telescopes when there is a new moon and when there is a full moon. And, of course, we want to try to observe as close to a new moon as possible, because the sky is dark at that time, so there is less background. Naturally, we fell into this routine where every month there was a certain time to do this. Then, you want to stretch things out as much as possible so that you can use the beginning of the dark time and the end of the dark time. I myself never thought of this observing strategy as a eureka moment. It was more like, it made sense. I do not recall if it was ever presented to me. Maybe it was one of those things I just took for granted. And of course, now the state of the art is these rolling searches, which the CFHT is able to do. I guess that is something else. When the field of view of the imager is big enough, then you automatically get supernovae at different phases just in one exposure. You do not have to aim at anything in particular. You just take a picture and you know you have data in it.

Pavlish:

That is a post-1999 innovation.

Kim:

Yes. I talked about what the steps of the analysis are. You get the photometry of the supernova, and then once you have that photometry, you want to connect it to what you think a Type Ia supernova should look like. The Type Ia evolves in a smooth way, in what we call a light curve. What we do is take the data and try to fit the data to a light curve. In the light curve, the most important thing we want to determine is how bright the thing is. Then, we want to figure out what the shape of the light curve is, because the shape of the light curve seems to be correlated with the intrinsic magnitude of the supernova.
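
[Editor’s note: a toy version of the fit Kim describes, in Python/SciPy: scale a template light curve in brightness and in shape (“stretch”) to match the photometry. The Gaussian template and the data points are stand-ins; real fits use templates built from well-observed nearby Type Ia supernovae.]

```python
import numpy as np
from scipy.optimize import curve_fit

def template(t):
    """Stand-in light curve template: flux vs. days from peak."""
    return np.exp(-0.5 * (t / 15.0) ** 2)

def model(t, peak_flux, stretch, t0):
    # Stretching the time axis changes the light-curve shape; the shape
    # is what correlates with the supernova's intrinsic magnitude.
    return peak_flux * template((t - t0) / stretch)

t_obs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 20.0, 30.0])
f_obs = np.array([0.78, 0.93, 1.02, 0.96, 0.81, 0.40, 0.14])
errs = np.full_like(f_obs, 0.05)
popt, pcov = curve_fit(model, t_obs, f_obs, p0=[1.0, 1.0, 0.0], sigma=errs)
print(popt)  # best-fit [peak_flux, stretch, t0]
```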

Pavlish:

Is the light curve that you try to fit to a result of previous observations of light curves? Or is it a result of theory?

Kim:

It is the result of previous observations of light curves. Really, observations drive the subject and theory follows. Then of course you want to get the colors of the supernovae. One of the scary things in doing supernova cosmology is whether or not supernovae are extinguished by dust. Dust extinction is wavelength dependent. So if you can say that supernovae have standard colors, then if you observe a color that is different, you can attribute that to the dust. That is basically the middle step: taking the photometric data and comparing that to what you would expect from a supernova, to estimate the distance of the supernova. Then, there is the intervening step of spectroscopically getting the redshift of either the host galaxy or the supernova itself. We use the host galaxy because we can get that to higher precision, because of the sharp lines of the galaxies. Then, you take the redshift information and the mu information, the distance information that you get from the light curve, and you make a Hubble diagram. Then, you can do your fit of the cosmology. SN-MINOW is the light curve fitter that we used. I believe that Don Groom originally wrote it. I got my hands on it and I significantly changed it. Then, I believe that Don subsequently got it back. [laughs] If I remember correctly, I then wrote the little thing that converted the output from SN-MINOW to, basically, a corrected magnitude of the supernova, the one piece that you put onto the Hubble diagram. As I mentioned, I wrote the code that made those Omega_Matter versus Omega_Lambda plots.
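
[Editor’s note: a compressed sketch, in Python/SciPy, of that last step — turning redshifts and corrected magnitudes into a chi-square over an (Omega_Matter, Omega_Lambda) grid; the confidence-region contours come from such a grid. The luminosity-distance formula is the standard open/flat/closed one; the Hubble constant and the three data points are invented placeholders, not SCP data.]

```python
import numpy as np
from scipy.integrate import quad

C_OVER_H0 = 3.0e5 / 70.0  # c/H0 in Mpc, assuming H0 = 70 km/s/Mpc

def lum_distance(z, om, ol):
    """Luminosity distance (Mpc) for matter density om, Lambda density ol."""
    ok = 1.0 - om - ol  # curvature density
    integrand = lambda zz: 1.0 / np.sqrt(om*(1+zz)**3 + ok*(1+zz)**2 + ol)
    dc, _ = quad(integrand, 0.0, z)
    if ok > 1e-8:                                # open universe
        dc = np.sinh(np.sqrt(ok) * dc) / np.sqrt(ok)
    elif ok < -1e-8:                             # closed universe
        dc = np.sin(np.sqrt(-ok) * dc) / np.sqrt(-ok)
    return C_OVER_H0 * (1.0 + z) * dc

def chi2(om, ol, z, mu, err):
    model = 5.0 * np.log10([lum_distance(zi, om, ol) for zi in z]) + 25.0
    return float(np.sum(((mu - model) / err) ** 2))

# Placeholder Hubble-diagram points: redshift, distance modulus, error.
z = np.array([0.1, 0.3, 0.5])
mu = np.array([38.3, 41.0, 42.3])
err = np.array([0.20, 0.25, 0.25])
grid = [[chi2(om, ol, z, mu, err) for om in np.linspace(0.0, 1.0, 21)]
        for ol in np.linspace(0.0, 1.0, 21)]
```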

Pavlish:

Those were also Saul Perlmutter’s innovation, plotting Omega_Matter versus Omega_Lambda? Is that right?

Kim:

Is it? [laughs] By 1997 I was off in France, frolicking around, doing whatever a scientist does in France. A lot of things I had written were passed on to new people. The cosmology fitter became Peter’s responsibility. I believe that he took my code and then rewrote it, to actually produce the Omega_Mass versus Omega_Lambda plots in the 1999 paper. I do not think that the light curve fitter itself, SN-MINOW, really changed that much.

Pavlish:

That had been passed back and forth between Groom and yourself.

Kim:

Right. But other people ran it. If I remember the histogram method correctly, the histogram was something qualitative. You can look at the histogram and see that, oh yes, there seems to be an indication in the magnitudes. But to do a rigorous analysis, you do this cosmology fit.

Pavlish:

For your team, the histograms were satisfactory for getting an intuitive sense. Would they have been for the other team, do you think? They were coming from a different culture.

Kim:

Maybe I am narrow-minded, but statistically speaking, I see that there is a right way of doing the analysis, a way that you can get an optimal amount of information from the data. That statistical technique is used by both groups.

Pavlish:

That is the Omega_Lambda versus Omega_Matter plot?

Kim:

That is right. Like I said, Gerson’s histogram qualitatively shows that we prefer an Omega_Lambda universe. But to do things rigorously you have to take into account not only the residuals, what he calls our residuals, but you also have to worry about the error bars associated with each data point. I am not sure that he takes those error bars into account.

Pavlish:

Even though from a naïve person’s view, the histogram and the (what do you call those with the ellipse?)…

Kim:

That is a confidence region.

Pavlish:

Even though to a naïve person, they look similarly qualitative or quantitative, you would say that only the one is quantitatively accurate.

Kim:

Do you have a picture of Gerson’s histograms? Maybe you can remind me what exactly these things are.

Pavlish:

Here is one presented at Santa Barbara in December of 1997. Here is one presented to the group in October of that year.

Kim:

Yes.

Pavlish:

He said that this axis is not realistic. It would mean that there are multiple universes or something like that. Lambda equals zero would give you negative mass density.

Kim:

What he is doing is taking each individual supernova and saying, okay, what does this supernova tell us about what Omega_Mass is? I think what he meant by multiple universes was that, of course, given data with statistical noise, you do not expect every supernova to give you the same answer for Omega_Mass, although we believe that there is only one true value of Omega_Mass. So, the analysis that we do starts out by saying: okay, there is one value of Omega_Mass. What do all these supernovae tell us about what that one value of Omega_Mass should be?

Pavlish:

You do not even do Omega_Mass separately for each supernova, which is what he does here?

Kim:

That is right. What he does here is he calculates Omega_Mass for each guy. As an ensemble, if you take their distribution, it looks like we have a flat universe with a cosmological constant. This says something. It certainly is a valid indicator that the universe has a cosmological constant. However, it is not a statistically optimal way of really trying to measure what Omega_M is. That is what we do in the contour plots. We address the question of: what is our best estimate of Omega_M given all our data? One thing that you could imagine Gerson doing, is saying that the average of this distribution is what the value of Omega_M should be, and the width of this distribution gives you a measure of the error. That is perfectly valid, but it is not optimal.
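
[Editor’s note: a toy illustration, in Python, of the contrast Kim draws. Each “supernova” here yields one noisy estimate of a single true parameter, with unequal error bars; all numbers are invented.]

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 0.3
err = rng.uniform(0.05, 0.5, 40)                   # per-object error bars
est = true_value + err * rng.standard_normal(40)   # per-object estimates

# Histogram-style summary: every estimate counts equally.
plain_mean = est.mean()

# Joint-fit summary: weight each point by 1/sigma^2, which is what
# minimizing a chi-square over all points at once does implicitly.
w = 1.0 / err**2
weighted_mean = np.sum(w * est) / np.sum(w)

print(plain_mean, weighted_mean)  # the weighted answer is typically tighter
```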

Pavlish:

You were working with the confidence plots throughout.

Kim:

That is correct.

Pavlish:

Were those the last step, or were they also presented at group meetings?

Kim:

Those were presented at group meetings. When you ask about how different groups would present the information: at least, if both groups believe that there is a standard cosmological model, which makes predictions for the distance modulus, and the distance modulus depends on Omega_Mass, the curvature, the flatness of the universe, and such, then there is one way to do the analysis, at least with the kind of data that we have. Maybe I am completely wrong, because I know that we scientists have a very naïve, simpleton’s view of statistics.

Pavlish:

I would not say that. It may be that the only people who know statistics are scientists, economists, and statisticians.

Kim:

Maybe my story is almost done. I started in 1991, graduated in 1996, and left Berkeley in November of 1997 to go to France. For a lot of the settling down of the results for this paper, I was not physically in Berkeley. I was not there for the writing of the paper and all that. I was an interested observer from afar.

Pavlish:

Did any of the news media’s coverage hit France?

Kim:

Yes, but I was not deluged with it like they presumably were here. I was watching the values of the cosmological parameters moving toward the existence of a cosmological constant, or an accelerating universe. Maybe I was young and naïve at the time, but I was like, “Oh, okay.” I did not have a prejudice based on expectations that the cosmological constant did not exist. For me it was just a number that we could try to measure. I would also say that particle physicists have a very specific view of what the cosmological constant is, whereas I was just thinking of it from a General Relativity perspective. General Relativity has this number. Einstein set it to zero, but it does not have to be set to zero. The particle physicists do not think of it that way. They think of it as something real, something that has to do with the vacuum, and that having a zero energy density for the vacuum is very important, or would be a very nice feature to have. I did not have quite that bias. Talking to my colleagues afterwards, I realized how much more disturbed they were by our results than I was.

Pavlish:

Long after looking at the view book and seeing that the team planned to weigh the universe, was that goal fulfilled?

Kim:

I guess so. It was a very funny process. As I mentioned earlier, when I joined the group I had no idea that they had not found a single high redshift supernova. Of course, later on I learned, a posteriori, that nobody thought we would be able to do it, that supernovae were fundamentally flawed. Bob Kirshner was one of the people who were saying that it could not be done. I think working as a clueless graduate student, without knowing what we were up against, was a stress-free and relaxing experience for me. I guess I was a little bit naïve, thinking about what would happen with measuring the universe. Of course, we did not even think that there would be a cosmological constant.

Pavlish:

When did you start thinking that there might be a nonzero cosmological constant?

Kim:

I do not know what the answer to that is. It could be, almost, not even now. Maybe this is more philosophy. I would say, we measure that there is Dark Energy. But of course that does not mean that there is Dark Energy. I make that distinction. Like I said, I had no prejudice about what the value of Omega_Lambda should be. So, my belief really followed the data. I did not have to reconcile the data with my preconceived notions. I think that if I had done the experiment now, I would view things very differently.

Pavlish:

So, you might say that it is good to publish the results, to show what the data are giving you, but it does not mean that from that one result you now believe that there is necessarily a Lambda.

Kim:

When I say that I do not necessarily believe that there is Dark Energy, it is more from a ‘limitation of physics’ point of view, rather than, ‘we got the wrong answer.’

Pavlish:

Was this a discovery in the conventional sense, if there is such a thing?

Kim:

In the conventional sense? What is an unconventional discovery?

Pavlish:

I would have thought that the conventional story of discovery is the eureka moment, the unexpected result. I think the story of this has been told somewhat in that way. But it could be that the conventional view of discovery is that it takes time, and experiments do not end with one paper, that you have multiple papers and many people working together, and involved in the process.

Kim:

I think you are right; there are some discoveries that are so straightforward and slam-dunk that you can say eureka. Was it Archimedes who said that? My perspective, having been in the trenches for this, is that going from the idea of doing the experiment — proposing for time, collecting the data, and the arduous process of analyzing that data — was such a long process, and fraught with peril. There are several things that go on in the analysis that you have to make sure to get right. Also, there is the fact that we did not run this experiment as a blind experiment; i.e., as we were tuning the analysis, we saw how the answers changed at the end of the day. It was a slow process, an iterative process, not just, boom, this is the answer. It was a continuous process. I will say that instead of having a eureka moment, it was more of a: we have this result, let us convince ourselves that it is a real result. Rather than: we have this result, and we believe it immediately.

Pavlish:

Any more stories?

Kim:

I have stories, but they are more for fun than for historical posterity. I am sure that you do not want to hear about observing runs where both observers had a bad case of the runs, having eaten raw seafood when there was Red Tide in Chile. Things like that. There was one run where we were there, and there was an earthquake in Chile. We were under the telescope. We were observing. Don Groom and I were observing. Tony Tyson happened to be sitting there, because I think he was observing the next day or the day before. There is this image that we were exposing during the earthquake. It has all this stuff all over the place on the image. There are other silly things. There was this entire Star Wars pantheon going on about different people in the group. This was earlier on, in the mid-1990s, when we assigned different people their Star Wars personas. You may want to turn this off. Really, Saul was the person who put it together, for our group. If it were not for him, we all would have gotten nowhere. A lot of people in the group did a lot of work. There is no doubt about that. But he was probably indispensable.

Pavlish:

Saul Perlmutter was the principal investigator. Is there a second tier?

Kim:

First of all, I think that Carl Pennypacker should get a lot of credit. The entire thing, to do this project, was his idea. I am not sure that the experiment would have had the success that it has had if Carl were still running it. Carl is quite a visionary in many ways. Also, credit should go to Rich, who supported the project early on. In terms of the authors on that paper, a number of people are there for historical reasons; they were very important in the full development of the experiment. For example, if we had not had success at the INT, then we would have closed shop, I would say. There are a number of people in Cambridge, and I believe from the INT, who are on the paper. Frankly, they probably did not do very much for the group after the group stopped observing at the INT. But without them, we would not have gotten where we did. There are a number of undergraduate students, who went on to get PhDs elsewhere, who worked on the project. I do not know how much they contributed to the paper. It could not have been more than a year’s worth. Whereas a large number of people spent ten years trying to get the thing to go.

Pavlish:

Are the undergraduates on the author list of the paper?

Kim:

Yes, they are on the paper. Castro and Nunez were, I believe, undergraduates visiting from Portugal who were here for a year. Robert Quimby was an undergraduate who has just now graduated from the University of Texas. He, also, was doing cosmology fits. I believe that he may have been the one who ran a cosmology fit with this large set of data and said, “Oh, the answer turns out to be that there is an Omega_Lambda.” Yes, he may have been the first to run this data set. He was an undergraduate at the time. Julia, here, was in between undergraduate and graduate school. She is now a professor at Harvard. You can probably corner her. She is at the CfA. Peter Nugent was here in my office. He has all the emails. I do not know if he showed you this email. He kept an email record of everything that we did. There is this email that I had sent in 1997, where I actually ran something for the first seven supernovae, and one other one, and then got an indication for Lambda. It is kind of strange that he collected all these emails.

Pavlish:

When was this?

Kim:

In 1997.

Pavlish:

This was after the ‘7’ paper.

Kim:

Yes. This is after the ‘7’ paper. There is an email record. I have some old email around that you could go through if you were really bored. This kind of historical stuff exists. One person whom I did not mention is Ariel Goobar. He was very, very important in getting the first paper out. The first paper was the SN 1992bi one. He was the Postdoc who made that happen. From here on down, I would say that there was not much active participation in what was going on. I do not know when Chris came on. He is very important in collecting spectroscopy for us. These guys were part of the INT team; they were at Cambridge, the Isaac Newton group. There is some residual from working in Australia. Andy and Nino, I cannot say off the top of my head what exactly they were doing. They were probably helping out with the Hubble Space Telescope activity. Heidi was the graduate student before me. In terms of crunching numbers and doing the analysis, it was very Berkeley-centric. I cannot say that there is dead weight or anything like that on this author list. It did take a lot of people. The order reflects how people contributed to this paper, not how they contributed to the project. The paper is just one little sprig that is produced by the collaboration. There are some people who are more actively writing, more actively doing the analysis for that. I know that these guys, who were listed after Saul, were the people here at Berkeley who were really trying to push this thing out.

Pavlish:

You mean, to get the paper published.

Kim:

Yes. But does this mean that they are the most important contributors to the group? Maybe, or maybe not.

Pavlish:

It is a monumental achievement, for sure. Thank you.