This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Donald Geesaman by Catherine Westfall on 2013 July 15, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/38094
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview Donald Geesaman discusses topics such as: Argonne National Laboratory; Dirk Walecka; hadron physics; Roy Holt; Herman Feshbach; Stanford Linear Accelerator Center (SLAC); Nuclear Science Advisory Committee (NSAC); Bernard Mecking; Continuous Electron Beam Accelerator Facility (CEBAF); Gerry Garvey; quarks; Fermi National Accelerator Laboratory (Fermilab); Jefferson Laboratory; nuclear physics; University of Illinois at Urbana-Champaign; Larry Cardman; Keith Baker; relativistic heavy ion (RHIC) experiments.
I’ll say for the tape recorders that this is Catherine Westfall. I’m with Don Geesaman, and we’re at Argonne. It is the 15th of July, 2013. I’m interested in understanding more about this transition in nuclear physics from the time of the Livingston Friedlander Panel when there was this desire to have a CW (100% duty factor) accelerator to more cleanly and in a more detailed way look at nuclear structure. Then because of these developments with the standard model, there is the growing realization that you want to also look at the quark structure of nucleons and nuclei. I’ve heard that the EMC experiment was particularly important.
It was pivotal.
OK. What I can’t quite understand is that the experiment that I can find referenced actually was published in 1983, which seems late -- the MIT Blue Book and the first proposal for JLab were before that.
So let me give you my perspective. You have to remember I got my PhD in 1976 doing nuclear structure physics at a tandem accelerator at Stony Brook. I came to Argonne to keep doing that kind of research on a tandem accelerator and the new superconducting linac. So at that stage, I’m not aware of what’s going on at the high levels of nuclear science. So the Friedlander Panel was the National Research Council’s long-range plan in 1975. Dirk Walecka, as I’ve been told, was the one who truly pushed for this > 1 GeV CW accelerator. At that time, MIT Bates was just coming online. Their special expertise was in high resolution electron scattering based on the spectrometer that Stan Kowalski and Bill Bertozzi built there, and was actually starting to produce beautiful work on nuclear structure with very high resolution. To my mind, Dirk Walecka advocated this for two reasons. One was studying nuclear structure with electrons, which he’d always been a huge fan of, and he advocated with beautiful review articles -- DeForest and Walecka and Donnelly and Walecka. But he also was very interested in hadron physics, and he had spent a lot of years developing a formalism to analyze coincidence experiments with hadrons. He had always wanted that to be the experimental program that was done at SLAC, but because SLAC had such a poor duty factor, it was very difficult to do these coincidence experiments there. So he had always had this interest in this sort of work in terms of hadron physics, but that was really in studying resonance parameters.
At that same time, I and Roy Holt, with the rest of the Argonne group, started working at LAMPF doing hadrons in nuclei. Basically what LAMPF did with a pion beam is study the first excitation of the proton, the delta resonance. We were trying to understand what happens to a delta resonance when you put it in nuclei, and it changes. Its properties change. So there’s a lot of argument on what the appropriate theoretical description was, but one of the leading ways to do this was something called the delta-hole model that Ernie Moniz from MIT and Frieder at PSI worked on.
By the way, Ernie Moniz is the current Secretary of Energy.
So they basically developed the model where what you would see in these pion-induced reactions would come from a change of the structure of the hadron (Delta) inside the nucleus. That was a dynamical change. We were thinking in that direction. In 1979, there was the first NSAC Long Range Plan, and that was led by Herman Feshbach, and one of the members of that was Gerry Garvey, who was the Argonne Physics Division Director at that point. So he came back and said the next big project in nuclear physics is going to be a CW electron accelerator. A group was set up with Hal Jackson, who was the leader, Roy Holt, and some people from the Accelerator Research and Facilities Division to study options. After that, the community tried to lay out the scientific case (what you would do with this electron accelerator), and that’s why this MIT Blue Book, Future Directions in Electronuclear Physics, was so important. It was really people thinking of what they wanted to do. When you look at the table of contents, almost all of these things were things that people were actively working on. For example, the pion field inside nuclei and isobar propagation was exactly what we were working on at LAMPF, and so we were thinking about how we could use those techniques and those ideas using an electron beam. It turns out it was a perfectly natural thing to do. All these other experiments -- complex nucleon and few-nucleon systems -- were basically straightforward nuclear structure. There were extensions of the program that Ray Arnold was doing at SLAC and some of the things that people were doing at MIT. We sort of thought about quarks and sub nuclear structure, but it was just a lot of ideas. People didn’t really have, in my mind, a clear direction.
What’s the date on the Blue Book?
There was a workshop in November 1979 and then another workshop in 1980. So I think this was actually published in 1981. But the workshop at MIT was in 1980. So there was a nice scientific case, again, based mainly on the physics we were doing. There were other ideas. Illinois had their electron accelerator. They wanted to upgrade it to 500 MeV. NIST had an electron accelerator. They wanted to upgrade it to about 500 MeV. The energy of the Bates accelerator was being doubled to 800 MeV, but they hadn’t done experiments at that energy yet at this point. So at the workshop people were thinking of a 1 to 2 GeV electron accelerator. Then DOE decided to ask NSAC to decide what should be the design parameters. That is where the Barnes Committee came in–that was in 1982. The Barnes Committee was designated to set the parameters for the accelerator, and that was when I think Stan Brodsky and his ideas of scaling laws became prevalent. He strongly pushed that was the way to see quarks. If you could count everything that happened, you could see quarks, and so that was the signal to go look for. So if you did elastic scattering from deuterium, you should see the exchange of five gluons, and so forth. It was possible to predict behavior which counted the number of quarks, and it seemed to be true for the pion and the proton. It wasn’t true for any nuclear target. Brodsky had a model which he called reduced nuclear amplitudes. Here it is.
This is on page 8 of the Barnes Panel report.
Brodsky had a model where he thought you would see evidence of this type of behavior at very low momentum transfer. You need to find what he called a reduced nuclear amplitude. When he plotted the reduced amplitude versus Q², he claimed you could see this constant behavior, this scaling behavior. There was a lot of controversy about that. But Brodsky’s new point of view, I would say, carried the day in the Barnes Committee. It certainly opened up a much larger kinematic phase space, and depending on what kind of accelerator you built, you could still do all the nuclear structure type of experiments. So they said it should be a 4 GeV machine. I remember during this period I went to a conference in Charlottesville, Virginia, and I used to wear a red crusher hat all the time. I was on this panel discussion about this, and I waved my red hat. I said, “I’m waving the red flag to the community like a conductor waving down a train. I think we need to go slow. If we want to understand what’s happening in nuclei as they are, we’re more interested in low momentum transfer data. If we want to understand the quark behavior, you need to go to this higher momentum transfer.” But what nuclear physics really needed to do was to understand what nuclei are. I said, “But fine. The community says we want to build a 4 GeV machine; we’ll design a 4 GeV machine.” So up to that point, we’d been designing a 2 GeV machine at Argonne. As soon as this came out, we changed it to a design of a 4 GeV machine. Then there was the Bromley Committee, which decided between the proposals. I would have said at the time of the Bromley Committee, the EMC experiment had been done, but the EMC effect was not well publicized.
The Bromley Panel was in ‘83, right?
The report was April ‘83. At that point there was a competition between proposals. There were two proposals for 500 MeV machines, and the Bromley Panel members said, “That doesn't address the charge, so we’re not going to consider those.” There were two proposals for a 4 GeV machine. That was SURA and Argonne. There was one proposal to upgrade Bates, and it was actually to only upgrade it to 2 GeV. So the panel in the end picked the SURA proposal, in my mind, primarily because they thought it could be more easily upgraded to higher energy than 4 GeV, the recommendation of the Barnes panel. But I have heard from people like Hermann Grunder at least five totally different reasons as to why the SURA proposal was picked. So I don't know the real answer there.
So then we thought about experimental equipment. When we were planning experimental equipment, in general the equipment was more or less similar in the different proposals.
Right. Well, what I noticed is that the SURA proposal equipment list is similar to that in the Argonne proposal for GEM.
I mean I think it was quite similar basically because it’s just taking that list from the Blue Book. So the basic physics, the general ideas were the same. The difference is we were much more interested, or many of us were much more interested in nuclear structure. Therefore, we wanted higher resolution spectrometers and better beam quality. The problem with the linac-stretcher ring is that you inject from a linac and then you go around the ring and get synchrotron radiation so that the beam quality is about 10⁻³ or something like that in energy. So if you have a 10⁻³ beam and you want to try to do an experiment with the precision of a few times 10⁻⁵, you have to work very hard. It’s very difficult. They did do that at MIT, but they worked very hard and have a really beautiful dedicated system for that. We liked the idea of a microtron because it had a much better longitudinal phase space, and so it was much easier to do higher resolution experiments. Jefferson Lab, as it evolved with the superconducting RF, has an absolutely beautiful beam phase space, and so it’s fine. So based on that part of the argument, I’m perfectly happy with the Jefferson Lab recirculator concept.
So we said, “Look. There are sort of three classes of experiments you want to do. One is you want to do these few nucleon experiments to push out to the highest momentum transfer to try to see this quark structure.” They don’t need high resolution, so they need some medium resolution spectrometers. That was the hall that I was basically in charge of designing. The second is you needed a photon beam. Roy was in charge of designing that hall. He came up with a very nice design for a photon tagger. The third is you want to do these high-precision nuclear structure experiments. In our opinion, you didn’t need to do those at very high energies and momentum transfer. So even though it was a 4 GeV machine, we designed high resolution devices that were 2 GeV. It saved a lot of money, but you also had a much better chance of getting the high resolution. The Hall A spectrometers at Jefferson Lab have never achieved that kind of resolution. Part of that was because of a magnet problem, but part of it was because it became too expensive in some sense to get the quality magnets you need. So that was a difference in design philosophy. We said the few nucleon experiments need medium resolution spectrometers (essentially what got built for Hall C). They would be general purpose, maybe even modular -- our idea was modular spectrometers that you could put together in different configurations for different kinds of experiments. Again, part of what’s done in Hall C. For the very high resolution spectrometers, Ben Zeidman worked on the design with the Los Alamos people -- Arch Thiessen, who had developed the high resolution spectrometer and the energetic pion spectrometer at LAMPF. We had used the pion spectrometer quite a bit, and he used ideas from that to come up with the high resolution spectrometer. The idea of a 4π detector like CLAS was not widely embraced.
Yes, I noticed that. You did have three experimental halls.
We had three experimental halls. One had a tagged photon beam, which is basically the way Hall B is used most of the time -- or let’s say half of the time it is used as a photon beam. So I think the idea of making that hall deliver photons or electrons at low power would have come out, too. It was more, I think, that the electronuclear community was not used to building big 4π detectors.
In the United States because the one that they used was modeled off of something that Bernard Mecking had done at Bonn.
Which was also a photon machine, basically. The people working in this country at Bates, the people working at SLAC had not spent a lot of time thinking about a $30 million 4π detector to do this physics. So when Bernard was recruited at CEBAF, I mean everyone understood that was opting for a certain kind of physics, resonance physics --
For the tape recorder, it’s the Continuous Electron Beam Accelerator Facility was the name at the time, CEBAF. But it is what is now JLab, or Jefferson Laboratory.
Right. So let’s see. So we thought it was very important that we could use polarized beams so we could do the parity-violating experiments and also use polarized beams for structure work. So we figured out how to maintain the polarization. In our machine, we would have accelerated transversely polarized beams and laid it down to longitudinal polarization. It’s possible we could even do what Jefferson Lab does now (just let the things precess around continuously and be synchronous), but we weren’t sure of that. So we made sure that we could use polarized beams. The other advantage of the microtron was you could get a beam every 100 MeV, and so it was much easier to do Rosenbluth separations, separating the contributions from charge and magnetization currents to the cross section. These are experiments that call for a high energy and low energy beam, and you weren’t near as constrained as you are with the racetrack design at Jefferson Lab as to what energies you can run. At Jefferson Lab, if they want to run a special energy -- they did for the G0 experiment -- they can do that, but that impacts everybody else. So that kind of experiment is much easier with the microtron design.
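To sketch what a Rosenbluth separation involves (an editorial illustration, not part of the interview): in the standard one-photon-exchange picture, the elastic electron-proton cross section has the form

```latex
% Rosenbluth formula for elastic e-p scattering:
% G_E and G_M are the electric (charge) and magnetic (magnetization)
% form factors; theta is the electron scattering angle.
\frac{d\sigma}{d\Omega}
  = \left(\frac{d\sigma}{d\Omega}\right)_{\text{Mott}}
    \left[ \frac{G_E^2(Q^2) + \tau\, G_M^2(Q^2)}{1+\tau}
         + 2\tau\, G_M^2(Q^2)\,\tan^2\!\frac{\theta}{2} \right],
\qquad \tau \equiv \frac{Q^2}{4M^2}.
```

Measuring at the same Q² with different beam energies (and hence different angles θ) changes only the tan²(θ/2) term, so the charge and magnetization contributions can be separated -- which is why an accelerator that can deliver many beam energies makes these measurements much easier.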
So this is all up to ‘83. This is what we’re thinking about doing. So there was a decision made. The accelerator was going to be a linac-stretcher ring. At that point, Roy Holt switched to thinking about how can we at Argonne best use a linac-stretcher ring? His idea was to use polarized internal targets, so in fact we had a workshop in 1984 that he led investigating polarized internal targets. He spent the next eight years working on that technology. There was an experiment that was talked about for the X-ray photon source at Wisconsin. Then they did experiments at Novosibirsk, and in the end, that’s what demonstrated the technology for the HERMES experiment at DESY. I’ll leave that out if you wanted to look at that later, and Roy might. All right. At that point, the EMC effect became, I would say, public. In the December 1983 Long Range Plan, I think it even appears. So that was the data.
So that is on page 18.
Page 18 of the 1983 Long Range Plan -- I think Gerry Garvey wrote this section. This showed that the quark distributions in the proton and the quark distributions in nuclei are different. I claimed this was a paradigm shift in nuclear physics. It was the first clear evidence at the quark level that something was different. A whole wealth of theoretical models sprang up for all sorts of different interpretations -- any way you could invent using quarks to try to explain this EMC effect. At that point, I decided to join an experiment at Fermilab that in fact included about half of the European Muon Collaboration that was moving to Fermilab and doing the same kind of experiment at high energy. I did this because it was clear to me we had all these problems in nuclear physics that we couldn’t explain, and if our fundamental assumption that the nucleus was made up of fixed protons and neutrons was wrong, then we would never find an explanation. So what I like to say is that I spent 35 years of my life searching for these changes, and my conclusion at the moment is the structure of the proton doesn't change very much, that most of the problems we had in the ’80s were because we weren’t doing the many body physics correctly. We were making approximations in our theories, and those approximations just weren’t valid, so that when the ab initio calculations of Vijay Pandharipande, Joe Carlson, Bob Wiringa, and Steve Pieper, were able to do the many body physics right, you could describe nuclei. You didn’t need to put in changes in nucleon structure. That’s still a very controversial statement. There are people like Tony Thomas who absolutely believe the structure of the proton has changed inside the nucleus, and there is some evidence to support his view. But I go back to none of these people are doing the nuclear structure, the many body physics, really right, and I, having been burned once, want to make sure that we’re not being fooled by the many body physics. 
It was very natural for us to be thinking about changes in structure of hadrons from our work at Los Alamos. We knew that something like a Delta, which is very loosely bound and emits a pion at the drop of a hat and then those pions would interact with other nucleons and then re-form again, we really easily understood how that would change inside the nucleus. But that a proton would change was a more difficult concept. As I said, there are many, many theoretical models as to how that would happen, and to this day, I’m still doing research to try to truly answer the question. So the EMC experiments showed without question that the quark content was different. The question is, is the quark content different because of many body physics, or is the quark content different because the structure of a proton changes inside the nucleus?
Which is still an open question.
It’s still an open question.
Okay. Well, by that point when you say you were starting to do these experiments at Fermilab, Franz Gross and other people are starting to have summer studies at JLab.
Yes. So I certainly participated in a number of those summer studies and workshops. I spent a few weeks actually looking at polarized deep inelastic scattering.
You see, this is a question that is in my mind. I figure there was a certain amount still of tension between the various competitors for who would get to build the machine, about the fact that the SURA won, and then Hermann came and switched the design. So my question is did the EMC effect help overcome that?
Sure. Yes, I believe it did because to study the EMC effect, you needed the highest possible energy electrons you could get. So since that was the clear signal, and you needed to study with the highest possible electron energy, people converged in thinking about how to get to the highest momentum transfer, how to get to the higher energies. I think the EMC effect had a big impact. And it also in some sense showed the rest of the nuclear physics community that this could be incredibly relevant to them. If you told people, “Well, the reason you can’t get the right binding energy of carbon-12 in your calculation is because a proton has changed and the nucleon-nucleon force has changed, you’re never going to get that with the calculation you’re doing.” So it became very relevant to people doing nuclear structure physics or work like that to have an answer as to how quarks and gluons made up the nucleon, how rigid that nucleon was.
So then by extension, did it become easier to accept the switch to an SRF machine, which of course gave the possibility of increased experimental capability?
So if you were Roy Holt, who was spending his time developing polarized internal targets for a ring, the answer was no because none of that would work in your SRF machine. But the answer was yes if you were someone who wanted to study nuclear structure effects, which I did in addition to these EMC effects, because this should give a very high precision beam compared to the beam you’d get out of a linac-stretcher ring. To my mind, the superconducting technology in terms of what it should deliver was a much better machine than the linac-stretcher ring. The only question was the technical risk. I couldn’t judge that technical risk other than I had a warm fuzzy feeling. We had a superconducting accelerator here at Argonne that worked fine, so I wasn’t scared of superconductivity. But the problems for a high-intensity electron machine are very different than the problems for the machine here.
Which accelerated heavy ions, and just for the tape recorder, there had been this problem at HEPL that had poisoned people’s idea about superconducting electron machines.
And to some extent at Illinois that there was this beam breakup problem that when you tried to go to higher intensities, these accelerators didn’t work. It’s like black magic. They suddenly didn’t work, and how do you understand that? That took very sophisticated beam dynamics calculations to prove to people that that was a problem that was under control. But then the other thing that happened in the mid ‘80s was nuclear physics built the Nuclear Physics Injector at SLAC (NPAS).
End Station A.
That was the program that Ray Arnold had pushed. We were doing experiments with 4 to 6 GeV beams. Roy Holt’s experiment on photodisintegration of the deuteron immediately saw a quark counting signal. So by the time of the next long range plan in ’89, we could show results and say, “Look. We are counting the quarks.” There was still a lot of argument as to what it meant. I mean Nathan Isgur said it meant absolutely nothing, and he and Brodsky had this continuing battle about the significance of the behavior we were seeing. When did we publish it? [Looking for document] Yeah, so it was published in ‘88. So it was this flat behavior -- that the cross section times s¹¹ was flat -- that was --
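The flat s¹¹ behavior follows from the constituent counting rules that Brodsky advocated; as a rough editorial sketch (not part of the original interview):

```latex
% Constituent counting rule for an exclusive process AB -> CD,
% where n is the total number of elementary fields involved:
\frac{d\sigma}{dt}(AB \to CD) \;\sim\; s^{\,2-n}\, f(\theta_{\text{c.m.}}),
\qquad n = n_A + n_B + n_C + n_D.
% For deuteron photodisintegration, gamma + d -> p + n:
% n = 1 + 6 + 3 + 3 = 13, so d\sigma/dt ~ s^{-11},
% and s^{11} d\sigma/dt should be constant at fixed angle.
```

Plotting s¹¹ dσ/dt at fixed center-of-mass angle and finding it flat is therefore the quark counting signal being described.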
Okay. Give the reference to that right there.
That’s Physical Review Letters 61, 2530 (1988). That was --
And the name is “Measurement of the Differential Cross Section for the Reaction ²H(γ,p)n at High Photon Energies and θc.m.=90°.”
So at center of mass angle of 90°. Roy Holt was the spokesman for this experiment. It was his experiment. Jim analyzed the data, I believe.
The first author is J. Napolitano.
So we had another signal that QCD effects in the nucleus deuterium might be present, and that, I would have said, carried the tide. So then you have ‘89 through ‘91 is when Jefferson Lab, the first Program Advisory Committee when John Schiffer was the chair, sort of looked at the big picture and what sort of experimental equipment was needed. Then they started considering experiments. In the end, we proposed experiments that were again, in many ways, the extension of what we were doing at NPAS or at Bates.
Nuclear Physics at SLAC. We called it NPAS. I proposed experimental proton propagation in nuclei using electron beams. Roy proposed an extension of the d(γ,p) to higher energies. Hal Jackson proposed a measurement, which is an extension of work he was doing at Saclay, to measure the pion field in nuclei. Ben Zeidman proposed the first measurements to look at light hypernuclei, so that was in some sense the new program, to look at hypernuclear physics working with Keith Baker and the people at Hampton University. Then all those were approved ‘89 to ‘91.
Okay. What can you tell me about… Here is the 1986 CEBAF design report, which was before the equipment plan that came out in 1990. So by this time, it says “Highlights of the Experimental Program,” and then here, actually, this is the ‘90 equipment plan. Here is the 1986. So presumably, by that time the EMC effect has had an effect!
So give me some judgments about what is different in 1986 from what was in the GEM and SURA proposals.
By this point, there was a high resolution thin-target pair of 4 GeV spectrometers that could take advantage of the fact that the beam resolution was now much better. Note that if the energy resolution of the initial beam were 10⁻³, as it would have been for the linac, the design would have been more complex (850 tons versus 550 tons). So that was a change basically just due to the improvement in the beam quality. There was a large acceptance detector also. And the high resolution spectrometer designs show that Jean Mougey was there…
Came from Saclay.
…and wanted to build a good high resolution coincidence pair, and Bernard Mecking was there and wanted to build a large acceptance detector. So I think really what changed is you had those two leaders in place who had a vision. At this point, it still wasn’t decided what was going to go in Hall C. They had what they called a variable acceptance spectrometer. There was even a lot of debate as to whether they could afford to build a third hall.
Oh yeah. Hall A was for the high resolution spectrometers and Hall B was for the large acceptance detector. And Hall C was almost not built.
Yes, several times it looked like it wouldn’t be built.
But the experimental program, is it different by this time because of the EMC, or is pretty much the same as Barnes?
I would have said it was more informed by it as opposed to being different. I would have said it wasn’t obvious at that point given that you had SLAC still running, and you were able to do EMC measurements at SLAC, what advantages you would have with the CEBAF spectrometer experiments. You could potentially get some advantages with the large acceptance spectrometer, but there were grave doubts as to what luminosity that would take. But the models that people proposed to explain the EMC effect and the resulting changes in the nucleon structure had consequences for other things you could measure. So I would say it was informed in the sense that people would try to go and measure these other things, not just repeating the EMC type of measurements.
For example, what other type of other measurements would this EMC effect inspire?
One of the obvious ones: one of the pictures of the EMC effect is that a nucleon -- a proton -- swells inside a nucleus. Well, if you just knock that proton out of the nucleus, you’re knocking out a swelled nucleon. So the dependence of the cross section on momentum transfer should show that the proton is bigger. There were two ways to do that. One is a direct knockout experiment, and the other is an approach called y scaling, which the University of Virginia people were using with Ingo Sick from Basel. So, people proposed doing those measurements with much better statistical precision so they could see the swelling, which they don’t see. Once the polarization transfer techniques got improved and they were demonstrated at Bates, then you could also measure the relative size of the proton with polarization transfer. Essentially the only data at present that seems to suggest the proton may be different in size in the nucleus is a polarization transfer measurement on helium-4. That was done at Hall A and done a couple of times. There are conventional explanations, but they have to work much harder for them to work. So the basic idea, which has sort of been my philosophy, is that we will only believe we understand QCD effects if we have an explanation in a quark picture that makes predictions of what happens in a hadronic picture, and then you see that same effect. So if we would have seen that the proton size definitely got bigger inside of the nucleus, we’d say yes, it’s proton swelling. That explains the EMC effect. Now we have to see what implications it has for other aspects of nuclear structure.
Okay. So the experimental program from your point of view in the 1986 design plan is pretty much still the same as Barnes.
No. Again --
You said it’s informed by --
It’s informed, meaning we knew other places to look for quark effects. But these experiments were parts of general classes of experiments that we always talked about doing, so that’s the point. We always talked about doing (e,e′p) reactions, and you would do those to measure the single particle structure of nuclei.
This is good. Okay, good.
Now, instead of measuring the single particle structure -- Well, now you’re trying to see has that single particle changed inside a nucleus? So it’s the same kind of experiment --
Same class of experiment.
Same class of experiments, but its goal is different because of what we know about the EMC effect, because of what we know about possible modifications in proton structure --
Some measurements, but then some measurements would also be the same.
Yeah. Measuring, for example, the electromagnetic form factor of the pion, that was an experiment that was in the Blue Book. We appreciated now the importance of extending that to higher Q² because we have this connection with counting rules and things like that. But we just knew we were going to do that in 1979. We knew we were going to do that in 1995. Because we got the first taste of some of these measurements at SLAC. You know, the d(γ,p) was not done first at Jefferson Lab; it was done first at SLAC. But at Jefferson Lab, we were able to do it so much better. We could extend the energy range. It would be much more convincing that this behavior was scaling-like. I would have said the thing that changed the experimental program more was the advances in experimental technique, the use of polarization observables for the polarization transfer. And of course, that’s what led to the big discovery that the shapes of the proton’s charge and magnetization distributions are different, which has been perhaps Jefferson Lab’s most significant discovery. If you look back in the Blue Book, we said we wanted to measure the nucleon form factors at a higher momentum transfer because it’s a fundamental property and we should do it. But we actually thought we knew what the answer was going to be. Then we measure it with this new tool and find we’ve been fooled for 20 years. So the polarization of the beam, the polarimeters, the polarized targets… Bob McKeown and Doug Beck had pioneered looking for strange quarks by doing parity-violating scattering from the proton at Bates. That could be done much better at Jefferson Lab and was, and it’s produced beautiful results. So I would have said what changed the character of the experiments most is having the polarized beam, having polarimeters, and having polarized targets.
It makes sense. It makes sense. Okay. To go back a bit. You talked about coming to the summer studies in ‘83, ‘84, and ‘85, I guess. Well, there were summer studies in those years. I don't know if you came during all of that time. Then at some point, there is -- and I don’t have the date in my mind -- after ‘86, there is a fight and there is the success of getting three halls, Hall C being the hall that would have been sacrificed if there could only have been two. And the idea is then to make Hall C be the first hall to make sure it’s built.
Well, I wouldn’t say that was an idea.
So there was a question: What should go in Hall C? Hermann actually was looking around at the strong groups in the U.S. that he wanted to really be involved, and he thought that perhaps Argonne and University of Illinois were not involved as strongly as he would like. So he actually proposed that he would give Hall C to Argonne and Illinois if they could come up with equipment for a strong experimental program to put in it. So we met seriously with the Illinois people for several months, and I would have to do a lot of work to figure out --
Well, wasn’t one of them at O’Hare or something like that?
But we also went down there, and they came up here.
Larry Cardman says you met at O’Hare.
Right, right, right. So we did meet at O’Hare, but we also met down there, and they also came here. Our idea was we thought the high resolution hall was going to spend its time doing high resolution experiments, and many of these experiments didn’t need the higher resolution. So let’s give it a capability that it didn’t have and put in a higher momentum spectrometer. If you have a 4 GeV electron beam, you should say, “Why would you ever need a spectrometer with a momentum of greater than 4 GeV?” But if you have a 4 GeV proton, it has a higher momentum than 4 GeV. There was always the talk that the superconducting technology might let you get to higher energies. So let’s have a higher energy spectrometer, like a 6 GeV spectrometer. Then you say, “Okay, the other problem is Hall A and Hall B are busting their budget, and they’re building very complicated equipment. Why don’t we build something simpler? We’ll have the 6 GeV spectrometer, and then we’ll just copy the design of the Los Alamos medium resolution spectrometer. So it’s just a few million dollars. We have a coincidence pair.” Illinois, in my memory, was most interested in doing out-of-plane physics, and this was Costas Papanicolas. This was the kind of thing he wanted to be doing at Bates. So they were thinking about toroidal spectrometers, and in fact, it was not so different from what was eventually done with the G0 spectrometer. I would say we never really got to the point where we converged because at that point, I think Dirk Walecka stepped in and he said, “No, no, no. We’re not going to have a hall that is run by just two groups. We want every hall to be open for everyone else.” Fine. But I think then the appreciation that they could build this hall for significantly lower cost, and that it had additional capability, the higher energy, got the go-ahead. So we took responsibility for the SOS spectrometer.
Jefferson Lab took responsibility for the HMS spectrometer (High Momentum Spectrometer). Keith Baker at Hampton built large wire chambers for them. Roger Carlini was now there as the hall leader. It was just because these were simpler spectrometers that were the ones that could be ready when the first beam was delivered.
Okay. One quick thing that I’m wondering about, which is was Hermann’s technique… Did this get the University of Illinois group and the Argonne group in fact more involved in JLab than they otherwise would have been?
I don’t think so. I think both of us were fully committed. This is where we were going to do research. But it would have been different -- You know, if we were doing these experiments in Hall A using the spectrometer that Mougey had designed, lots of MIT influence, Hall A viewed themselves as a single collaboration. Hall C was always individual experiments. So there was somewhat of a difference in sociology, even though for the first experiments we said anybody who wanted to could join, anyone who contributed to building experimental equipment could join, and so they were like collaboration experiments. But quickly after that --
Because you did have Memoranda of Understanding (MOUs), right?
And you had MOUs. I mean you just talked about Keith Baker being from Hampton, for example. So it wasn’t just Argonne.
MOUs were for equipment.
Right. MOUs were for the equipment. And other groups -- Colorado built the Cherenkov counter for the SOS, and I don’t remember who built the Cherenkov counter for the HMS. So there were equipment MOUs. But relatively quickly, it was separate experiments. As I said, after the first year, everyone involved -- In Hall A, the experiments were collaboration experiments, and you could decide you had a right to be on them if you wanted. That was not true after the commissioning experiments in Hall C.
Yeah. Well, that’s one of the things that’s the most interesting to me is this hybrid nature of the experimental program, that you have something like Hall C, which much more reminds one of the earlier days of how experiments were done. You can use those two spectrometers. You might change them, but it’s flexible and it’s changing.
Right, and it was really envisioned that you had this one primary spectrometer. You’d bring in other big pieces of equipment, like they did for the neutron form factor experiment or the polarized targets from Virginia or the hypernuclear experiments that the Hampton University people were leading. That’s why it was essential that we have a very cheap second arm, the SOS. Even though it had a much more limited energy range, it had a large dynamic range, which was very useful. But it was really quickly envisioned that the SOS was only going to be used for a third of the experiments. But you had to have some coincidence capability.
Okay. The 1990 experimental equipment plan -- I just copied the part for Hall C. Did any of this originate from your experience with the GEM proposal?
I would say no. I would say there was actually nothing that particularly carried over to Hall C. I mean it was like, in some sense, the medium resolution spectrometer hall that I envisioned. Hall C didn’t have to have the really high resolution, so you could use a less expensive, simpler optical design for the spectrometer, but that was a pretty general idea. It wasn’t taken from our proposal or anything else; the general idea was that once you made the decision you’d like it to be higher energy, you were looking for ways to make it cheaper, and so you didn’t make it as high a resolution. So as I said, for the SOS, it was an existing design. There was a proof of principle, so we were just basically buying the drawings from Harald Enge for Los Alamos and copying it. He had designed this spectrometer for LAMPF, and there was a duplicate of it at LAMPF. In fact, we had proposed to move it to Argonne if we had built the radioactive beam machine. It turns out it would have been a very nice spectrometer for the Radioactive Beam Facility. So I don't think there was anything we used from that time other than just very general ideas which were common ideas, and that we were thinking at certain times of coincidence experiments that didn’t require high resolution. We were thinking of d(γ,p) and wanted to be able to go to higher energy, and this design for the HMS spectrometer did the job. The SOS design did the job and was cheap, and so it was something that could be done and save money.
Then you did the first experiment.
Right. So the way it worked is I led the experiment. Roger Carlini was hall leader. Hal Jackson was -- I can’t remember what they called it -- co-leader or something like that. But each hall had one user as co-leader.
Right. One outside user and one…
Inside person. Hal was responsible for the mechanical construction of the SOS. I took the lead in developing and leading the analysis software for Hall C, and that was something that was contributed to by a large number of people. Here are some very technical notes from the GEM project: how to align the magnet, buildings, conceptual design, precision magnet measuring systems. Roy apparently says he’s thrown everything away, but I couldn’t bring myself to throw those things away. All right. I can’t lay my hands on it, either. I wrote something called the Hall C software Vade Mecum, which defined the units, the conventions, the coordinate systems, what we would use for an analysis design. So anyway, I spent a lot of time at that point focusing on the software, and that got used in Hall C for a large number of years. I think it’s all been replaced. Well, I can guarantee it’s all been replaced now by more modern software.
Why was the decision made that you’d be the one to do that measurement?
Oh, you mean for the first experiments?
So the first experiments, they just wanted to have the simplest experiments that would capitalize on the facility capabilities, and so mine was the absolutely simplest coincidence experiment you could do at the highest rate, so you didn’t need a lot of beam to do it. Roy’s experiment capitalized on the higher energy and higher intensity. By that time, he was at Illinois. Then there was one other experiment, which was Ingo Sick, Donal Day, Brad Filippone, and John Arrington’s experiment for x > 1, that could have taken data. So originally, Roy and I were going to do it. But then the SOS spectrometer lost out. They gave us second priority. So they finished the HMS. The SOS wasn’t ready, and so they couldn’t do my experiment. So they were going to try to do the x > 1 experiment in December of 1994, but it didn’t work well enough. So in the summer of ‘94 there was the first beam from the accelerator sent to Hall C. Then in the fall, there was beam in a double pass. Then they thought maybe they could do this x > 1 experiment with the HMS spectrometer, but the accelerator didn’t work well enough. So it wasn’t until the next November that we actually --
Yeah, because what Larry said is that they were also trying to get the machine up to speed and getting the kinks out of it.
Right, and that took a lot longer than they’d hoped. Yeah. So then we started in November of 1995. I took data for two months and we just shut down. Then Roy took data, I think, for two months, and then I took data for two months. Then just at the end of that, Hall A was finally ready to take beam, so at the very end, they got 24 hours of beam to just see if things were beginning to work with their spectrometer in Hall A.
And then Hall B took longer.
Right. Hall A was ready and Hall B took longer, right.
Okay. Well, my final question for this time is that that’s the PAC that planned the first round of experiments -- through 13 was PAC4 -- and it shows the experiments. I’m wondering if by that time you see any evolution in any way, because of technique or whatever.
The polarization transfer technique. So at this point, the polarization transfer experiment hadn’t run yet, and this experiment which used a polarized target had not yet run. So this experiment and this experiment were both trying to measure the form factor of the neutron, which is always --
Okay. For the tape recorder, say it’s the…
So 93-026 and 93-038 were two different techniques -- and 94-021.
Which is in Hall A, right.
Which is in Hall A, where all three experiments were designed to measure the distribution of electric charge in the neutron, which from the beginning was identified as one of the key experiments to be done. But they had not run yet. So experiments like these, polarized structure functions, were an outgrowth not of the original EMC effect experiment, but of the measurement by the EMC collaboration in ’86 that seemed to show that the quarks carried only a small fraction of the spin of the proton. So it’s actually that experiment that probably led to more direct proposals, new proposals to Jefferson Lab. So this polarized structure function G1n, that’s 93-009. 94-010, measurement of the neutron spin structure function. 91-023, measuring polarized structure functions in inelastic electron-proton scattering. Drell-Hearn-Gerasimov sum rule, 97-110. All of those were inspired by the EMC spin measurements directly. Okay. Electromagnetic structure of the deuteron at large momentum transfer. That’s in the Blue Book. Deuteron tensor polarization, that’s in the Blue Book. Yeah. High momentum structure of the (e,e′p). Okay. Then here’s Roy’s experiment, two-body disintegration at high energy. This proton polarization in d(γ,p) was an expansion -- this is 89-019. That was an expansion of Brodsky’s ideas that led to these quark counting rules and a prediction that polarization effects would be very small. They measured it and they saw polarization effects, so that was a challenge to Brodsky’s interpretation.
So it sounds like what you’re saying then is that by the time of PAC13, which is --
It’s around 2000.
Right. So still, the 6 GeV program continued through 2012. So PAC13 it’s kind of in the middle of the --
Well, I would have said… Yeah, it’s still at the beginning. We didn’t have 6 GeV until what time? It was probably around 2000 that we got close to 6 GeV, maybe a little later.
Okay. So it’s evolving is the point.
Right. The polarization experiments received a significant boost from the EMC spin measurement, and so then there’s a whole series of them, so you see a large fraction of the program now is about quark polarization. That is directly related to the EMC spin measurement. The EMC effect, it was more that it had this effect of informing the program as opposed to saying, “We want to do the same type of experiments.”
Wasn’t the so-called N* the kind of work that Nathan was particularly interested in?
Right. So that was when CLAS was built. It was to do nucleon resonances. That was something Nathan had a very personal interest in. They’ve taken beautiful data, but it’s proved to be very difficult to analyze until things like the Excited Baryon Analysis Center made a concerted effort at it. But I think there was a frustration that it took a long time for that initial round of CLAS data to be at a publishable stage, and since the interpretation was not so clear, there was a question of the physics impact. I think now there’s been lots of physics impact from the CLAS data and they have beautiful data sets, and with this theoretical push, that idea is… You know, everyone recognizes the value of a 4π detector, but at the time, they were competing with the Relativistic Heavy Ion Collider (RHIC) experiments. The RHIC experiments turned on and had publishable results immediately. Part of it is because they were in a whole new energy range. Anything they measured was interesting. It was just a matter of making sure it was being done right. Jefferson Lab, it was much more precise data, but for the nucleon resonances, it was not as different. There was not a qualitative difference, so you really had to do it right and do a very full analysis.
You weren’t looking for exotic phenomena? You kind of were.
You looked for exotic phenomena, but none of them appeared very quickly, right? I mean they looked for the θ pentaquark, “found it,” and then unfound it, right? So people look --
So they have to depend on nature to be cooperative in…
Right, and I mean that’s the way nature is.
Right. That’s why you get the big bucks, to…
We could have turned RHIC on and found something completely different, found a very boring quark-gluon plasma. Instead, RHIC has been lucky that the quark-gluon plasma they found is not the quark-gluon plasma they went looking for. At Jefferson Lab, I would have said we have been unlucky in that things look far more normal than people had hoped. Normal is not the right word there, but again, we’re not seeing big differences in the behavior of nuclei compared to the free nucleons. We’re not finding a whole bunch of new resonances in the baryon spectrum. I mean one of the ideas was there were a bunch of missing resonances, and either we would find those resonances or convincingly show they weren’t there, and that would decide between diquark models and quark models. Well, we tend to find some of them, but not all of them. It seems to rule out diquark models, but it’s not as simple as if the quark models were right. And CEBAF has shown us that dynamical effects, final-state interactions are very important, and so you can’t get the physics out of a naïve analysis.
Okay. That’s great. I’ve taken up exactly an hour and a half of your time.
The original 1983 EMC paper, which has this data -- that data has an additional 7% overall uncertainty, which is not usually shown. But with this data, people made a really big point of this very large rise up to 1.2. Well, later data don’t see that at all. It is within the systematic uncertainty that they assigned, so it was not wrong, but you get a very different impression from this or from that figure and what you see later. So in fact, the effect is this… When we originally saw it, we said, “Oh, there’s this big rise at low x and there’s this decrease at high x.” Now it’s just this decrease at high x which is called the EMC effect, and down here at lower x, there’s all sorts of other physics going on.
Hmm. Interesting. Thank you for your time!