
Oral History Transcript — Dr. Mel Shochet

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.





Interview with Dr. Mel Shochet
By Kent Staley
October 17, 1995

Mel Shochet on designing particle detectors



ABSTRACT: This interview is part of a group of interviews documenting the search for the top quark at the Collider Detector at Fermilab (CDF). The interview with Shochet reveals how he got involved in high-energy physics; how he then became involved with the Collider Detector at Fermilab (1976), with Bob Wilson and Jim Cronin; how the detector was built, the decision-making process for designing the detector; work on the detector trigger, the general structure and then the detailed design; working as part of a collaboration and the small number of people working as part of the collaboration; discussion of the early physics working groups and heavy flavors groups; discussion of the top search; working with the silicon vertex detector; the writing of the paper that would later appear in Physical Review - the process and decision-making on the format; discussion of the top quark; discussion of D0 and its effects on CDF; what was learned from the data from the silicon vertex detector; how the spokesperson for the collaboration was chosen and the duties involved; his tenure as spokesperson.

Transcript

Staley:

Okay, are we on here? Yes, okay. This is Kent Staley interviewing Mel Shochet at University of Chicago on October 17, 1995. Maybe the best way to start would be if you could fill in a little historical background. How did you get involved in high-energy physics and what brought you to CDF?

Shochet:

Okay, I became involved in high-energy physics entirely by accident. I was an undergraduate at Penn and I was looking around for a summer job and lo and behold, in one of the high-energy groups they needed someone for a summer job and that's how I started. Worked on some electronics for an experiment at Brookhaven, spent a summer at Brookhaven working from eight in the morning until two in the morning seven days a week and fell in love with it. Shows how nuts I am that I did and that was it.

Then I stayed in high-energy physics. As far as CDF, when Bob Wilson first decided to have an initiative looking at the possibility of having colliding beams at Fermilab, he set up a colliding beam department with Jim Cronin as the head and that was in December of 1976, and the initial group was about, must have been about, 10 or 15 people meeting informally discussing both accelerator and detector issues and I was a member of that initial group.

Staley:

Now obviously it's a long and complicated story how the detector got built. What do you think were the sort of crucial stages in particular in terms of obstacles that had to be overcome to get the detector built?

Shochet:

Well, it's hard to remember everything that happened and repression is wonderful, it's probably even harder to remember the difficult things that happened. Certainly the first major step that we had to take was to agree upon a general detector configuration. What was the detector going to look like, was it going to have a magnet or not, if it had a magnet was it going to be a dipole magnet the way UA1 had been. Was it going to be a solenoid magnet, which had been typical of the detectors in electron-positron colliders. And we all had to agree what the basic definition of the detector was going to be before we could proceed to, in principle what's harder but in practice what's really easier, and that is determining what the best designs for each of the individual pieces would be, and a lot of that actually occurred rather early on.

It began in a summer study that we had in Aspen in the summer of 1977, and there we broke up into groups that were to focus on designing either a non-magnetic detector or a magnetic detector, which we did. In the end, after a number of months of arguing the pros and cons, it was generally agreed upon that the magnetic detector would be most powerful and that what we were really looking at was a solenoid design. After that it took a long time to settle the technical details, what was the best way to design each individual piece, and that was a lot of hard work. But at least our direction was pretty clear once we set the general framework.

Staley:

I see. So that basic decision that we will have a magnet, it's a solenoid, that was a major turning point?

Shochet:

Yes.

Staley:

What were the considerations in favor of a solenoid, do you recall?

Shochet:

You expect me to remember details of 18 years ago. I honestly can't. I could certainly look it up. I have some notes from the time. My guess is, the bottom line was that a magnetic detector could do everything to first order that a non-magnetic detector could do but it had additional handles that we thought were going to be crucial. For example, measuring the momentum of the track of an electron to see the hypothetical decay of the W, which of course was the hot topic at the time, so that you could more easily separate it from other potential backgrounds in which there was an accidental overlap of a charged pion and a pi-0, which would be harder to separate without a magnetic detector. I don't remember, but my guess is that it was that sort of notion, that a magnetic detector gave you that many additional handles against unknown backgrounds that you might face.
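The "additional handle" described here, comparing a track's measured momentum with the matching calorimeter deposit, can be sketched as a simple E/p selection. This is purely an illustration, not CDF's actual algorithm; the function name, event fields, and threshold are all hypothetical:

```python
def looks_like_electron(calo_energy_gev, track_momentum_gev, ep_max=1.5):
    """Crude E/p handle: a real electron deposits roughly as much energy
    in the calorimeter as its track momentum, so E/p is near 1. An
    accidental overlap of a charged pion and a pi-0 tends to leave a
    calorimeter deposit much larger than the charged track's momentum,
    pushing E/p well above 1. Threshold is illustrative only."""
    if track_momentum_gev <= 0:
        return False
    return calo_energy_gev / track_momentum_gev < ep_max

# A W -> e nu style candidate: 40 GeV deposit matched to a 38 GeV track.
print(looks_like_electron(40.0, 38.0))   # True (E/p is about 1.05)
# Overlap background: 40 GeV deposit, but the charged pion carries only 12 GeV.
print(looks_like_electron(40.0, 12.0))   # False (E/p is about 3.3)
```

Without a magnet there is no track momentum to compare against, which is the sense in which the non-magnetic design lacked this handle.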

Staley:

You mention the W and the physics that you were thinking about at that time. Was that the big piece of physics that you initially were in pursuit of and when did the top sort of become a major item on the agenda?

Shochet:

Well, when we first started to think about this, the single most important item on the agenda was the vector bosons, the W's and the Z's. They hadn't been detected. It was at the point where their mass could be predicted reasonably well if you believed in this relatively new thing of the standard model. It seemed to be rather good. And it was a benchmark, it was something you could say, this we can do and this is how well we can do it. Everybody understood that because the collision energy would be so much higher than anything that had been achieved, and of course the CERN collider had not as yet run.

There were all sorts of new things that one might be able to see and you wanted to have a detector which you felt was generally good at observing the elementary constituents, at being able to measure directions of scattered objects out of a hard collision, be they quarks or gluons, or whether they were photons and electrons and muons. But at least that was one benchmark that you could look at, how well you would do on a W or a Z. So it was certainly important.

It was the only thing that you could say, that ought to be there. At that point, I think the upsilon in fact was not yet discovered, at that point. And so, there wouldn't have been any reason why one would have been looking for a top, and actually even after — within a year or two — but even after the B was initially discovered, and people said well listen, they come in doublets and there ought to be another one. There were no good estimates at that point of the mass of quarks. Anybody who ventured to guess, by and large was doing it by numerology.

The charm had been one and a half, and the bottom had been five, and the strange had been a half and that's sort of a factor of three each time, okay, well, clearly there's a rule of threes and the top is going to be 15. So, there was no reason to believe the top was going to be as heavy as it was. And so that was not, I don't believe, a major thrust in our thinking although if you wanted to I could probably pull out my report from that summer study and see if anybody was talking about it.
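The "rule of threes" numerology described above is just a geometric extrapolation from the quoted quark masses. A few lines make the arithmetic explicit (the numbers are the rough GeV values mentioned in the interview, nothing more):

```python
# Rough quark masses in GeV as quoted here: strange ~0.5, charm ~1.5, bottom ~5.
masses = [0.5, 1.5, 5.0]

# Each step up is roughly a factor of three...
ratios = [b / a for a, b in zip(masses, masses[1:])]

# ...so the numerological guess for the top was another factor of three.
top_guess = masses[-1] * 3

print(ratios, top_guess)  # roughly [3.0, 3.3] and 15.0
```

The point of the anecdote, of course, is that this kind of guess badly underestimated the actual top mass.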

Staley:

Although, they're published right?

Shochet:

That summer study? There's the little Fermilab booklet, yeah, you have that? That little Fermilab document?

Staley:

Yeah. So...

Shochet:

Of course I'm now curious, you'll have to excuse me for one second while I just pull it and just look at it.

Staley:

Okay, sure. [tape off] [tape back on]

Staley:

Now as the detector's coming together, what particular, I guess we can just talk in terms of hardware, what particular pieces of the detector did you work on, or the Chicago group in general work on?

Shochet:

Well it's something that Henry and I discussed early on. What we wanted to do was find a piece of the detector which we found intellectually challenging and appropriate to a university. It was clear that the really big mechanical pieces, like the calorimeter modules, had to be built at a national lab, either at Fermilab or Argonne, just because of the need for large numbers of technicians and the ability to move and control really heavy stuff. And we decided that one of the most intellectually challenging issues was the one of triggering.

Because it was clear that if the accelerator ever reached design luminosity, which of course by now it's reached twenty times design luminosity, but if it had ever reached design luminosity, there were going to be fifty thousand interactions per second, and the most that we would be able to write on tape for analysis was going to be on the order of one, and the question then was how do you select the one most interesting event out of every fifty thousand. So we latched onto the trigger very early.

We latched on to it so early that there wasn't even a debate about who was going to do the trigger because nobody even had time to think about it, we said we were going to do it. And that's what we worked on. First the general structure of how it was going to work and then the detailed designs of all of the individual components of the system.

Staley:

Did that then involve thinking about what the physics would be that you would be doing with this detector?

Shochet:


Yes and no. Yes in that the physics was crucial and if you ignore the physics you shouldn't be working on the trigger. No in that we realized pretty early on that we couldn't know what the physics was going to be like. At least we hoped we didn't know, I mean the hope was that we were going to find things that no one anticipated. So you had to design something that in fact was quite general and quite flexible, so that if you started to see a hint of new physics you could alter the system to focus on that.

So what we decided to do quite early is to try to focus on the elementary partons that would be the decay products of anything new. So electrons, muons, jets, which meant quarks and gluons, missing ET for neutrinos. And the goal was, as generally and as quickly as possible, to try to find all of these objects and to make lists of them. Here are all the electron candidates with their energies, the directions that they're pointing, here are all the jets, their energies, the directions they're pointing, the muons, the missing ET.

And once you construct that list, then that could all be done in dedicated hardware with enough handles so that you could fine tune the way you found jets, and your minimal requirements on electrons. But essentially an algorithm that was predetermined. Then once you make the list you allow a very fast but programmable device to make the final decision of what it is that you are interested in. So do you want events with one high energy electron, if so, how high energy, is there anything else that you require to go along with that, like missing energy or jets or muons? That provided the flexibility.

If there was some new object that no one had even imagined, that decayed into a muon and an electron and two jets, this allowed the flexibility, once you started to get the hint of that, to focus on it. And so it wasn't completely built in hardware but enough was built in hardware so that it would be fast, because it had to be fast enough so that we wouldn't incur dead time, and that meant making a first decision in under six microseconds as it was initially designed.
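The two-stage structure described above, fixed algorithms that build object lists followed by a fast programmable selection, can be sketched in a few lines. This is only an illustration of the idea, not CDF's trigger; all names, event fields, and thresholds here are hypothetical:

```python
# Sketch of a two-stage trigger: a predetermined "hardware" stage lists
# candidate objects with their energies and directions, then a programmable
# stage applies whatever selection (trigger "menu") is currently loaded.

def hardware_stage(raw_event):
    """Predetermined algorithms: gather lists of candidate objects."""
    return {
        "electrons": raw_event.get("electrons", []),   # [{"et": ..., "eta": ...}]
        "muons": raw_event.get("muons", []),
        "jets": raw_event.get("jets", []),
        "missing_et": raw_event.get("missing_et", 0.0),
    }

def programmable_stage(objects, menu):
    """Fast programmable decision: accept if any menu entry is satisfied."""
    for trig in menu:
        n_good_ele = len([e for e in objects["electrons"] if e["et"] > trig["ele_et"]])
        if n_good_ele >= trig["n_ele"] and objects["missing_et"] > trig["met"]:
            return True
    return False

# A menu asking for one high-ET electron plus missing energy
# (a W -> e nu style selection; the 15 GeV thresholds are made up).
menu = [{"n_ele": 1, "ele_et": 15.0, "met": 15.0}]
event = {"electrons": [{"et": 30.0, "eta": 0.2}], "missing_et": 25.0}
print(programmable_stage(hardware_stage(event), menu))  # True
```

The flexibility lies entirely in `menu`: chasing a hint of a new object means loading a new menu, not rebuilding the hardware stage.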

Staley:

Now the, oh, I had a question, I almost forgot. You said that when you as a group decided you wanted to work on the trigger that everyone was so busy that there was no real debate over it. Was the detector — collaboration as a whole, shorthanded in those early stages?

Shochet:

Um-hmm, yeah, there were, I think initially, well of course in the initial studies like that first summer study in Aspen, there weren't groups yet, there were interested individuals. When we finally got together as a collaboration and attracted the first, which turned out to be the major, foreign collaborators, the Italians and the Japanese, that was really the core of the collaboration that would build the detector. My recollection is that there were only thirteen groups. So in fact there were relatively small numbers of groups given the number of systems that we had to build. So there was no shortage of areas that needed a responsible person or a responsible group.

Staley:

Do you think part of that was that it was just hard to attract people into a collaboration when you are this far from being close to taking any data to get physics from?

Shochet:

Yeah, it was, number one, it was very early on. A lot of people were busy with other ongoing projects. And I think another reason may well be, that it wasn't so terribly visible. You know, it was going on, but it was not yet the era of mega collaborations. It was not like the SSC where people said 'My God, fifteen years from now there'll only be two experiments in the world, I'd better jump aboard or there is not going to be any ship left'. It was nothing like that. High-energy physics was still dominated by relatively small fixed target experiments.

There were the collider experiments at the electron positron machines, those were still modest collaborations. UA1 and UA2 were in the design stage, they were growing but they were still modest collaborations. And so the size of CDF initially, being 13 institutions with, at that time, I don't know, maybe 50 or 60 people, because very early on you don't have the postdocs, you don't have the students, that was...it was not a very small group. And it was not as if that was the only game in town.

Staley:

Yeah. Let's talk about the top analysis and sort of how it evolved.

Shochet:

Okay.

Staley:

I know there was a heavy flavors group, right? This was one of the early physics working groups.

Shochet:

That's right. That's right. Yeah, Brig Williams and I were the conveners of that group when it first was organized. In the early days of the detector we were organized around what we called the algorithm groups, which focused on how you were going to reconstruct the primary objects that all physics analyses were going to use. How would you identify electrons, and how would you improve the signal to noise and optimize that. How would you reconstruct muons and jets and missing ET. And those were the groups that were established. And when we were starting to get data, it became clear that we now had a basic reconstruction program working and the focus was now going to shift to extracting the physics. And so we changed the group structure from one that was focused on individual partons to ones that were focused on the physics that you wanted to do.

So there were really, I think there were only three groups originally. One was the WZ group, or an electroweak group, one was a QCD group, and one was a heavy flavor group which was essentially the top quark group. Because at that point it wasn't clear that we were going to really be in the B physics game in a serious way. We didn't have a vertex detector. We were clearly going to do some work but it looked like it would probably, largely be production cross sections and that sort of thing.

Staley:

So, at what point did the heavy flavors group break into the top and the B groups? And where was the top search at that point? Not so much in terms of like years or dates but just...

Shochet:

Oh boy. I don't remember. What happened was, that the primary focus of the heavy flavor group was top although there were presentations on B production. But after a while it became clear that in fact CDF was going to be doing more and more B physics. And there just wasn't enough time in a regularly scheduled two to three hour meeting every other week to really be able to present all of the talks that people wanted to give. Both on B physics and on top physics. And at that point it split into a B group. But the B group, that was relatively late. It may only be four years ago that we split off into a B group. Once the silicon vertex detector was there...

Staley:

Was that the growth in B physics?

Shochet:

The growth in B physics was largely the silicon vertex detector. Which...there were two things. It was clear by that time that we could already start to reconstruct exclusive B decays, and there was physics that we could do, but the prospect of having a silicon vertex detector, which promised both more cleanliness and being able to measure properties that were inaccessible to us in any other way, like lifetime and like mixing and that sort of thing, really increased the interest in CDF in general in B physics, and especially that large cadre of people who were involved in designing and building the silicon vertex detector, their focus was on B physics. There really was an explosion of interest and that's when we decided that there really had to be a separate group to have its own three hours every couple of weeks focusing on that and then we split it off.

Staley:

I'm interested in the silicon detector. It was not initially part of the detector, but people were talking about it...

Shochet:

It was part of the original design of CDF.

Staley:

Oh really?

Shochet:

If you look in the blue book which is the design report for the CDF detector, wherever it is, it is there in 1981 or I think it was '81 or '82 when we submitted the design report to the lab.

Staley:

I see.

Shochet:

Silicon vertex detector was there. The limitation in funds available to us to build the detector forced us to put that on the back burner until we could afford it. But it's there in the original design.

Staley:

I see. Okay. Then what were the — I guess mainly what I'm interested in is what had to happen for you to have sort of confidence that this detector, that the silicon detector, was going to do what was promised and that you — and that it was worth, you know, the expense and effort that had to be expended to make this part of the detector?

Shochet:

Well, there was a dedicated group, largely a group of Italians who had proposed it from the beginning and had some experience in silicon at CERN. And had done lots of prototype tests and were quite convinced that this was going to work very well. By the time we could afford it, there had been silicon used in fixed target experiments and it was clearly a very powerful technique. There were still questions about how it would perform in the environment of a hadron collider, especially as the luminosity went up. Would there be so much radiation locally that there would be trouble? Every rough estimate that we made of that indicated that it should work okay. And at that point we just went ahead with it.

Staley:

When did you start to see events that made you think that, well you know, we may have top events here, that started to be more compelling?

Shochet:

In any exploratory experiments, there's always the zoo, there's one of this type and one of that type and you look at it and you say I can't figure out what that is. And they sit in the zoo, and in most cases, in almost all cases, it turns out it's one event that's way out on the tail of a distribution. You get ten times as much data and the distribution just fills in and it's clearly, it was just a fluctuation that you happen to get. One that was so rare so early. And early on, there just was no evidence. We started out seriously in this thing that became the CDF UA2 competition which resulted in that pair of TV productions, the NOVA on this side of the Atlantic, and one by the BBC on the other side of the Atlantic of the 'Hunt for the Quark'.

That started up right away, before we got to run, there was the UA1 claim that they had seen the top quark and then it became clear that they didn't. And that indicated that in fact this thing was pretty massive. It still very well might have been in the reach of the CERN experimenters because there was a great increase in luminosity that occurred at CERN. And so we were all in the game. UA1 dropped out early because of the problem they had with their upgraded detector, that they were never able to rebuild those electromagnetic calorimeters, and so they were sitting there with half a detector. So then it was us and UA2. So we were in the game very early and the studies were all quite serious and all null results, which we continued to present and to publish, so that first, at that first meeting at La Thuile, I made the statement that it was unlikely that the top quark had a mass below 60 GeV.

At that point we hadn't finished the systematic study well enough to quantify it so we got some chuckles out of the theorists. So that's all I would say, is it's unlikely, and I wouldn't say how unlikely. And then UA2 made essentially a similar statement that they saw no evidence below 50 or 60 GeV either and that led to a publication and then a later publication which increased that to 90 GeV. It wasn't tantalizing. There was just nothing. I mean, you did the search and it was clear that everything that you saw was consistent with background. Once the silicon vertex detector was working and we were taking data, in run 1A there was a collaboration meeting in August of 1993, and at that point there were three groups that were separately searching, one was looking for the di-lepton signature, another was trying to identify B quarks in W events using the silicon vertex detector, and the third looking for extra soft leptons in association with W's. And they all made their presentations.

It was late, in my recollection, it was late in the day on Friday, the second day of the collaboration meeting and sort of interesting, nobody paid much attention. I mean, my eyes opened up and so did Alvin's and we looked at each other because we were counting how many the di-leptons saw and then how many the SVX saw.

Staley:

And these were all being presented separately?

Shochet:

They were being presented separately. And you know, each one was saying a little bit more than expected. But on their own, okay, so you expect less than one and you see two, so what. You expect three and you see five, so what. But Alvin and I added them all up, and when you added them all up, it looked like there very well might be something there. I mean either we were miscalculating the background or there was something there or nature was playing a nasty trick on us.

[laughter] In fact, at the end of the meeting, just before we were going, Alvin with the microphone said something about the fact that maybe we were starting to see something quite exciting. And I heard the buzz around the auditorium, 'what's he talking about, what exciting, where, in what analysis?'. And it was clear, as I say, I think it was because it was the end of the second day of a long collaboration meeting and when you've heard thirty talks in various physics topics sometimes the eyes glaze over. But that was a time where to us and I clearly thought there was...
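The arithmetic behind "adding them all up" is a simple Poisson counting exercise. With rough numbers of the kind quoted here (illustrative only, not the published CDF backgrounds or counts), the combined excess is less probable under background alone than either channel by itself:

```python
from math import exp, factorial

def poisson_tail(observed, expected_bkg):
    """Probability of seeing >= observed events from background alone,
    i.e. one minus the Poisson CDF up to observed - 1."""
    cdf = sum(exp(-expected_bkg) * expected_bkg**k / factorial(k)
              for k in range(observed))
    return 1.0 - cdf

# Illustrative per-channel numbers: (observed, expected background).
channels = [(2, 0.8), (5, 3.0)]

for obs, bkg in channels:
    print(f"single channel: P(>= {obs} | b = {bkg}) = {poisson_tail(obs, bkg):.3f}")

# Summing the channels sharpens the excess relative to any one channel.
obs_total = sum(o for o, _ in channels)
bkg_total = sum(b for _, b in channels)
print(f"combined: P(>= {obs_total} | b = {bkg_total}) = {poisson_tail(obs_total, bkg_total):.3f}")
```

Each channel alone is unremarkable (tail probabilities near 20 percent), while the sum is a few times less likely, which is the qualitative effect of tallying the di-lepton, SVX, and soft-lepton counts together.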

Staley:

Okay, so, I'm trying to remember where we were.

Shochet:

Oh, were we still on...? Oh yeah, I was telling you about that collaboration meeting.

Staley:

Right, the collaboration meeting.

Shochet:

Right, so then rather shortly after that, we put on a concerted effort to try to see what had to be done to finish it up. And our goals, and it was true throughout that period, turned out to be overly optimistic. Our first goal was the pbar-p meeting in Tsukuba, which was going to be in October of that year. So the question was could we finish it up by October and very quickly the answer to that was no. And then it was could we finish it up for the winter conferences in Aspen and at Moriond and La Thuile.

And during that period, there was just a lot of work that had to be done to answer all of the questions raised by the collaboration. That was the experience that really convinced me about the concerns that some people had had: that a big collaboration was really a monolith, that there might not be sufficient critical analysis, and that you really needed multiple experiments in order to be able to check, as you had traditionally in small experiments. You always wanted a number of different experiments to check the results. And what I saw there was, in fact, critical analysis from the inside, from different groups that were trying to do the same or similar analyses, from people who were really experts and knew how to really dig hard and ask the hard questions.

That process was much more difficult and much more comprehensive than any outside reviewer could ever be. When we wrote a paper, whether it was that first paper on the evidence for the top quark, or last year's observation or our 150-page paper on the W mass, the last thing we worry about is the referees' reports from the journals, because it's been gone over by such a fine-tooth-comb process within the collaboration, by godparent committees or, for the more important things like top, by every member of the collaboration, that I just cannot imagine that there is somebody from the outside who could find something that wouldn't have been found by the people who knew what was inside that detector.

Staley:

Right. So, this was a long process obviously, and as I understand there were debates over a number of points related to the paper. Perhaps you could comment on a couple of them. For example, the format. The original idea being to write four PRL's.

Shochet:

Yeah, there were really two issues that we were trying to address, which resulted in this sort of roundabout way of coming to what, in the end I think, was a fine answer. One was providing enough information that the community could really critically assess what we'd done. And the other was making it available to the community, with a venue that in fact most members of the community had accessible to them. The collaboration had decided long ago that we would publish in American journals, so there wasn't an issue of should it be Physics Letters or should it be Physical Review Letters. We had decided that we were going to publish in Physical Review Letters and the Physical Review.

And as things developed over the years, in fact, many, many more physicists in the community read Physical Review Letters than the Physical Review. And so, plus the review process, the amount of time that it took, from submission until it actually appeared in a bound journal was much longer for the Physical Review than Physical Review Letters. And so on the one hand the way to get it to the community quickly was Physical Review Letters. On the other hand, Physical Review Letters had this limitation of no more than four pages. And so the first thought that we had was that we did have three separate analyses underway, the dilepton, and the two single lepton analyses, one the B tag with the SVX detector and the other B tag through soft leptons.

And so the notion was let's try that. There'll be one four page paper from each of those three groups describing the method of the analysis, the estimation of the backgrounds, the systematic checks and the answer. And then there'll be a fourth paper that pulls it all together and draws a conclusion and presents any parameters that you would get out of that synthesis of the three analyses.

Staley:

You mean like the mass and the cross-section?

Shochet:

Like the mass and the cross-section for example. And any other properties like transverse momentum, anything else. And at first we thought it was a great idea, and then there was a problem, well, how do you actually do it? If you're really going to present a coherent picture you need an introduction, you need something that sets the groundwork. What is it you're looking for? Why are you looking for it in these three particular search techniques? Why aren't you using other techniques? Why are you using three? How does it all fit into one coherent picture? And it's hard to do that in the fourth paper because see, people have already read the other three and they are wondering why you are doing all of these analyses.

Well, okay, so you put that introduction into one of the papers but then that leaves only a page and a half for one of the analyses. Can you really do that? And we tried and we actually wrote some drafts. And it became pretty clear, pretty fast that it just wasn't going to hold together as a coherent document. And at that point we went back and we said we will just do it as a single Physical Review. We would get it to the community either through e-mail or by physically mailing it to all the major institutions so that everybody would have it and then the suggestion was made, well listen, once you have that, then you've served the purpose of enabling everybody who wants to read all the details to see all the details. There is another group in the community, not even necessarily the high energy community, but the broader physics community, that really just wants the bigger picture. What were you looking for?

Roughly how did you do it? And what's the answer? And so, that's when we came up with the idea of: you write this large Physical Review article that presents all the details and at the same time, or shortly thereafter, you prepare a four page executive summary for Physical Review Letters, which serves both audiences and doesn't shortchange either one. And that's the way it turned out.

Staley:

I see.

Shochet:

And I think it was actually pretty successful, and my guess is that most people read the four page paper, relieved that there was a 150-page paper that could back it up.

Staley:

The 150-page paper, obviously when you opened that process up, and said well okay, we can't do four PRL's, we need to include more. What exactly more goes in then becomes a matter of debate.

Shochet:

We were worried about that because there were other analyses going in other directions and if you opened it up to everything, there would be no end from the point of view of time. There would be..., it seemed to us that by adding additional analyses we would delay that paper by six months to a year and we didn't want to do that. And so we said early on, and there was general agreement within the collaboration, that this big paper would contain those three analyses, an introduction and a conclusion, and not much more. And that's the way it turned out. There was a little more but not much.

Staley:

Although I know there's a section on kinematics and that a number of people had been using these kinematic approaches to try and define the top. There were several...

Shochet:

That's right. There were. Those techniques started early on. For many of us, and it certainly was a majority of the collaboration (we never polled it, so I don't know exactly what it was, but my guess is it was an overwhelming majority), the feeling was that that is not the way you discover the top quark. My rationale was pretty straightforward: the essence of those analyses was that they found the events with W's in which the jets were more energetic than one expected from a Monte Carlo simulation of the background. That presents three alternatives: One, the Monte Carlo calculation is wrong.

That in fact nature produces jets with a harder momentum spectrum than that Monte Carlo, which was a leading blah, blah, Monte Carlo, predicted. Possibility number one. Possibility number two is that that really was harder than nature provides in a normal W ??? against hadrons, and there is something else. But that then leaves two possibilities. That something else is top, or that something else is who knows what. And in that analysis there was no clear way of addressing that issue. It just didn't agree with the Monte Carlo. And my view, and I think that of many others, was: that is supporting evidence, but that's not going to discover the top quark. To discover the top quark you really have to look at the essential ingredients and ask, is that what you see?

What really differentiates the top quark from anything else? And in fact, you know, D0 unfortunately was in the situation where it didn't have a vertex detector and so to some extent, I mean although the analysis is different, it relies much more heavily than CDF does on this notion that jets should be more energetic in top than in the background. Whereas, in the end, what we tried to do, and we had the apparatus to do it, was to ask the question, are there multiple W's? Are there B's with W's?

Are there multiple B's with W's? Are there pairs of W and B jets which give you the same mass for such pairs in an event? In our view those were the essential things that separated top from anything else you could think of which might be new, or just a miscalculation of the background. So that's what we looked for. And therefore the event structure analysis was a nice study and additional supporting evidence that what we saw was consistent with the top picture. But it was not in my view prima facie evidence for the top quark.

Staley:

And for that reason, the significance calculation is based just on the counting experiments?

Shochet:

Which paper are you talking about?

Staley:

In the evidence paper.

Shochet:

In the evidence paper, there were two reasons. That was one reason; the other reason was we did not feel that the study of systematic uncertainties was sufficiently far along that the event structure analysis could be put on the same quantitative level as the counting experiment. Moreover, it was clearly correlated with the counting experiment. And when you look at significance you have to be very careful not to double count. I mean, you can look at the same thing three different ways, and you don't get to count it three times. There was that unknown correlation between the two that made it not so clear how you would combine those two approaches.
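[Editor's note: To illustrate the kind of counting experiment being contrasted here, the significance question reduces to how likely it is that background alone fluctuates up to the observed number of events. Below is a minimal sketch in Python using a simple Poisson tail probability with hypothetical numbers; it is a textbook illustration, not CDF's actual calculation, which also folded in background uncertainties.]

```python
import math

def poisson_p_value(n_obs, b):
    """Probability that a Poisson background with mean b
    fluctuates up to n_obs events or more: P(N >= n_obs)."""
    # Sum the finite lower tail P(N < n_obs) and subtract from 1.
    lower_tail = sum(math.exp(-b) * b**k / math.factorial(k)
                     for k in range(n_obs))
    return 1.0 - lower_tail

# Hypothetical counting experiment: 12 candidate events observed
# over an expected background of 5.7 events.
p = poisson_p_value(12, 5.7)
```

The smaller this tail probability, the less plausible a pure-background explanation. The double-counting worry in the passage is that quoting such a p-value from two correlated analyses as if they were independent would overstate the combined significance.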

Staley:

I see. You mentioned D0. I'm curious what your impression is, how you would assess the impact that D0 had on CDF as another collaboration, you know, just around the bend from CDF. Do you think that the overall effect of that was helpful or was it more of a, you think, a combination...?

Shochet:

Oh, I don't know, it's very hard to assess. I mean, naively one would think that with that competition around the ring it would really light a fire to get CDF to finish up and get it out. Yet we saw that excess in August and we didn't tell the world until the following April, meanwhile denying fourteen times to various reporters who heard rumors that we were about to say something. So clearly it didn't light that much of a fire. I don't know, it's very difficult for me to assess the psychological factors. Yes, D0 was there, and clearly they had the capabilities to find the top quark. On the other hand, it was their first run with a new detector; they didn't have the experience with their detector that we had had with ours, because we had run before.

They didn't have the silicon vertex detector and when we looked at our data, if we threw away the silicon vertex detector, we would not have been talking publication. So I think, yes, D0 was there, and it lit a little bit of a fire to say listen this is not something you're going to sit on your hands for five years and think about. On the other hand, I think realistically, we did not feel that, unless nature had been very, very, kind to them and handed them two or three times as many events as it handed us, that it was likely that they were going to be in the kind of position that we were in, because we had that SVX which really was providing an awful lot of the statistical significance.

Staley:

Did you feel it was also..., I know that various people have said to me that it was important to have had those years learning to understand the detector, particularly with jet corrections and things like that. Did you feel that that was also an advantage in terms of being able to...?

Shochet:

Yeah, I think it was. I mean, we were, we had done W reconstruction for a long time. We had done a lot of jet physics for a long time. There were checks that we could make internally to our own data, based on previous publications, that were sanity checks. And that we could do relatively quickly, because we had done the analysis in past runs and the algorithms were already there. You could check, okay, here's your analysis, it's finding these top events. Get rid of the jets. Do you get the W cross section that we published? I mean, there were checks like that that one could make, and it really is a learning experience. And CDF had the advantage of having learned a great deal over the previous five years.

Staley:

As a philosopher, this issue interests me as a methodologically interesting issue. When people are working on an analysis, it is often said that you have to be careful not to bias your analysis by picking your cuts to save this or that event. What's your assessment of how well CDF did in avoiding that kind of potential bias, and any particular things that you think might have been done, well, particularly well, or maybe a little bit better, to avoid that?

Shochet:

Well, you know, it's hard. Again, as it turns out, and I didn't know it beforehand, in a collaboration of this size, on critical issues everybody has a strong opinion. And so there were an awful lot of people there who were ready to hit the roof if it looked as if anybody was tuning cuts on data. And so as a principle, it was absolutely verboten. In practice, it's a little harder. I mean, for the single lepton searches, it's much easier to make sure you don't do that, because the data samples that you start out with are overwhelmingly background. And so you set your cuts and so on. And it's a while, especially when you're also debugging a new detector like the SVX.

It's a while before you're down to the point where you really think your signal to background is good enough that you might be starting to bias yourself. With dileptons, it's harder because you see them online. You know, they come in, and the next morning everybody has seen it because it's such a unique signature. There's so little background that online programs pick it up. And you still try very hard not to do it, but is there any way that you can prove that somebody hasn't remembered the properties of event two of an event sample that's only five events long? There's no way you can prove it. But I think that there was an honest effort to try to avoid that.

And between the evidence and the observation, we were really very strict. And, I mean, there were lots of suggestions on ways to improve the signal to noise. And they were even done fairly: they were done before we had the data, they were done with Monte Carlo, when the data was only twenty percent in. But there were some really hard-nosed people in the collaboration who said, in that 150-page paper we told people how to find the top quark, and we thought we had evidence. The only way to prove to the world that in fact it's there is to use the same cuts and now take five times as much data and see, is it there or isn't it there.

And the only things that we changed were really technical cuts. I mean, improvements in vertex detection which applies to B physics and top physics and is not involved in looking at top events. But how you set jet cuts, and the number of jets, and the energies and the technical cuts in the dilepton search, those were just, they were fixed.

There was no way that the collaboration was going to allow those to be changed, and they weren't. I think in the end we could reasonably say, in this paper we saw evidence, and we presented our method, and then we let the data speak for itself. We collected four times as much data and we showed you whether it was true or not. And what we saw as evidence in fact really was there, as we suspected.

Staley:

How was the decision made...

Shochet:

Oh let me say one thing. I must leave in about 10 minutes. Now, I don't want to cut it off, I mean I enjoy this. I'm glad to resume at some other time, or do it on the telephone...

Staley:

Actually I'm almost done, so...

Shochet:

Okay, fine.

Staley:

Ten more minutes would be more than enough. One quick question about how the decision was made to look at the 1B data at the particular time that you did. I mean, you had, I know that the B tagging algorithm had been reoptimized and it reached the point where...?

Shochet:

Right. What we did, since we had learned from that first shot of data with the silicon vertex detector, we felt we could do a little better on just general vertex finding. And because we had an improved detector, which had better signal to noise, and didn't have some of the diseases that the first one had, we felt that we could improve the technical algorithm somewhat. And we all said, and we all stuck by, nobody looks at the data until it's fixed and it's frozen. And once we fixed those technical changes and had measured efficiencies with samples of B events that had come in and looked at background rates, and so on, and we felt that that was it.

And then we said okay, now we can look at the data. And again, this is I think another indication of that statement that I made, which is something that I learned as well as everybody else, about how strong the oversight is within a large collaboration of very independent-minded young men and women. I mean, there was no way that we were going to be able to do that. If we had looked at the data first, and then changed the cuts, we would not have been able to publish. I mean, there would have been holy hell to pay. You had to do it right or you couldn't do it at all.

And that, I mean, that was clearly the right way of doing it. But it was internal, you know. People afterwards, friends of mine at other places, will ask questions about this. Well, didn't you really set the cuts afterwards, aren't your estimators biased, and blah, blah, blah. And the answer is no, we didn't. And the reason we didn't is because people like you were in the collaboration. And they didn't keep quiet, they had strong voices, and they insisted that it be done in a way that when you were done you had an unbiased estimate of the significance.

Staley:

Okay, just one final question.

Shochet:

Okay.

Staley:

I know you were a spokesperson for the collaboration for a while. How is a spokesperson chosen and what are your responsibilities as spokesperson?

Shochet:

Again, this was a decision of the collaboration, made at the time that Roy Schwitters became director of the SSC. Roy had initially been selected by Lederman, the lab director, who was looking for someone to oversee the big construction project of the CDF detector. That was not a decision of the collaboration. But when Roy announced that he was leaving to go to the SSC, this was something that we discussed within the executive board, and it was decided that we wanted a democratic process. That we wanted the collaboration to decide who was going to lead the collaboration. And we discussed alternatives. I mean, it was possible for the executive board to make that decision.

One senior representative from each institution. But again, it became quite clear to most people that that was not the way to go. An awful lot of the hard work and bright ideas come from this cadre of young people, the post-docs, the senior graduate students. And they have a right to express their view of who they think ought to lead the collaboration. And so we came up with a set of rules whereby we select a spokesman. It's a rather complex, well, it's not complex, it just takes, every time we do it, about three months, in which we open nominations. Anybody can be nominated.

When nominations are complete, we form an election board of typically three people. Once nominations are complete, there is a list of those who have been nominated. That goes to each institution. And at each institution there is a discussion, and there's an ordered ranking of the people according to who it was thought would do the best job. And then what the election committee does is it gets a reasonable-size ballot out of that. And it's varied from five to eight, I think, candidates, who were most often supported by the institutions. And then it's one person, one vote.

Each person is asked to order the candidates. And the system that's used is a modified Hare system of proportional representation, where each person's ballot is opened. And when your ballot is opened, a vote goes to the person that you've labeled as your first choice, unless either that person has already been elected, in the case of more than one candidate, or that person has already been disqualified, if it's very late in the election and you've had to drop off the people with the lowest votes, in which case your vote is assigned to your second highest choice. So whenever you vote, your vote always counts. Then when someone gets a majority they're declared the winner.
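[Editor's note: The single-winner count Shochet describes is essentially an instant-runoff (Hare-style) transferable vote: each ballot counts for its highest-ranked surviving candidate, the lowest candidate is dropped when nobody has a majority, and those ballots transfer. Below is a minimal sketch in Python of that general scheme, not CDF's actual election rules, which may differ in details such as tie-breaking.]

```python
from collections import Counter

def hare_count(ballots):
    """Instant-runoff (Hare-style) count for a single winner.

    Each ballot is an ordered list of candidates, most preferred
    first. A ballot always counts for its highest-ranked candidate
    who has not been eliminated; when nobody holds a strict
    majority, the candidate with the fewest votes is dropped and
    those ballots transfer to their next surviving choice.
    """
    eliminated = set()
    while True:
        # Each ballot goes to its top surviving choice.
        tally = Counter()
        for ballot in ballots:
            for candidate in ballot:
                if candidate not in eliminated:
                    tally[candidate] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # strict majority: declare the winner
            return leader
        # Drop the candidate with the fewest current votes.
        eliminated.add(min(tally, key=tally.get))
```

For example, with nine ballots split 4/3/2 among three candidates, the smallest bloc is eliminated first and its votes transfer, which can elect a candidate who was not the first-round leader. This is why, as Shochet says, your vote always counts.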

Spokespeople are elected for two-year terms. I was elected to three such terms and declined to run for a fourth; I had had my fill of it. And the responsibilities of the, well, the major responsibility is the scientific leadership of the collaboration. It involves a lot of administrative stuff: there's dealing with the laboratory, with the program advisory committee, with the Department of Energy, fighting for resources, trying to make personnel decisions for various leadership positions.

And then there is a large scientific component of trying to help guide the scientific program. And for me that was the fun part; the administrative part in the end is what drove me to say I've had enough, I'm going to go back to do what I enjoy doing. The scientific part was wonderful. If someone else had done all the administrative part and all I had to do was the scientific part, I would probably still be doing it.

Staley:

Okay.

Shochet:

Okay?

Staley:

Well thank you very much.

Shochet:

It's a pleasure.