Notice: We are in the process of migrating Oral History Interview metadata to this new version of our website.
During this migration, the following fields associated with interviews may be incomplete: Institutions, Additional Persons, and Subjects. Our Browse Subjects feature is also affected by this migration.
Please contact [email protected] with any feedback.
Photo courtesy of Blair Ratcliff
This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Blair Ratcliff by David Zierler on May 17, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Blair Ratcliff, emeritus physicist and Permanent Member of the Laboratory Staff at SLAC. Ratcliff describes his ongoing work at the Lab since he retired in 2017, and he recounts his childhood in Iowa after World War II. He describes his undergraduate education in physics at Grinnell College and he explains the opportunities that led to his graduate work at Stanford, where he immediately gravitated toward SLAC as it was being built. Ratcliff describes working under the direction of Burt Richter in Group C, and he discusses his postgraduate research at CERN where the ISR colliders were starting. He discusses returning to SLAC to join David Leith on Group B and his work as spokesman on the spectroscopy program. Ratcliff narrates the origins of BaBar and his decision to create the Physics Analysis Group and to build up the SuperB factory. He discusses his advisory work for the Dune and LZ experiments, and he reflects on winning the APS Instrumentation Award. At the end of the interview, Ratcliff considers BaBar’s contribution to understanding the cosmic imbalance of matter and antimatter, and he conveys a sense of serendipity that BaBar came together at the right time, at the right place, and with the right people.
Okay. This is David Zierler, oral historian for the American Institute of Physics. It is May 17, 2021. I am delighted to be here with Dr. Blair N. Ratcliff. Blair, it’s good to see you. Thank you for joining me today.
Nice to be chatting with you. Thanks for having me.
Blair, to start, would you please tell me your most recent title and institutional affiliation?
I’m an emeritus physicist at Stanford and at the SLAC National Lab. My precise title was Permanent Member of the Laboratory Staff at SLAC. It’s a mouthful, but that was a special designation that SLAC had for senior non-faculty who had permanent status (that is, tenure) at SLAC.
Blair, when did you go emeritus?
2017, I believe, in the spring. So, it’s been about four years now.
COVID notwithstanding, in what ways have you remained connected to SLAC since you retired?
COVID has made an impact for sure. I’ve been a long-term member of an experiment called BaBar which began more than 3 decades ago. BaBar took data from late ’99 into 2008. We have been publishing papers during that entire period – well more than 2 decades now. We’re still an active collaboration doing data analyses, writing papers, giving talks at conferences, and so on. We usually have four collaboration meeting per year and are still publishing about five papers a year. But all interactions have been virtual since the start of COVID, including the collaboration meetings.
I have a second connection which started more than a decade ago. As BaBar was winding down and SLAC was no longer running the accelerator for particle physics (HEP), several Barbarians at SLAC tried to think about the most crucial roles that SLAC could play, and what the SLAC BaBar group should be doing next, and what we would most enjoy doing. We ended up joining a collaboration called LZ, which was a very large, second-generation direct dark matter search using about 10 tons of liquid xenon, looking for Cosmic WIMPs and possibly other forms of dark matter. The name LZ stems from the merger of two previous dark matter detection experiments: LUX (Large Underground Xenon) and ZEPLIN (ZonEd Proportional scintillation in LIquid Noble gases). LZ is being assembled now at the 4850-foot level in the Homestake mine in Lead, South Dakota. Although my direct role has been ebbing since I retired – and even more with COVID – I have tried to stay informed, review papers, attend collaboration meetings virtually, and so on. During the last year or so, COVID has been a challenge for the experiment, which is now installing and commissioning. But given the challenges, installation and commissioning are going extremely well, so that is quite a success, so far.
I’ve continued to serve on review and advisory panels for several the laboratories around the world. These include CERN, where I was a member of the LHCC (the LHC Committee) until about two years ago, and Fermilab, where I’ve been on several of the review committees for the long baseline neutrino program, which will be building the new Deep Underground Neutrino Experiment (DUNE) with the main detector also in the Homestake mine some 800 miles from Fermilab. I’ve been a member of the B Physics Advisory Committee for Belle II (BPAC) that advises the KEK laboratory on their particle physics program with the Super B factory that they have recently built. The experiment has been commissioned and is taking data, but still far from reaching its luminosity goals. I’ve been on this committee for about a decade now, as I remain fascinated by the physics questions that can be addressed in the B sector, using data samples 50-100 times larger than those from the earlier BaBar-Belle generation.
Blair, how well have you adjusted to physics over Zoom during this time of remote work?
[laugh] Well, you know, I don’t like it a lot, but here we are. Of course, in some ways it’s a benefit. When one retires, you have less access to travel funding, for example, but you can still virtually attend conferences and be a participating member without any funding at all, so that’s useful. I find it unfortunate to not be able to see my colleagues in person at conferences, but I would say the biggest single detrimental effect of COVID is not being able to go into the laboratory. SLAC is still not open, at least for non-essential activities. There has been some recent chatter about beginning to open, but a full reopening may not occur for some time to come I stopped going to SLAC late February of last year, so that’s 16 months. I’ve been in my office once since to retrieve some papers from my archives. The office is still there. [laugh] I even logged onto my computer while I was on site, and everything was still functioning. There’s a new layer of dust on the tables, papers, and computer keys.
Blair, what do you think are some of the long-term implications on experimental physics, given that so many people have demonstrated how well you can do the work remotely on a computer?
Well, of course, virtual work has been a long-term trend for the large international experiments in HEP since the rise of the internet or even before when collaborators used to purchase dedicated lines to laboratories in the ‘70s. … Many experiments have had virtual shifters, at least in part, and virtual data analysis meetings and so on were quite common decades ago. I don’t know how big an impact COVID has had overall for how we do particle physics. Probably a lot less for us than for the broader society. Of course, it’s been a big difference for the subset of folks on the ground who are building or running the experiments; and for large general meetings and conferences, I think the attendees are less focused on the meeting and it is challenging to meet and interact with colleagues. For collaboration meetings and so on, the talks themselves differ little from before, but more informal interactions, whether planned or not, are difficult to duplicate. Anyway, your question remains an open and interesting one – an ongoing social experiment.
When you’re building and running experimental detectors, some people must be on site, but HEP experiments figured out ways to utilize offsite people effectively years ago. Take BaBar for example… Depending on the time frame, we had up to 600+ collaborating physicists, but probably less than 250 or 300 were ever working directly on site and working on the experimental hardware directly. Mind you, Babar is an experiment that occurred between one and three decades ago. Clearly the computer age and the World Wide Web, which after all is a particle physics invention, have allowed this model to work and made international collaborations of the field practical. And we have become better at it with time. Maybe COVID has spread similar kinds of interaction more broadly and pervasively into the general society.
The conference scene has changed since COVID, and I have no special insight into how that will sort itself out now that the change was forced upon us. There’s something lost when people don’t gather in the same place. Virtual attendees focus a lot less on the meeting, tend towards multi-tasking, and can’t easily interact informally with the other attendees. I’ve been in virtual meetings which attempted to get attendees together using various kinds of break-out rooms. They’re better than nothing. I’ve had a few informal conversations at some of these virtual meetings at nominal coffee breaks, and they’ve been pleasant-not so dissimilar than what one would have sitting around the table, I guess. But it is harder to move the conversation further into one-on-one. We’ll see how it shakes out. I suspect that virtual meetings are here to stay in a kind of hybrid mode, with both in-person and virtual attendees, because it will save so much money and time, but we are going to have to figure out how to do it effectively.
And something is missed for sure. I believe that you have found these that interviews to go well on Zoom, though.
As we’ve gotten more used to communicating this way.
It’s still no substitute for the real deal.
No. That’s the way I see it as well. Zoom like meetings have certainly helped society a lot during COVID. You can imagine if COVID had hit 30 years ago. Think how hard it would have been. Could the general economy have kept going?
Closer to particle physics, I’ve been quite impressed with LZ, which was starting to assemble the detector in South Dakota at a time when parts of California were under strict lockdown…SLAC was closed tight, basically, except for a few essential employees. South Dakota was different politically and they had less stringent state protocols, but still, it was very, very challenging to get much in-person work done. Travel was a big challenge. Those who were at Homestake were stuck in place and had to be very dedicated to keeping everything going. Since the experiment is underground, there were restrictions in place already, but protocols become even stricter. Despite it all, the folks on the ground have persevered and, at least looking from a long distance here at SLAC, it is quite a success with only a modest delay overall to the assembly.
I don’t know how it’s gone at Fermilab with the DUNE experiment, and with the excavation of their cavern at Homestake, but it has probably been harder to keep that on track.
Well, Blair, let’s go back to your beginning. Let’s start first with your parents. Tell me a little bit about them and where they’re from.
Sure. My parents were farmers in central Iowa. I was born in a hospital, in Grinnell, which says that my parents were modern. [both laugh] In many ways, my early life was closer to my grandparents’ generation than to that of my somewhat younger sisters, because at the time I was born, it was during the Second World War, and many aspects of modernization, like electrification, had been delayed during the ‘30s by the Depression. My parents were young, still in their early twenties, but my dad was not drafted since he was a farmer—you had to be a sole operator, I assume. I don’t know that for certain. He shared work back and forth with my grandfather and his younger brother (my uncle) who lived about 2 miles away.
In many ways, the infrastructure of life in that part of Iowa at that time was still early 20th century and had been built before travel was made easy by the car. So many elements of communal life were local and rural, meeting with your close neighbors. We had a Quaker meetinghouse, the local church – my father was a Quaker – on the corner of our farm. Many of the locals were Quakers, but there was quite a mixture of American evangelical protestants. There was a surprisingly large population close by because the farms were usually only 80 to 240 acres. Within a mile or so along many a road, there might be four to six houses. Many seemed to be young families with children. Occasionally there would be older people – often the parents of one member of the younger couple living on the same farmstead.
Of course, people had cars in the ‘40s, but there were few good roads. There were roads everywhere though because… You know the way land checkerboards were originally laid out in the Midwest, every mile there’s a road, right? It goes around a square of 640 acres. But, at the time I was born, there was no all-weather road to our house. From our house, it was about three-quarters of a mile up a dirt road to the gravel road eventually leading into Grinnell, which was about ten miles away. My mother had to carry a newborn me home from the Grinnell hospital in Iowa mud. She used to talk about that. It was one of her favorite stories. But it really would have been a struggle. She was just out of the hospital, probably in goloshes, carrying a baby, with feet caked in this muck – Iowa loam, which is a mixture of silt and clay and maybe some sand and humus. This is sticky stuff, and your feet get very heavy with the caked-on mud, but it makes Iowa soil great because it holds water so well.
When I was young, I can remember a gravel road being stubbed down to our house from the existing graveled road It was three-quarters of a mile, and there were…let’s see…one, two, three, four, five, six families on that three-quarters of a mile. So that’s rather dense. The neighborhood institutions were scaled to that population distribution. People would go to the local church once or twice a week with their neighbors. The school, which was a one-room schoolhouse, housing 8 grades plus kindergarten, was about a mile and one half away along a dirt road. The closest small town, called Lynnville, was about 4 miles away. Lynnville was on a river that provided power for a grinding mill. Although it had only about 500 people, it also had many services like a grain elevator, lumber yard, markets, a bank, a barber, a set of frozen food lockers and an icehouse, and so on.
Saturday nights for me as a four-year-old kid were exciting. I mean, everybody was there. It seemed very crowded. The town stores, including the soda fountain, were open. In the summertime people would have community events at the park bandstand. My father always imagined that we would be a famous family singing group, and my brother and I used to win talent shows (usually scored with an applause meter) when we were very young, some even at the statewide level. The cuteness factor carried the day, I think. Can you imagine? [both laugh] But anyway, that was a winning combination. What? My wife just said we were like the von Trapps.
My father never lost his love for that kind of music and the family groups doing it. These groups are still a beloved American staple, especially in gospel and roots music. He used to love to go down to Branson, Missouri which thrives as an entertainment center with many family performers still playing similar musical styles. But he passed away many years ago, enjoying playing the music to the end of his life.
Blair, was your high school larger or it was still the one room?
I went to Grinnell High which had about 500 students in 4 grades. But the one-room schoolhouse only had about 16 students in 9 grades… Rural life was changing fast in the late forties and early fifties for many reasons, both economic and social. The one-room schoolhouses were important to the earlier rural lifestyle since they were within walking distance. But it was a challenging environment to teach a modern curriculum. Everyone had cars by then and the roads were becoming all weather, so rural schools were beginning to disappear in the early ‘50s. I was from the last eighth-grade class to graduate in my county, and one of the last in the state. The big town Des Moines Register did an end-of-an-era article, including a striking illustration by their Pulitzer-prize winning cartoonist Frank Miller, probably in my last year (1957) about the disappearance of one room schools in rural Iowa, using my school, called Washington #4 or Hazel Dell, as the example.
My brother and I walked to school along a dirt road, or in good weather might ride a bike with balloon tires. I rode my bike to school on my first day in kindergarten, as a 4-year-old. I continued there for nine years. One teacher, nine grades. You can imagine what it might have been like to be the teacher. I had three siblings, which was not atypical for the local families. The school was just a few families combined, really. The local population was quite stable. Of course, siblings were spread in age. My youngest sibling was in school two years with me, I think. I had an older brother who was one year ahead of me. I had one other boy in my grade all 9 years named Don. The first three years I had another classmate, but the girl, Diane, moved away. I still remember these people more or less as distant family.
I hardly ever had routine recitations once I got to be more than a couple of years into school because the teacher was overloaded. She needed to work directly with a few individuals, so if you could take care of yourself and do the reading and exercises and problems that was it. I was somewhat precocious and could read well when I started school and so on. So, she basically gave me my desk work and left me alone. When I had finished my assignments, I spent a lot of time reading whatever I could find. Early on, there were few books around, so I remember reading all the lower grade reading books for entertainment when I was very young, probably in first grade.
The teacher would try to get a few books from the town library, so at least school had more new books than home. I liked to go to school. I enjoyed the friendships with the other children. I learned the basics (reading, ‘riting, and ‘rithmatic), but I was self-educated at some level. My father was the trustee of the school for a while when I was young. He bought a new encyclopedia for the school with some illustrations in color, and so I read the World Book Encyclopedia a lot, which the teacher always was happy for me to do. So, you could say my education was encyclopedic, I suppose. [both laugh]
Blair, were you always interested in science, even as a young kid?
Well, surely yes. I liked to tinker and was interested in astronomy. I had a small telescope, and so on. I had no scientist role models, nor knew any adult who did anything other than dealing with the basic needs of the rural life that surrounded me. My parents were highly religious Bible Belt Christians. My mother was Dutch Calvinist. She came from a Dutch Calvinist community that emigrated from Holland starting in the mid-19th century. They came from the Bible Belt of the Netherlands. driven out, they would say, by religious persecution from the State Reformed Church. As a group they appear to have been quite wealthy – at least enough so that they were able to buy land for a town and farms and immediately build houses and so on. They came as one large group initially although many others came later as well. Their leader was a rather wealthy fellow named Scholte. They named their town Pella, after the Biblical City of Refuge. I don’t know if you have heard of Pella, Iowa. They house a few major companies including one that makes windows, among many other things.
The town is quite famous by Iowa’s standards, and prosperous too with well-kept farms and homes. It houses a college associated with the Reformed church along with the nationally known firms already noted. It’s a lovely Iowa town—a tourist town too, with a Dutch theme, as you can imagine. I have many Dutch relatives…many, many Dutch relatives. My mother had a large family (6 siblings) which was quite common. I have many first cousins, particularly on my mother’s side. The original Pella settler community expanded outward, building quite a large farming community around Pella which probably extends out 20-30 miles. If you touch a random person within that area, they’re quite likely to have Dutch heritage. There’s still a lot of focus on family life, and many of the children tend to stay around—at least they did when I was young. I think they’ve been moving away more in recent years. Some of my cousins (people my age or younger) left, but a surprisingly large number stayed for such a small place.
Our farm was just outside of the fringe of Dutch settlement and about 25 miles away from Pella, so we were oriented to Grinnell as our “big” town. I ended up going to Grinnell High. Grinnell is a college town with a rather famous college.
Yeah. I know Grinnell.
Grinnell is probably one of the more intellectually oriented towns in Iowa, you might say, because there is a medium-sized liberal arts college in a relatively small town. The population was probably around 7,000 at that time. I had several friends in the high school whose fathers (always fathers) were at the college. When I went to Grinnell High with my farm country background, out of my one-room schoolhouse, it was a bit of a culture and social shock for me, which probably had a bigger impact than adapting to the academics. But I quite enjoyed it. Grinnell High was rigidly tracked at that time, so I had to take a test as a country school graduate so I could be slotted in. I did great in everything except the math placement exam, where, since I had no algebra in country school, they put me in the second class, not the first, for algebra. That changed the next year, as I was really pretty good at math – I mean, not great in a mathematician’s sense, but pretty good in the Einsteinian sense. Of course, I’m not anything like Einstein, but you know that he used to say how bad he was at math?
Physicists are not that bad at math, or they wouldn’t be physicists.
Grinnell High did reasonably well with the sciences. There was a good biology program, and a great chemistry teacher, Mr. Converse, who also taught physics but retired at the end of my junior year. So, when I took physics as a senior, I ended up with a callow fellow who I’ll not name though I remember him well. I think he had one physics course in his entire life and must not have done well. He was very young and fresh out of college. He probably knew some chemistry, but nothing about physics. I had a couple of friends, Don Rafferty, and Bill Ellison. Together we formed what now would be called the nerdy group. We learned little formal physics that year from the physics class, but I learned quite a bit at least qualitatively with my friends about what was fun and exciting about physics. We all ended up with STEM PhDs.
At that time, the Grinnell High math program was stuck in an earlier time and took you all the way to trigonometry and solid geometry in your senior year. That was how fast it went, right? My friends took calculus at Grinnell College, but my parents knew nothing about doing that (and money was an issue too) so I was a year behind many scientifically oriented kids in math, going into college.
Blair, did you ever think about going farther away for school, or that was not on your horizon?
I had no money. Locally no one would have thought of my family as poor, but we had little cash. At that time, family farming was not primarily a means for making money. It was a way of life — a way to keep a family going, housed, fed, and clothed. Everyone in the family, including the kids, worked on the farm. This was at a time of a large change from a mostly subsistence economy to a cash economy. Farms of our size and style could not generate enough cash to provide for a large family living in the modern world. A few years later, my farther ended up taking a job at a local manufacturing firm for that very reason, although he and my mother continued to live on the farm until he retired. The other problem was that I had no role models. I knew little about applying to college. I had friends who were in college but never thought to talk with them about applying in any systematic way. I had several friends whose fathers were college professors, but I don’t think young people talked about college very much. Not like today.
So, my horizon was limited. I thought I had two options. I never thought about going out of state. It was more than a bridge too far. One option would have been to go to Iowa State or the University of Iowa. However, I didn’t understand how to make the finances work out. I thought it would be very hard to pay for living away from home. Or I could go to Grinnell College and live at home for free. Grinnell was thought to be maybe the best college in Iowa. I gave little thought to the differences between Liberal Arts Colleges and Universities. I just thought, “Well, I’ll go to Grinnell.” Yeah…Go Pioneers!
My wife was just saying I worked my way through. I was a good student coming in and had a good scholarship. I worked in the post office every morning sorting mail, and later in the library and the physics labs as part of my student aid package. In summer, I tried to make and save as much money as I could. I had a farm homestead painting business one summer, and surveyed land for the US Ag. department another. I maintained cemeteries locally to make a little more cash, which turns out to be a problem for a college student because of timing. The big season for cemeteries is Memorial Day. You don’t think about cemeteries having seasons, but they do. People want to come on Memorial Day and it should look perfect. Well, when are finals?
So, it’s a horrible job to try to balance with being a dedicated college student. I was living at home, which meant that every morning I drove the 10 miles up to college, and I’d stay there all day. I’d go to work in the post office in the early morning – say at 5:00 A.M. I would generally opt for early classes if I could get them, and then I would sit in the library or the science library and do my homework. I was a singer and I used to sing in madrigals and chorus and the college chapel choir which allowed me to develop some college friendships, and to decouple from the local church in a way with which my parents could be content.
Blair, what was your prompt for getting involved with physics? Was it a course? Was it a professor?
Well, I had always been fascinated by the big questions that physics tries to address as well as practical matters of how the world works. In the distant past, the social consensus was probably that many of these questions were in the realm of religion or philosophy and not science. Many people still think so. Those who retain a fundamentalist religious orientation, like my parents or someone who stays within my background, tend to think they’ve figured out the way the world and universe work, and see these questions and their answers from a religious perspective.
As a kid I was something of an expert on Bible stories since books of these were one of the primary bits of literature in our house. I have been told by my sister – I don’t remember it all that clearly – but she claims that there was a trivia contest on Bible stories that I had with the minister when I was young, and I beat him in front of the rest of the church. I don’t know how hard he was trying but I had learned these Bible stories by heart and believed them. [augh] Anyway, take the story of how we got rainbows. You know the Bible story. God put them as a sign in the heavens that He would take care of His people…….
That there would never be a flood again.
Right. Exactly. That always seemed odd to me, even when I was a young kid, as an explanation for a rainbow. I knew that if you shoot water from a hose into the air on a sunny day, you’d see a rainbow. It seemed a little odd to attribute that to God, or at least to his direct intervention. I remember reading one day in school in my World Book Encyclopedia the section on rainbows. I used to go pick a letter for the day, so I was reading the R’s that day…. I ran across the section on rainbows, and I thought, “Well, that’s interesting. That’s perfectly straightforward, no magic required…” Not so simple but a straightforward explanation using basic physics.
I asked my dad about it. I said, “You know, the story that somehow this is God’s sign to his people seems to be perfectly easily explainable as a physical phenomenon.” I wouldn’t have said it that way, of course, but that was the question. He hemmed and hawed and them came up with something like, “Well, maybe that’s the way God does it for something like that,” which didn’t seem a good answer for the question I was really asking. A purpose driven religious type explanation for a physical event didn’t seem necessary or very helpful.
Anyway, that long story shows something about the way I thought even then. I couldn’t have been more than seven or eight at the time. Although my parents were proud of me then and later, I know that my questions and thought processes were troublesome, challenging, and incommensurate with the way they thought about the world.
Blair, after you graduated, was the draft for the Vietnam War something you had to contend with?
Yes, it was. I graduated in ’66, so the Vietnam War was heating up and the army was looking for too many “good men”. Anti-war protests and supporting the civil rights struggle were large parts of the social fabric on the liberal and socially aware Grinnell campus during all my years at college, but by ’66 the war’s impact was growing, and an expanded draft was impacting students, so the protests were more frequent and pronounced. I was doing research at Argonne Labs the summer of ’66, and I got a notice from my local draft board to get an exam. I wasn’t yet being inducted but they wanted to know how many candidates were fit for the military draft as it ramped up – somewhat concerning as you can imagine. I went into downtown Chicago for the exam, which was an experience. At that time, the part of Chicago where the draft board exams were taking place was a bit sketchy. Walking there from the train and then wandering around the exam room in your underwear with your little paper bag of valuables surrounded by a crew of tough looking inner-city dudes was more than a little intimidating. [laugh] It was quite an interesting experience – like the infamous curse.
But at the last testing station which checked vision, I was found to be too nearsighted which was noted as a DD on my form. I can still remember the line from the captain as I checked out after I had put my clothes on at the end of the exam. He looked at the DD and then looked me straight in the eye and said, “I am sorry to inform you that you are unable to serve your country.” I was relieved. It turns out that those entering graduate school in ’66 retained their deferments till the end of their PhD so usually avoided the draft anyway.
I had a graduate school roommate who was a chemistry major from Texas — very much a Texan when he came to Stanford. He came out of ROTC and had already been inducted into the army as a second lieutenant. You know what happened in San Francisco in ’67 – the Summer of Love? He met new and different people with a different outlook, and different politics, and his life was transformed. He spent the next three or so years trying to figure out a way to be un-inducted.
Eventually and unfortunately, but fortunately in a narrow sense, he and I were in a car accident together, right after his wedding. He and his wife, his wife’s best girlfriend, and I were going to Yosemite to camp — this was for an era-appropriate honeymoon — when we were sideswiped outside of Mariposa, CA by a high school student who had taken her parents’ car without their permission to drive with her friend to Yosemite for the day. She was about 16 years old. I was in the hospital for quite some time, critically injured. My friend was less seriously injured, but that injury eventually led to his honorable discharge late in his graduate career.
Blair, at what point during your undergraduate education did you determine that you wanted to pursue physics in graduate school? Was it early on?
Well, I really liked the physical sciences, especially physics, despite my high school class. Grinnell College was, and is, a liberal arts college, and, at least as an institution, was devoted to a view that higher education should be organized around the study and analysis of the Western canon. But there were differing views within the college and many pressures from students and outsiders too, I suppose. Grinnell went through frequent cycles of trying to reinvent and reconfigure the curriculum to make everyone a renaissance, liberal-arts-educated person while also allowing time for students with other goals to develop some expertise in more modern areas of study. At the time I was there, there were many requirements for graduation eating up much of the curriculum space, and in particular, a student was supposed to devote a minimum of 50% of the first 2 to 2.5 years’ study to the canon, taught in a very structured way. There was even a major exam in the liberal arts between the semesters of the junior year. For someone interested in a field like physics, where knowledge and technical mastery must be accumulated over time, that sets up a curriculum challenge, since it makes it hard to get started early enough to be able to reach a high level in the major in time to graduate.
But I really didn’t see that so clearly. I thought something more like, “Well, I enjoy physics. I want to take something I enjoy while I’m struggling with ancient philosophy.” It turns out that I do enjoy many areas of the liberal arts, especially history, which I still read for pleasure. Anyway, I signed up for physics my freshman year, violating the college’s structured system which fortunately went unnoticed for some time…and, of course, I needed to take calculus too. The physics professor for that class was a senior guy who was primarily interested in physics from a historical perspective. He would have loved to have taught Newtonian physics directly from the Principia, I think. He loved Victorian era physics instruments and had a great collection. He’d bring them into class for demonstrations. I really liked him, and the course. It stimulated my interest in the history of physics and the way in which physics develops. I learned quite a bit about classical physics, conceptually, although the course was somewhat lacking technically, and I had a lot of fun in the labs, where I ended up as a TA during my last two years. So that was the beginning. There were only a few physics majors. The department had four professors at that time.
I started thinking about going to graduate school in physics in my third year. We had a young lecturer who was probably in his mid-twenties. He was taking a year off from the University of Illinois to replace someone on leave. He was not so much older than I and was in graduate school, so it was useful to get to know him and learn his experiences and perspectives. Then the next year, which was my senior year, Grinnell hired a young assistant professor named Bruce Thomas, a theorist just out of Cornell. I think he’s retired now, but he was at Carleton most of his life. After getting to know him, I realized, “Well, graduate school looks like a good thing to do next.” I didn’t have a very good plan about what I would do professionally otherwise, so going to graduate school seemed like a good option.
I don’t recall many focused discussions about grad school, but we would chat informally about various things such as what schools were like, his experiences, what he thought you should look for in a graduate school, things like that. Probably one of the advantages of a small department like the one at Grinnell is getting to know professors informally as individuals. Regarding intellectual matters, I was just taking standard physics courses and enjoying them for the most part – though some were better than others. I particularly enjoyed a great course Bruce Thomas taught on wave mechanics, which was mathematically sophisticated – a graduate-level approach. So that was where I was at that time.
Where did you get the idea that coming from a small liberal arts college that a place like Stanford was within range for you?
Well, I don’t know. I never had the idea that it wasn’t. Grinnell produces an enormous number of PhDs per undergraduate – maybe one of the top 10 PhD-producing colleges in the country – and sends its graduates everywhere. A standard thing you do coming out of Grinnell is get a PhD or go to professional school. A cynic might ask, “What else can you do coming out of a liberal arts college?”
It wasn’t considered to be out of your league to go to Harvard or Stanford or some other major research institution like that. I mean, Bruce Thomas was a Cornell PhD, and a Grinnell graduate. Went to Harvard first and hated it. But he liked Cornell. So, one of the reasons I didn’t apply to Harvard was because Bruce Thomas hated it.
Stanford was not very high on most people’s radar at Grinnell. I went to Stanford for a couple of reasons, one of which was that it had the new accelerator at SLAC, and I’d gotten interested in particle physics. SLAC was projected to be just turning on when I got there. The other thing was that Stanford had a reputation for being a good place to be a student. I mean, at a time when people didn’t speak like this, the experience was thought to be collegial rather than competitive. That was just what I understood informally. I don’t exactly know how I came to think this, but it had this reputation as a great department, even being fun in the sun, not super intense like Caltech, for example, right? I mean, Caltech’s rep was as super intense, and competitive and so on, while Stanford’s was laid back California, somehow, but with a brand-new world class accelerator and a great physics department. Somehow or other, all these things together made it seem attractive.
I was an NSF Graduate Fellow, so I had my own money, and could go nearly wherever I wanted, as it turned out. I had done well as an undergrad and Grinnell professors were used to writing letters for people going to graduate schools. Their contacts weren’t the best, especially as far afield as Stanford. They had more local and some east coast contacts, so going to Stanford was a little bit of a stretch, I suppose, but that’s what I did.
Blair, relative to your other fellow graduate students who may have come from bigger schools, how well-prepared did you feel?
I was roughly a semester to a year behind somebody coming in from a tippy-top undergraduate program like MIT or Oxford. I had a friend at Stanford who was from Oxford. He had an intense physics background. Of course, he’d done physics for three years rather than studying Aristotle and friends some of the time, which was more in Grinnell’s wheelhouse, but I found myself a little bit behind such folks. But there were several others in my cohort whose level of preparation was closer to mine. Practically, those especially well-prepared folks had already taken most of the nominally first year Stanford graduate courses, right?
So, they were taking advanced quantum field theory and so on the first year, while I was taking Schiff’s quantum mechanics, from Schiff in this case, and I was also taking Jackson’s electrodynamics from Frank Von Hippel, and a great course in Mathematical Physics from Al Fetter and so on. It wasn’t all bad to be a bit behind some of the others on advanced course work, but it took me longer to find adequate time to do much research. On the other hand, I had already done some experimental particle physics by that time. I’d been at Argonne working for Malcolm Derrick. I’d been at Iowa State working at a small laboratory doing some really nuts and bolts work on a scattering experiment at a small electron synchrotron, and I had also done quite a bit of lab work while at Grinnell.
I was pretty good with my hands. Growing up on a farm, you develop practical skills. I’ve known other particle physicists, experimentalists particularly, who grew up on farms, and the technical side of experimental physics is natural for them. That may explain something about why my career developed the way it did. There are a lot of people who are better mathematicians than I. I’m not the best person to explain theoretical ideas either, but I’m pretty good at detector physics. [laugh]
Did you find yourself gravitating toward SLAC?
Well, my first year of course work at Stanford was very intense. I had only a few hours a week to do research until the summer. I started working at SLAC for Dick Taylor who was heading up Group A, which was the electron scattering group that was commissioning the end station A spectrometers to do the classic single particle electron scattering experiments. Pief Panofsky, who was founding director of the laboratory, had succeeded in including this large scattering detector complex as part of the original SLAC construction project. I think he hoped to be leading the first experiments there, but of course, lab management was more than a full-time job. I never actually asked him about that, and it’s a little late to ask now. [laugh]
There were these three big detectors — for that time — enormous beasts which were mounted on railroad tracks and pivoted around the target point. They were massive instruments on any scale, even today. As big as a freight train engine. The experimental idea was a redo of the old Rutherford idea that you bounce, in this case, electrons off protons or neutrons and you see how the electrons scatter to probe the target — one of the oldest, most basic of particle physics experiments — but at this newly opened, high-energy scale.
So as the beam was starting to come down to the experiments in 1967, the first thing was to calibrate the spectrometers before taking real data. We put a magnet where the target was later to be and steered the beam around into the spectrometers to calibrate them. Where does a particle of a given momentum from the target end up in the detectors? That was the first experimental thing I ever did with the SLAC beam, and it would have been my first year, probably in late spring. The Lepton Photon Conference was held at SLAC in the summer of ’67, which was the first one I ever went to. It was very exciting for me because the first new data at 20 GeV, or at least close to 20 GeV — I don’t remember exactly — from electrons scattering off protons from End Station A was presented, and I had a role in it, however small.
For people who don’t know the way Stanford is structured, SLAC is a department of the university. It has its own faculty independent of the physics department but does not independently admit graduate students. Graduate students are essential to a university research enterprise, and, as SLAC was starting from scratch, the department wanted a rather large cohort, which had to come from campus. So, SLAC made attractive offers to the Stanford grad students for my first summer. They paid you as a full-time research worker rather than one-half time as a student. That was very attractive for students, as you can imagine, and a fair number ended up enjoying the environment and staying on. Many had probably come to Stanford with that intention anyway. However, to no one’s surprise, this tactic was widely panned by faculty from campus, and only lasted for one year. It took some time for all the issues of coexistence of the SLAC department with the campus to be fully worked out.
Blair, what was some of the excitement in theory at SLAC when you were a graduate student?
Oh. I’m probably not the best person to ask because as an experimental graduate student, there was so much going on just building and running an experiment, and at a personal level developing the requisite skills to be useful as an experimentalist, that it was hard to find time to keep up with all the theory. There was always a lot of theoretical chatter but much of it seems to have ultimately been of little importance in retrospect. When SLAC first delivered beam, people were excited to be able to measure high energy electrons scattering off protons. If you see a high probability of big changes in the vector momentum, this means you are hitting hard heavy objects. Of course, Rutherford discovered the essential nature of the nuclear atom this way.
Now this kind of experiment was being repeated at SLAC to probe the nature of the proton. This ended up being where a lot of the early excitement was at SLAC in both experiment and theory, as hard scattering centers (quarks and/or partons) in the proton were demonstrated with Nobel prizes all around eventually. But it was hard to understand while it was going on. James Bjorken (bj) was a great theorist at SLAC and had a model that predicted well defined so-called scaling behavior and explained all the results beautifully. However, it was couched in the abstract language of current algebra, and I never developed much intuitive feel for his approach. Shortly thereafter Feynman developed a much more intuitive model (the parton model). Feynman used to show up at SLAC to snoop out the latest experimental results. As a grad student this could be fun because he liked to slum it with the grad students at lunch. He would collect some of the local students around the group A student office area occasionally and we would go and spend some time chatting with the great man in his playful way, mostly about physics, under the oaks of the SLAC cafeteria. Quite fun to see how his mind worked.
As a beginning grad student, I had started working with SLAC group A that ended up doing this great set of experiments along with a group from MIT, but I ended up going in another physics direction when I decided on my thesis research. This may be taken to show how poor my physics judgement was, but, of course, all the SLAC experiments were just starting in 1968, and I saw opportunities with other groups.
A major theoretical theme of HEP at the time was the evolving understanding of hadron classification and structure, and what that meant for possible constituents of the hadrons. Gell-Mann and others had invented quite successful schemes (especially the eightfold way) to group and classify both mesons and baryons using SU(3) group algebras by the early ’60s, although to me this seemed very abstract and hard to connect to the physical world. By perhaps the mid-’60s, Gell-Mann and Zweig realized that this scheme could be built up of 3 elementary fermions that Gell-Mann called quarks and Zweig called aces. This was the beginning of a much simpler and more intuitive quark model picture for hadrons, though it took 10 years or more for this way of looking at hadrons to be taken completely seriously.
But by the mid to late ’70s, it was a part of the Standard Model. The quark model implies a complete spectrum of hadrons made up of the different quarks, and predicts their quantum properties and spins, and in a more approximate way (at least at that time) their masses. So further exploration of the hadron spectrum to test these ideas was a crucial area for experimental research. I thought this looked promising and did my thesis on an early experiment studying the ππ spectrum and the hadronic behavior of the photon. I continued to do spectroscopy experiments for much of my career.
Blair, who ended up being your graduate advisor?
Burt Richter. I got to know Burt and his group while working on the end station A spectrometers. Groups A and C also had offices in the same wing of the central lab annex at SLAC, so we went to lunch together and so on. Burt was a great experimentalist and driven by data. I enjoyed his somewhat crusty take on theory, his approach to doing experiments, and learned a lot from watching his management style. He was interested in machines and wanted to build a colliding ring (e+e-) at SLAC. His group had several machine physicists as well as experimentalists. He came to Stanford originally to probe QED in deep scattering experiments with the HEPL machine on campus. He then worked with Gerry O’Neil from Princeton to build the first electron-electron collider rings which were shaped like a figure-eight on campus using HEPL as the injector, to probe QED at even larger momentum transfers.
Eventually, he realized the virtues of colliding electrons and positrons, and proposed a series of machines, of ever decreasing cost, to be constructed at SLAC, trying to obtain approval from the funding agency, the AEC. Eventually, around 1970, Panofsky received agency approval to build the machine out of funds already allocated for experimental facilities at SLAC. After this first machine, SPEAR, was constructed on a flat concrete lot without any new project funds, and was such a resounding success, e+e- colliders became the major theme of SLAC research for the next 35 years until particle physics experimentation stopped in 2008 at the end of BaBar.
What was Burt like as an advisor? Did he work closely with you? Was he involved in your research?
Early on, we interacted regularly, but this changed rather abruptly once approval came for SPEAR construction around 1970. He became a somewhat distant presence after that. Burt had very few graduate students, and I was likely his only student during his long career at SLAC. He certainly read my thesis carefully and so on, and we met about it from time to time, but he had a very challenging job leading the building of SPEAR and Mark I by that time. But several staff physicists from Group C remained with my thesis experiment. Group C had also been collaborating with David Leith’s Group B, so I spent more time working directly with David and other members of his group. Later, when I came back in ’75, I joined David’s experimental group.
One of the interesting things that’s kind of hard to capture is that SLAC at that time was populated by a lot of quite young people, not only young by present standards, but even young by normal standards of academia at the time. Many people who were ten years older than me were senior experimentalists. Several group leaders were young. David Leith was seven years older than me. Burt was probably 13 years older. So, these were people in their thirties when I was in my twenties.
In those early days, the beam would come down the accelerator and it would be directed into several beamlines. The amount of beam you got, to some extent, depended on how well you negotiated, or some thought, how loudly you yelled. People would communicate in those days via an intercom, and some were prone to switching down all slots at one time and yelling into the mic which might provoke a response. So, there was a certain chaos about the process. You know, I was an Iowa boy. I was quiet. You’re supposed to keep your opinions to yourself and speak calmly, and a lot of these people were from the Northeast — let’s say New York. [both laugh]
So that was an interesting experience for me. It was physics as a sport, carried on with a lot of energy. But it was exciting, too, right? Everyone thought their experiment would change the world, and there were a lot of good people throughout the organization, both staff and faculty. There were also a lot of young post-docs and young staff on leave from around the world – people who became pillars of the field later. You know, at one time, many of the major labs or collaborations in Europe were being run by one of these SLAC initiates, you might call them, who were there when I was there. So, for me it was an incredible social experience in terms of getting to know people who later had an impact on the field in a major way. The fact that I was at SLAC at a time when there was so much excitement played a big role in my career.
Blair, what were some of the main findings of your thesis research?
In the early days of SLAC operations, Burt and Group C were interested in understanding how the photon interacted with hadrons at high energies and had done some early experiments with the end station A spectrometers measuring single pion photoproduction with a photon beam produced by Bremsstrahlung off a beam production point upstream of the targeting point for the spectrometers. Photons are the carrier of the electromagnetic force and have a well-defined interaction with charge, but they also act like hadrons. It had been known for some time that photons interact much more strongly with protons than would be expected from the charge component alone, and indeed have a similar interaction strength with zero charge neutrons. The leading explanation in the 1960’s for this was called vector meson dominance (VMD) and had been invented by Sakurai by 1960 to explain nucleon form factor data from the 1950’s. This was quite prescient as the vector mesons had yet to be observed. VMD claimed that the entire hadronic electromagnetic current is just a linear combination of the neutral vector meson fields, so clear relationships were expected between rho production by pions and single pion photon production.
My thesis experiment (E41) aimed to study these expected relationships in a newly built multi-particle forward spectrometer that sat in a 15 GeV/c meson beam in the C line at SLAC. One other point of interest was the possible observation of the interference effect between the rho and omega mesons. The possibility that they could mix with one another through the electromagnetic interaction was first suggested by Sheldon Glashow shortly after the discovery of these vector mesons in 1961. He pointed out that these two vector mesons had all the same quantum numbers from the electromagnetic perspective, so that electromagnetically induced transitions between them should occur. Moreover, he predicted how the effect might be greatly enhanced by the near degeneracy between the masses of the two mesons.
In one sense the results described in my thesis were big successes. We saw a very clean signal for rho-omega interference and were able to observe its production properties. The rho production results did show structure like the single pion photoproduction data as expected. But there were discrepancies between the data and the model which could not be easily reconciled. This is quite typical for VMD predictions. It often provides a good description of many features for some data, but then will fail when applied to other data. It can often be useful to this day phenomenologically but seems uncompelling as an explanation at a more basic level. When looked at in light of the Standard Model, it’s not so much that it’s wrong as that it provides little insight into understanding the deeper features of particle physics.
One other piece of physics that came from E41 was not included in my thesis, but it was important for my future career because I learned about a technique for doing a partial wave analysis (PWA) of π+ π- scattering via extrapolation to the pion pole, which is about the only way you can imagine measuring such a fundamental process on such short-lived particles as pions, and is the gateway to studying the natural spin-parity meson ladder.
E41 had limited acceptance for the pion pairs, so results were limited to low mass. But later, I worked with a full acceptance spectrometer called LASS doing Kπ scattering to determine the spin and level structures of the strange mesons. This is interesting not only for comparing with the quark model, but also as basic data needed for what I call hadronic engineering. That is, to understand an experiment, you may need precise experimental results from another channel that’s related. One recent example is that for doing certain kinds of B physics in BaBar, you need to know the cross-sections for the Kπ S-wave component as measured by LASS.
As another very recent example, you may have heard about the g-2 results from Fermilab, right?
Okay. So, the standard way of calculating g-2 and relating it to theory requires that you do hadronic subtractions. How do you do hadronic subtractions? Well, the standard method is that you measure all the hadronic cross-sections produced by e+e- which allows you to subtract off the hadronic piece that you don’t know how to calculate directly in QCD. Now all these cross sections must be measured, which is challenging, but the largest contributions have been measured to a fair level of precision at e+e- colliders for decades.
We were able to improve many of these measurements in BaBar. We measured all these cross-sections via the initial state radiation method to quite high precision, and so have had a part in the widely publicized comparison between the recent g-2 measurements and the standard model calculations. Now, as an aside, I worry that the recent hype about g-2 may be a bit unwarranted because the standard method for calculating the hadronic corrections appears to be giving a different result than the most recent attempt to calculate it more directly using QCD on the lattice, and if you use the lattice calculations, the g-2 data and the standard model are in much closer agreement.
Blair, how did the opportunity at CERN come up after you defended?
Let’s see. I’m trying to remember exactly how that happened. SLAC was a very international place, and I knew several people who had worked at CERN. Paul Baillon was a collaborator on my thesis experiment, and a senior CERN staff physicist. David Leith had been on staff at CERN. My good friend Fatin Bulos was planning to take a sabbatical there, and Harvey Lynch joined my thesis experiment from CERN. So, knowing people with CERN connections was common at SLAC, and they were uniformly positive about working there. I don’t know that there was a CERN Fellows program yet that would accept US post-docs in the way it does now. But there was a senior British physicist named Geoff Manning, whom David and Fatin knew, who I believe was deputy director and a group leader at the Rutherford Lab at that time, who led a British collaboration on an ISR experiment.
At that time, the new ISR collider at CERN was the first hadron collider in the world and was providing by far the highest energy collisions available, to above 60 GeV. I thought that to be sitting at the energy frontier on a new kind of machine would be exciting, although it wasn’t clear to me just what might be seen that would be revolutionary. As it turned out, very little exciting frontier physics came from the detectors at that machine, but it was a major step into the collider beam era for hadron machines, and the machine where stochastic cooling was first tested, an invention which won Simon van der Meer the Nobel Prize. In any case, I applied for a Research Associate position at the Rutherford Lab to work at CERN. But I believe that this post had been filled by then, so I was offered a CERN posting on an experiment at the Proton Synchrotron (PS) with Chris Damerell’s group.
When it came time to decide about my post-doc, I had a few options, but the right choice was unclear. I had some positive advice from friends that working at CERN would be a great opportunity to grow and would give me insight into the world of hadron machines, which were quite different from the SLAC linear electron machine. And it would be culturally enriching, both scientifically and personally. I finally decided that I should do it.
So, I ended up going to the Rutherford Lab to work at CERN on an experiment studying line-reversed reactions. I enjoyed the experiment a lot. It was a strong group technically. I became friends with many new people and enjoyed the CERN scientific community and the overall CERN lifestyle, which is great for postdocs. Geneva is beautiful and I loved it and the Alps. I enjoyed my time living in Oxford and working at Rutherford too. I spent some time working on a new proposed 14 GeV electron-positron (and maybe even proton) collider (EPIC) while in England, though it never was funded. And we did a pretty experiment at CERN. But scientifically it was from the older school of HEP, which largely became passé after the November revolution. I did work on several novel sub-detectors, including Cherenkov and tracking detectors, and wrote the reconstruction software, all of which ended up being relevant at SLAC when I came back in 1975.
Overall, it was a good experience at CERN? You enjoyed it?
Very much so. To this day I love going to CERN. In addition to its scientific breadth and broad technical competence, it has a certain collegial ambience in the living quarters and the cafeterias, and many scientific friends and colleagues to meet with again. It is so large that everyone comes by sooner or later. It is the best place in HEP to randomly find a colleague whom you know or become acquainted with someone whom you don’t know. There is no other place in HEP quite like it in terms of sociological factors associated with doing particle physics, though many labs have tried to duplicate some of the magic. For many years SLAC had some of the same flavor, due to its attraction for international scientists, but it was quite a bit smaller, and it is hard to duplicate the CERN croissants, although the coffee has improved at SLAC in recent decades. But SLAC did have a lot of people coming through and you became acquainted.
Did you have a job waiting for you back at SLAC the whole time, or you were applying openly on the job market and SLAC had something for you?
I had been a research associate at Rutherford Lab, but living and working in Geneva for most of three years. I had been offered a path to a permanent position at the Rutherford, but I wasn’t enthusiastic about the scientific opportunities of the Rutherford experiments at CERN at the time, nor the probable need to switch to a commuting lifestyle if I were on staff at the Rutherford. This was before the November revolution in 1974, but I thought the scientific options at SLAC were better. And “life” played the biggest role. I had been in a long-distance relationship all the time I was in Europe with the fantastic woman who is now my wife, and we needed to find a way to be together. She was a graduate student at Stanford. So, I was lucky to have the chance to tie several threads together. I talked to my friends and former colleagues at SLAC and found that David Leith had a job available for me. So, I came back to SLAC and joined Group B to work on the new LASS spectrometer.
What was Group B doing at that point?
David’s group had built a succession of increasingly sophisticated electronic spectrometers, which often retained both design and physical elements of the earlier detector within the next one and were scaled up in size. In the early 70’s, they had built a larger version of the detector I had used for my thesis, and a new rf-separated kaon beam to do a series of high-quality kaon spectroscopy experiments. But it was a forward spectrometer, so it had limited mass acceptance, and could only handle low multiplicity events. The follow-on to that was to add a very powerful vertex detector to that downstream detector, making a full acceptance device – an electronic bubble chamber. It was interesting physics with a great new detector (LASS) that I was quite familiar with, but the spectrometer wasn’t finished by any means. Some components, like the tracking and particle ID, weren’t working very well. So, I came back to help get the components working and assembled, and to do the science with that spectrometer.
LASS was an interesting experiment technologically. It was a full acceptance device that could track all particles from each event with sufficient energy to emerge from the target. It had particle ID for the higher energy tracks. The tracking chambers were somewhat old-fashioned spark chambers at that time, but there was a very sophisticated direct online connection to a mainframe which allowed full event reconstruction in real time with event visualization. It also had a large 1.5T superconducting solenoid, which almost no one else had in those days. Of course, such magnets became standard in spectrometers at colliding beam accelerators later, but at that time it was unique. That magnet is still around! It’s being used by GlueX at the Jefferson Lab and is approaching 50 years old.
Were you on the young side when you were named spokesman of the spectroscopy program?
In other words, is that generally a more senior position?
Well, it would be today, but you must realize that collaborations were much smaller then and people were younger too. The field has aged. For the first kaon spectroscopy run, I was co-spokesman with Bob Carnegie, who had been an assistant professor at SLAC, starting around my last year as a student. He was probably about five years older than me, something like that. He had become a Professor at Carleton University in Ottawa, Canada. We had some other Canadian collaborators like Penny Estabrooks. She had started as a particle physics phenomenologist who co-invented a new approach to PWA (partial wave analysis) at the pion pole, working with Alan Martin.
After a few years, our Canadian friends left the kaon spectroscopy experiment and went to CERN to work at LEP. We attracted Japanese collaborators – a large group from Nagoya University. We upgraded the spectrometer, especially the tracking chambers in the vertex detector and the large acceptance Cherenkov counter, and took a massive data set with the new detector. We had several Stanford graduate students plus a large cohort of Nagoya graduate students, who did theses on the experiment and did lots of the analysis, which was very important for analyzing all the data and providing the physics breadth of the experiment.
Did you have any idea that the experiment would last as long as it did, 14 years?
I’m not sure we thought about it, at least not at first. In some ways, this experiment with its massive data set was more like something from a modern detector collaboration (think BaBar or CDF), than like its peers at the time. We needed to solve many of the same problems that major programs solve today, but with less modern tools. Some of our solutions remain relevant today. One difference though is that we had many fewer people – probably one or even two orders of magnitude smaller collaboration. We did an initial trial experiment in the late 1970’s gathering a modest sized data set and demonstrating the high quality of physics we could produce.
However, we had decided we needed to make major upgrades to the hardware and analysis software before the main run, and as we scaled up the data set by an order of magnitude the time scale for reconstruction expanded. This was the era of the mainframe, and we didn’t have access to enough computing horsepower at the computing center locally to analyze it all. Eventually, Nagoya provided a significant part of the computing with a Fujitsu mainframe, and a colleague from Group B, Paul Kunz, built one of the first analysis farms in HEP (the 168/E and, later, 3081/E), which emulated the IBM mainframe instruction set to do the SLAC part of the reconstruction.
Since we were a small group, we were doing much of this work sequentially: starting with the upgrades, through data taking, developing and testing software, getting the computing hardware fully developed and in place, and running reconstruction and simulation, until we were in a place where we could do science with all this data. Getting the physics analyses finished and published took us some time too. But long before this time, the experiment itself was mothballed, running costs were gone, and we were publishing physics the entire time, so the physics program continued for all those years. Of course, we were building detectors, doing R&D and collaborating on other experiments as well during much of the later part of that period. Nearly all the LASS group, including my Group B and Nagoya colleagues, joined SLD, where many of us developed the CRID detector, took data, and did physics at the Z.
As an aside, in terms of my career in Group B, I had been deputy head of Group B for much of this period. Around 1990, David Leith, who had led Group B until then, became Research Director for SLAC, so I led Group B from then until the group structure at SLAC was phased out about two decades later.
Blair, by 1990, what were some of the major accomplishments of the spectroscopy program?
Well, we did an exhaustive job on strange and strangeonium meson spectroscopy. To this day, the strange meson sector remains arguably the cleanest and most complete light quark meson spectrum known – including the cleanest example of the spin ladder. You can look it up in the PDG. Many of the states observed overlap, so they can interfere, and can be challenging to disentangle, but by measuring the levels of all the expected states, we were able to show that the q-barq triplet splittings are small. There seem to be some extra resonances too, especially scalars, that are most likely not q-barq. Such entities are expected in QCD, of course, and have been a source of some confusion in the light quark sector and elsewhere for years. And there is quite strong recent evidence for non q-barq resonances in charmonium. We also discovered some previously unobserved baryons in LASS with strange quarks in them, including an excited Omega-star.
In a sense, all of this is archival and part of the knowledge base of the field. The q-barq spectrum looks just like you expect from QCD and the quark model in terms of the level structure. To date, it hasn’t been feasible to calculate the masses directly from QCD, though the effective potential models do reasonably well. But when people become able to calculate levels directly from QCD, which I believe is getting closer every year, there are a lot of data for comparison there already.
Historically, the timing of this program in the LASS spectrometer is interesting. When I came back to SLAC in early ’75, LASS was still being built. It had roots in the spectrometer that I used for my thesis, but the downstream part had already grown through one complete iteration, and then a beautiful vertex spectrometer was being added upstream. I still think LASS is probably the best design approach for a medium energy multiparticle spectrometer to this day and the closest device there was in that classic period to an electronic bubble chamber. It could measure all the particles that emerge from the target and reconstruct the complete event. It was a fantastic spectrometer.
The question was, in early 1975, what were we going to do with it? Charmonium had just been observed, naked charm was yet to be observed. It seemed clear that physics had changed in November 1974, right? Should we be doing this old fashioned, quasi-light q-barq meson experiment as planned while the rest of the world was so excited to be chasing after mesons made of one or more new heavy quarks?
David and I had an idea that we could put a photon beam into the device and produce charmonium – taking our cue from our old friend VMD. But LASS is a big detector with an associated cryo plant – quite literally the immovable object. So, we had to try to bring the photon beam to it. The only feasible photon beam seemed to be a bremsstrahlung beam, which has a rapidly rising low energy tail. After thinking about dealing with that for a while, I realized that a basic feature of SLAC, the duty factor, made that a deal breaker too. [laugh] Does that ring a bell with you?
A duty factor is the time the beam is on compared to the total time, if you will. SLAC has a horrible duty factor because the electromagnetic energy that accelerates the beam heats the cavities so the power must be off most of the time to keep things from melting. So, the typical duty factor is a part or two in 10⁻⁴. To get the overall photon beam intensity needed to make an adequate number of charmonium mesons, the intensity would need to be quite high during the short periods it is on. But then the total number of tracks in the spectrometer produced from the low energy tail overwhelms the tracking system. It’s a nightmarish version of the problem that the LHC experiments have solved for their experiments now but was intractable at that time.
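The arithmetic behind this deal breaker can be sketched in a few lines. The duty factor below matches the value quoted above; the target rate is an arbitrary illustrative assumption, not a measured SLAC parameter.

```python
# Sketch of the duty-factor problem: to reach a given average rate,
# a pulsed machine must run at a much higher instantaneous rate.
# The target rate is an illustrative assumption.

duty_factor = 2e-4        # beam-on fraction of total time ("a part or two in 10^-4")
avg_rate_needed = 1.0e3   # desired average photon-interaction rate (arbitrary units)

# The instantaneous rate while the beam is actually on must exceed
# the average by the inverse of the duty factor:
instantaneous_rate = avg_rate_needed / duty_factor

print(f"instantaneous rate must be {instantaneous_rate / avg_rate_needed:.0f}x the average")
# With a 2e-4 duty factor, tracks from the low-energy tail pile up
# thousands of times faster during the spill than a continuous
# machine would require, swamping the tracking system.
```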
So, in the end, we decided that the best thing we could do was to use LASS for its original intended purpose, collecting the world’s best data sample to look at strange and strangeonium mesons. It was a great experiment. But it wasn’t fashionable, nor very exciting to the community at a time when people’s imaginations had been caught up by the new heavy quarks. In fact, the basic spectroscopy we were doing was quite similar to what was being done in the charmed sector in terms of probing QCD and helped our evolving understanding of the Standard Model. We are proud of LASS and the quality of the physics we produced. At the technical level, LASS, with its enormous data sets, had to solve many challenging technical issues with tracking, DAQ, data storage, etc. that presage similar problems in modern experiments to this day. We had to invent our own computer farms for reconstruction and simulation and utilize improved PWA methodologies, some of which were invented by members of our group.
I had three graduate students who did theses on the natural spin-parity ladder via Kπ scattering. This production approach is the primary experimental handle available to investigate the orbital spin ladder in spectroscopy up to high spin excitations. It’s very hard to see these excited states via direct production or decay in an e+e- machine looking at charmonium, charm, bottomonium, or bottom mesons. You just don’t produce the high spin objects, like the spin-5’s.
Were there any unanswered questions that you just couldn’t seem to get at?
Well, there are always states you can’t see because you don’t have enough data. The states are quite broad, sitting on top of each other, and interfering, so interpreting your measured waves may be challenging even with massive data sets. This becomes more problematic at higher masses, where there are more and more overlapping states, while the production cross sections decrease. Other issues are backgrounds to the q-barq resonant states, such as threshold structures, or non-q-barq states expected in the Standard Model like hybrids or 4 quark states. One of the advantages of working in the strange sector is that certain backgrounds to the q-barq resonant states, such as glueballs, will not occur. This makes it especially valuable to be able to compare the strange meson spectra to those seen in spectra without explicit flavor.
I think it is quite interesting to rethink how much our perspectives changed in the nearly 30-year period between the beginning of my thesis experiment, and the end of the strange spectroscopy program. The entire edifice of the Standard Model was built up and had been mostly experimentally confirmed during that period, except for the Higgs. If you go back and look at a textbook that I might have had in ’66 on particle physics, it was very different, both in the theoretical questions people thought were interesting, and in the experiments.
Now that we have a solid theory which explains nearly all of this “old” physics, and has for several decades, we also seem to be approaching the end of the energy frontier particle physics accelerator as the way to probe new energy scales. This is partly because it is unclear how high in energy we need to go to reach the new physics scale, but also that such high machine energies no longer seem attainable financially even if they were to be well motivated scientifically and could be achieved technically. The time scales and costs are troubling even for existing machines.
CERN is a great lab which is taking data, and spending money every year upgrading the LHC and its detectors. It’s the greatest accelerator in the world, but its best approach to higher energy scales currently seems to be through luminosity increases. That’s a hard way to expand searches for new physics because improvements in precision measurements scale only by √n factors as luminosity increases. So, if something has yet to pop out in present data, it’s very hard to say, “Well, if I get 10 times more data, I’ll see something.” You may, but having been at it for a decade already, it is likely to be only a hint even then, and it’s difficult to imagine that as being the most compelling path to the new frontier.
I think many individual physicists who are interested in fundamental physics are trying other approaches. For example, they’re trying to find ways to look for dark matter directly or probe dark energy in the cosmos away from the big machines. Of course, these non-accelerator approaches like LZ or LSST intrigue many but have their limitations too. CERN has a shot at directly producing dark matter, but nothing has been seen yet, and as discussed above, sensitivity to new physics via higher precision doesn’t improve very fast even with lots more data and running time. So, while the existing accelerators still provide the core of the HEP experimental program, and there is still substantial enthusiasm in the field for what can be learned, many of the basic questions that particle physicists are asking now seem to not be well addressed by any energy frontier machines that could be constructed in the foreseeable future.
Blair, how well developed was BaBar by the time you got involved?
Not that much. I remember writing a small piece of code to simulate a “non-existent” PID detector for a workshop in the late ‘80s, so that must be when I started to really do any work on what became the BaBar project. Of course, I had gone to seminars and maybe a workshop before that, but I was still working on LASS and SLD. There was a cottage industry in B physics workshops around the world at that time.
The basic idea that the B’s produced in e+e- collisions could be an ideal place to test the validity of the CKM approach to CP violation was explored by theorists, especially Bigi and Sanda, early in the ‘80s, but given the luminosities of the time, the experimental sample sizes were a few 10s of B mesons, and it seemed hopeless. But several experimentally observed facts then conspired to flip that outlook. These included the discovery of a long B meson lifetime at SLAC, and the discovery of a substantial rate for mixing by ARGUS. These experimental facts were coupled together with extraordinary improvements to the machines leading to the possibility of massive data samples. The detectors developed at a similar pace as well, so they would be able to keep up with the data pouring in, with game changing improvements especially in tracking, vertexing and calorimetry. The major detection technology needed for a superb B detector that was still lagging was hadronic PID.
A key idea emerged in the late ‘80s when Pier Oddone proposed the novel concept of an asymmetric-energy pair of colliding e+e- rings to provide sufficient boost to enable solid state vertex detectors to measure the expected time-dependent asymmetry. So, by that time, the viability of this physics concept had come down to resolving some technical questions. Can you build an asymmetric pair of rings with sufficient luminosity? And can you build a detector with sufficient precision that can handle the rates to be able to measure the CKM parameters and demonstrate CP violation?
All these threads were coming together in the late ’80s. There were a wide variety of proposals with different concepts, including linear colliders, but eventually they coalesced around the two technically similar projects using the Oddone suggestion that combined to become the B factories we now know and love; the PEP-II machine with the BaBar detector at SLAC, and the KEKB machine with the BELLE detector at KEK.
So that was the general environment in the late ‘80s. I was still working on the LASS spectroscopy experiment, and on SLD at the Z. But BaBar was especially intriguing. Of all the experiments that I was associated with, it seemed to provide the clearest path to measuring something fundamental. But everything had to work as projected to make it successful. The other thing was that there was so much physics on tap – so many things to measure. Search experiments are dramatic but can be unfulfilling when the object of the search isn’t found. I like to measure something. Spectroscopy is like that. B physics data would comprise so many different topics to be explored. It’s probably the broadest data playground I’ve ever been involved in. You can do search experiments for dark matter of many different types up to certain energy limits. You can study CP violation. You can test the Standard Model. You can do heavy and light quark spectroscopy. You can look for exotic mesons. The opportunities abound and many measurements will have high precision because you have so much data.
I was talking a little earlier about the relationship between the recent measurements of g-2 and measuring the e+e- cross-section to every hadronic final state, right? It’s somewhat astounding that BaBar data provide probably the best measurements of most of these cross sections because cross sections to all particles and multiplicities are measured simultaneously at all relevant energies. And we have now done just that. [laugh] It’s remarkable how many things that you can do just sitting at the Upsilon resonances, which is why there are nearly 600 papers to date and people are still hard at work producing more.
Blair, your time as group leader of Research Group EB, was that separate from BaBar and SLD?
Well, Panofsky had the idea that to be successful, SLAC, though it was a national lab supporting users, needed to have top-notch experimental and theoretical research programs of its own, in addition to the large technical teams involved with constructing and running the accelerator. This evolved with time, but SLAC experimental research was originally structured around lettered groups. Each group had a rather clearly recognizable area of expertise, at least initially, and was led by a senior group leader with an international reputation. They had some senior staff people who were experienced particle physicists, often with international reputations of their own, supported by engineering, software, technical and administrative staff. They usually had some technical support responsibilities, such as the ESA spectrometers (Group A), the bubble chamber (Group BC), or the electronic spectrometer group (EB).
They also had an explicit academic component with associated post-docs and graduate students, with an occasional undergraduate as well. These were very powerful organizations scientifically, and well matched to the way experimental physics was done at that time. They had long term funding through the laboratory. This powerful structure certainly played a major role in the great success of SLAC in HEP research. However, eventually the agencies stopped supporting this SLAC-centric research focus, and the group structure faded away. The new organizational principle became centered around specific projects. In the case of BaBar, this started with the research side of the BaBar project morphing into the SLAC BaBar department.
So, when I started leading group EB, we were finishing up on the LASS experiment, taking data with SLD, beginning work on BaBar, and doing detector R&D. Somewhat later, I became the SLAC BaBar department head for many years. For some of that time, a remnant of the SLAC group structure still existed organizationally, as subdepartments within the larger SLAC BaBar department, but eventually these went away as the size of the BaBar group shrank.
One defining characteristic about a Research Group’s structure was that its members stayed together over the long term and decided what to do collectively. Of course, this decision-making process was not democratic, but the group leader needed to be able to define a scientific vision that group members could support. In the case of Group B joining BaBar specifically, the group had been working most recently on SLD, and so the question was what we were going to do next. David Leith and I had been involved in the early BaBar proto-collaboration, and most group members found BaBar’s scientific agenda compelling, but the group needed to develop an appropriate role that matched our size and expertise. We were an experienced experimental group who had built and operated complete large detectors, so we could have taken on a variety of roles, but we had unusually strong detector science personnel in the group.
The major unresolved BaBar detector issue at that time was that there was no known detector capable of doing the hadronic PID adequately within the constraints coming from the rest of the detector. In most experiments, hadronic PID is somewhat peripheral. It is often quite important to be able to separate hadrons from leptons but needing to know whether a given hadron is a kaon or a pion is often not so central, particularly at higher energies. But in this case, it is important to recognize light hadron flavor, to efficiently tag B’s and B-bars, and it is also crucial to be able to fully reconstruct final states. At the time, it was unresolved as to how you might do this.
In SLD, we, that is, Group B and friends, had built a powerful ring imaging Cherenkov counter (called CRID) that would measure the angle at which the photons were being emitted from a particle track in two different radiators. That angle is related to the speed of the particle. You can identify the particle because, having measured both the speed and momentum of the particle, you can determine the mass and thus the identity of the particle.
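The identification logic described here can be sketched in a few lines of code. The refractive index and kaon momentum below are illustrative values, not CRID or DIRC specifications.

```python
import math

def cherenkov_angle(p, m, n):
    """Cherenkov angle (rad) for momentum p and mass m (GeV, natural
    units) in a radiator of index n: cos(theta_c) = 1 / (n * beta)."""
    beta = p / math.sqrt(p**2 + m**2)
    return math.acos(1.0 / (n * beta))

def mass_from_angle(p, theta_c, n):
    """Invert the measurement: from momentum and Cherenkov angle,
    recover the mass via 1/beta = n * cos(theta_c)."""
    inv_beta = n * math.cos(theta_c)
    return p * math.sqrt(inv_beta**2 - 1.0)

# Example: a kaon (m ~ 0.494 GeV) at 3 GeV/c in fused silica (n ~ 1.473)
theta = cherenkov_angle(3.0, 0.494, 1.473)
print(f"theta_c = {theta:.4f} rad, "
      f"recovered mass = {mass_from_angle(3.0, theta, 1.473):.4f} GeV")
```

Measuring the angle pins down the speed; combined with the momentum from tracking, that fixes the mass, and hence the particle species.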
What I realized at that time was that when you had a window of very pure quartz glass with parallel sides, some of the light could be trapped and travel within the window to the edge, with the absolute value of the angle inside the fused quartz being conserved. And since the scattering and absorption lengths were long in fused silica, 10s of meters or more, it would travel a long way. You could see how this worked in the CRID data, by looking at the photons that didn’t get trapped. This gave me the idea for how to build a very compact Cherenkov counter for BaBar: one that would have excellent performance in just the right energy range, which was compact in radial size, and had a sufficiently small amount of material that it would allow you to put a very high-performance crystal calorimeter behind it that you could afford. What you needed to do was measure the photon angles with respect to the track. What I realized is that since the angle is conserved in a precisely made rectangular bar or plate (what I later called the DIRC principle), it was possible to do exactly that. Well, it may be obvious now, but it was a bit astonishing at the time to imagine that even after a photon has bounced 100 times and gone down to the end of a several-meter-long bar, the angles are intact.
I sat down one day with an “envelope” and figured out how much the angle would be smeared in a bar made to optical specifications. It was clear it wouldn’t be smeared much at all, and that most of the photons should make it down many meters of a bar with well-polished surfaces. So, then I got some bars made and, sure enough, the basic concept was sound, even straightforward. I measured the first images, believe it or not, on blueprint paper. I would take a big sheet of paper as the detector, and project a ring down a long bar onto the paper. Then I’d take it and put it through the ammonia fixing solution and there the ring was. That was exciting, and I took some pictures with a Polaroid on a lighted pegboard [laugh], which allowed smearing and distortions to be measured. Losses of photons traveling down the bar were also directly measured. So, at this point, the conceptual idea seemed sound. The challenges largely revolved around how to manufacture the nearly 5-meter-long bars out of pure fused silica at a sufficiently low cost.
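An "envelope" calculation like the one described can be reproduced in a few lines. Every number below (bar geometry, photon angle, per-bounce reflectivity) is an illustrative assumption, not a BaBar DIRC specification.

```python
import math

# Back-of-the-envelope sketch of photon survival in a DIRC-style bar.
# All parameter values are illustrative assumptions.

bar_length = 4.9        # m, roughly the "nearly 5-meter-long" bar
bar_thickness = 0.017   # m, assumed bar thickness
theta = 0.40            # rad, assumed photon angle to the bar axis (one plane)
reflectivity = 0.9992   # assumed reflection coefficient per internal bounce

# The photon advances bar_thickness / tan(theta) along the bar
# per bounce, so the total bounce count is:
n_bounces = bar_length * math.tan(theta) / bar_thickness

# Survival probability after all those total internal reflections:
transmission = reflectivity ** n_bounces

print(f"~{n_bounces:.0f} bounces, ~{transmission:.0%} of photons survive")
```

With numbers in this range, a photon indeed bounces on the order of 100 times yet most of the light survives to the bar end, consistent with the "bit astonishing" observation above.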
So, Group B and friends became the BaBar DIRC group. Group B had significant crossover contributions, but as a research group, that gave us our focus for the BaBar experiment, and it became central to what we did for the next two decades. DIRC was a brand-new device, so it was quite exciting to develop all the details of building one. How might you obtain fused silica of the requisite quality and size at reasonable cost? How do you produce such long bars at scale and moderate cost? How do you test them? How do you polish them? How do you align them? How do you join them? All unresolved questions. As a result, I became an optics guy, at least in a limited sense. It was a lot of fun, and certainly the most exciting experimental detector that I’ve been involved with. Many times in nature everything seems to conspire against you, but this is one of the times where nature seemed to be smiling.
Blair, what were the circumstances that compelled you to found the Physics Analysis Group for BaBar?
This resulted from a confluence between untapped physics opportunities provided by the BaBar data, and how those mapped into the analysis expertise, interests, and availability of several people in my group and others with whom I had worked over many years. There was also a question of timing. It is a complex story and I’ll try not to belabor it.
The central physics goals of the BaBar Experiment were focused on measuring CP Violation in B mesons and constraining the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements. This work was a great success, as was explicitly noted in the announcement of the 2008 Nobel Prize to Kobayashi and Maskawa.
However, since these data and analyses were so central to the entire idea behind the experiment, it had a large cadre of enthusiastic analysts who had begun proto analyses long before data were taken. There were many other areas where BaBar had taken exciting new data as well, but several of those, those with heavy quarks or leptons, also were, shall we say, fashionable and had enthusiastic proponents within BaBar from early on. As local group B physicists with responsibilities for DIRC, we had been focused for more than a decade on R&D, and building, commissioning, and operating the new DIRC detector for PID. When we emerged with the completed detector and lots of data and found some time to consider our Physics contributions to the experiment, joining one of these major efforts that were already well underway seemed less than ideal to many of us, although some Group members did do that very successfully.
Well, I realized that the physics analysis efforts at BaBar, broad as they were, were ignoring many “old-fashioned” hadronic physics topics in which several people in our group had significant expertise. One area that had no analysis home was the precise measurement of all the energy-dependent cross sections for e+e- to hadrons of all multiplicities and flavors, as we have discussed already. We had some of the world’s leading experts in this area within the BaBar community, and it was important to find an analysis home for them.
Now was there a BaBar executive board before you joined that?
Let’s see. I’ve forgotten some of this, but the organizational details are all written down in the Physics of the B Factories book jointly written by BaBar and BELLE, in chapter 1. BaBar as a community existed and was named, informally, in the late ’80s. The formal BaBar international collaboration started in 1993, and later adopted the name. For the four or five years before this, the collaboration and its organization were, in a sense, bootstrapped. We might call that phase proto-BaBar. Formal structures like boards didn’t exist. There was informal leadership comprising a spokesman, Dave Hitlin, advisors, and a community of system organizers. My main role was to form and run the particle ID proto group. At that time, we were trying to build broad community participation and invent a PID detector that could function well in the B factory environment. Collectively, we were also writing proposals about the project to the DOE and undergoing reviews. These went through several iterations, but by early 1993, it became a political issue in Washington. Cornell wanted to build an asymmetric e+e- collider in this energy region, too, but they had always been an NSF funded lab. DOE did not want another laboratory to fund, and NSF didn’t want to take on such an expensive project. At least, that’s my take on it.
In any case, it was implausible that the US would build two such projects in the country. Daniel Patrick Moynihan, who was senator from New York, insisted that there be a formal shootout between Cornell and SLAC. So, in June ’93, the two proponents presented their proposals to a jointly selected NSF-DOE committee. As a result, it was determined that the asymmetric e+e- collider would be built at SLAC, and that fall, the B Factory project at SLAC was formally authorized by President Clinton, and the machine construction project began, under the leadership of Jonathan Dorfan.
During this time, SLAC management had decided to build the detector collaboration within a fully international model and looked to CERN for its guidance based on the long experience there with large collaborations. The Research Director of SLAC, David Leith, led this process. He formed an international advisory committee to help bootstrap the process of finding and engaging interested physicists and their institutions. He also put together an International Finance Committee, including funding agency personnel from all the associated countries, to oversee their contributions to the detector. In December of 1993, we had the first BaBar meeting and began the process of developing the collaboration.
SLAC management recruited an important temporary committee, called the Interim International Advisory Committee, to oversee the formation of BaBar’s committee structures and recruit leading scientists for an important first committee, called the Interim International Steering Committee, to advise the lab on the R&D program, to select the initial Executive Board and Collaboration Council, and to write a governance document. This group then disbanded, and the collaboration became governed within a more normal order.
In the case of particle ID, the French groups were interested in BaBar particle ID, too, so a French co-coordinator for PID, Georges London from Saclay, was added and we set up procedures for how BaBar might select the PID system. This had a political component, and such things are never fully transparent. People had different ideas and interests. Eventually a select committee formally appointed by the collaboration set up a final process to make the decision about PID for BaBar. They selected DIRC in late 1994, and nearly all the collaborators interested in PID then joined the DIRC effort.
In the meantime, as Group B, we had collaborators who had worked primarily with us at SLD on the CRID ring-imaging Cherenkov detector, and who joined us to work on building DIRC prototypes and doing a design for BaBar. We also attracted some new collaborators, particularly from Berkeley Group A, which was Alvarez’s old group. It was led by Moshe Pripstein at that time. That was a technically strong group, with good engineers, so together we were well placed to build the first big prototypes and finish the design. We tested the big prototypes at CERN after going through the BaBar selection process and started to build the detector in ’95. It took about four years – a very challenging four years, because lots of new questions kept coming up.
Blair, what was the SuperB project and how did you get involved in that?
Well, once you built a B factory, why not build a SuperB factory?
Why not? [laugh]
B Physics provides a very sensitive arena to search for New Physics beyond the Standard Model, by looking for small discrepancies. But to do that you need more data — lots more data than the B factories can gather. Many, probably most, Standard Model measurements BaBar made were statistics limited. While many limits improve nearly linearly as data samples grow, most measurements in a luminosity-dominated world only improve by the square root, so factors of 2 in total data sample sizes don’t matter much. Factors of 10 only provide a square root of 10, around 3. So, if you haven’t seen a strong hint of some new physics already in BaBar and Belle data sets, it is unlikely there will suddenly be a compelling argument with just a factor of 3 better error bars – it’s probably not going to happen. So, to search for new physics via this route, it seems a factor of 100 times more data is essential… hence setting the scale for the SuperB factory.
From the theoretical side, there are many measurements where small discrepancies from Standard Model predictions can be predicted in New Physics models. We know that the Standard Model has become established and triumphant – and yet there are many reasons why we know the Standard Model is wrong or incomplete.
Some of those reasons are just simple observations. Neutrinos have mass, but they don’t have mass in the Standard Model. The Standard Model doesn’t play well with Einsteinian gravity – there is an unbridged divide between our two basic ways of thinking about the universe, quantum mechanics and general relativity. And why is there such a dramatic antimatter-matter asymmetry in the universe? I mean, we thought we were doing BaBar partly to address this big question, but of course discovered that CP violation in the B sector is explained just fine within the Standard Model. So, there are several fundamental physics issues that the Standard Model really doesn’t address at all.
Then there are experimental tensions between precision experimental measurements and Standard Model predictions. For example, as we already discussed, there was substantial press recently where experimental proponents were saying, “Oh! We found that g-2 doesn’t agree with the Standard Model, though the discrepancies have not quite reached the appropriate confidence level yet to claim a discovery”. If more data are analyzed, the experimental precision will increase, but calculating what the Standard Model predicts in this case may be more problematic than the statistical precision. There are several discrepancies between measurements and Standard Model predictions in B physics which are similar in size to that just discussed for g-2. For example, there is an excess in the rate of B decays to D*τν, from BaBar, Belle, and LHCb, compared to Standard Model predictions which is also approaching 4 standard deviations, not so different from the discrepancies noted in g-2.
Such discrepancies, if you had ten times the precision, would pop out if they’re real. And the pattern of discrepancies should be helpful in selecting out the kinds of New Physics models that are most promising. So, a primary reason for doing SuperB is to look for New Physics in this way. I personally worry that we’re looking for our keys under the only lamppost that’s lighted, so maybe the new physics won’t be so clear even when/if one makes these measurements and sees several significant discrepancies with a particular pattern. New physics discoveries notwithstanding, there’s a lot of good physics at the SuperB scale, precision measurements that will help to confirm our understanding of the Standard Model and its limits. As an aside, the crucial matter here is a major leap in machine luminosity, and so far, it looks like that may be difficult to achieve, judging from experience to date with the Japanese SuperKEKB upgrade.
After the decision to stop doing particle physics at SLAC, the proposal to upgrade the PEP-II machine and the BaBar detector at SLAC, called SuperB, was no longer on the table. So, the international proto-collaboration proposed taking all the existing components, with appropriate upgrades, and building the SuperB at Tor Vergata, Italy, near Frascati, which is the Italian e+e- lab. It would have been a BaBar-plus collaboration on the physics side, and mainly an Italian-American machine. It would take many of the PEP-II components from SLAC but re-arranged into a thoroughly upgraded machine design. These components would have been provided as an in-kind contribution from the US. The core of the detector would have been BaBar, and it could have used some of the existing sub-detector components such as the DIRC bars and the calorimeter, but many components would be upgraded to handle the 100 times higher rates and improve performance. So that was the technical side of the proposal. The Berlusconi Italian Government approved the experiment and promised substantial funding, but not enough to build the machine. But the Italian political environment was very fractured, in ways I don’t fully understand, and eventually the project fell apart politically. That was in 2012, I believe.
What were some of your responsibilities as Deputy Council Chair of BaBar?
That’s not much of a position at this time. [laugh] I would say… If the council chair isn’t there at a meeting, I sub for him.
I see. [laugh]
You know, these kinds of posts are important formal positions for a collaboration, and crucial when the collaboration is being formed or growing rapidly, and of course if/when something in the collaboration goes off the rails. But at the time I became deputy council chair, we were done with data taking. The collaboration was shrinking slowly, and few new members were joining. Of course, one of the council chair’s roles is to help deal with new groups who are thinking about joining, vetting them, obtaining collaboration approval, and finding appropriate roles for them. That need has largely been off the table for more than a decade.
I remain a member of the executive board, which also has become a small role. The board exists primarily to advise the spokesman, but the spokesman doesn’t need much advice these days. But there were times during the experiment, especially earlier on, when momentous decisions had to be made, and the executive board played a bigger role. As an example, when BaBar was presented with a mandate from DOE to abruptly stop data taking in 2008, the executive board provided strong and enthusiastic advice to support a proposal to DOE and the lab to conclude the experiment by sitting on the lower upsilon resonances for the limited time being allotted to BaBar rather than just getting a small data increment on the Upsilon(4S). This led to the discovery of the ground state singlet bottomonium spin zero resonance, ηb(1S), in decays of the Upsilon(3S) later in 2008, and other unique interesting results.
Of the BaBar boards I participated in, the technical board was by far the most crucial because of its central role during experimental design, construction, and early operations. And that’s where all the big bucks go, right? Wherever money is being spent, there will be excitement and intensity.
Blair, in more recent years, I wonder if you see in some ways your advisory work for DUNE and your experimental work for LZ as part of SLAC’s broader transition into astrophysics.
Sure. It became clear that the landscape had changed for SLAC once DOE decided they didn’t want to support an experimental particle physics accelerator at SLAC any longer. In some ways that was liberating, because as a SLAC experimental staff member, I had previously had a significant obligation to both use and support SLAC facilities. At least, I always thought I did, and although a few SLAC physicists sometimes worked outside the lab in part, they were the exception. A member of SLAC staff had a responsibility to the SLAC laboratory which was being supported by DOE to run its accelerators and detectors in the national interest. Part of the SLAC model going back to Pief’s day was that the staff should also be doing first rate physics. So, without an accelerator driving the direction, we were free to look for other physics opportunities.
So, some SLAC groups joined detector collaborations at other major accelerators around the world, but others started looking at non-accelerator-based approaches to doing fundamental experiments. There are several of these, for example, LSST and LZ. All these experiments have fundamental scientific objectives and many detector technical components that overlap with particle physics expertise.
When we were looking at how SLAC could get involved with dark matter direct searches, a few of us at SLAC did a survey of the world’s experiments. We went around the world looking at different options and decided that the one where we could make the biggest and most timely contribution and had the most exciting physics prospects was LZ. SLAC hired a couple of experts in LZ physics, Tom Shutt and Dan Akerib, to head up the new department.
I think that’s been a successful new direction for SLAC. The technical capabilities and infrastructure of SLAC have played a big role in the success of the sophisticated SLAC contributions to the detector, and I’m sure the group will be a major force in the operations and the data analysis. But these contributions are smaller than what SLAC would have expected in the past. The SLAC group looks like the other large research groups within LZ rather than having a central role like in the old days of SLAC operations. Of course, as the experiment is offsite, and the SLAC group is much smaller, this is inevitable.
In all of the experimental work you did up until LZ, did anything prepare you for just how elusive dark matter has proven to be?
The experimental evidence for dark matter in the cosmos seems quite compelling, but it comes from outside particle physics. Dark matter might not be elementary particle physics at all, but it could be. This is a somewhat unique situation for us in HEP. It has been difficult to find compelling fundamental arguments to constrain the experimental search space. So, as silly as it sounds, the lamppost analogy is probably apt for what is going on now with the direct searches. Clever experimental techniques have been invented which provide an illuminated lamppost that allows large increments in search sensitivity for hypothesized dark matter particles with the right properties to make up cosmic dark matter and lie within that illuminated spot of light. It makes sense to look under these lampposts, given that we don’t have any better theoretical guidance about where we should be looking, but without better guidance, we shouldn’t be too disappointed if nothing shows up.
Historically, I think some of the disappointment comes from the failure of some early handwaving arguments – those collectively called the WIMP miracle – which seemed to point to WIMPs of a particular kind as being the dark matter particles with just the right properties to make up the cosmic dark matter and which coincidentally could be seen in these experiments. Given what people knew, and the excitement around these search techniques, it seemed natural (or at least not un-natural) that direct searches would find a particle with a particular mass and interaction strength scale reasonably easily. Once that turned out not to be true, then, of course, people began looking more carefully at the predictions, and realized there’s not really anything that compels surprise about our lack of any discovery so far. It may turn out that when we do these searches an order of magnitude or two better, something will suddenly pop out, but I think everyone has realized by now that it may not happen. Still, the direct search community is and should stay optimistic, at least for the present suite of new experiments as they are taking data. These are technological tours de force and elegant, beautiful experiments, with substantial coverage.
Do you see LZ and what Elena Aprile is doing more as competitive or more as collaborative in the long run?
In the long run, I think they must and will be collaborative. That is the trend as both experiments are thinking about the next generation. At present, each experiment is going hard at the competitive side, with XENON1T ahead, having already published results from a large data sample. LZ is doing well on commissioning and should reach substantially better limits within a couple of years or less. The two collaborations will probably get together and propose a larger G3 experiment within a few years.
Of course, many of the individuals involved are scientific competitors of longstanding, but the realities of obtaining funding and enough technical support will likely smooth off the more competitive edges. I suspect that people will figure out a way to work together and it will be beneficial to the science. But there will ultimately be limits to this approach of just growing the devices. These limits may be technical, scientific, financial, or the project may simply take too long to sustain support. The field may also just lose interest in very expensive direct search experiments of this kind in the absence of compelling theoretical predictions or hints from experimental data. So, somebody is going to have to come up with a compelling argument or a better idea.
Are you bullish that all of this is going to lead to understanding what dark matter is?
That’s an interesting question. One of the things about these kinds of experiments is that either they find something, or they don’t.
Well, isn’t that the lesson of the Higgs or LIGO?
Right. Sure. But in both of those cases, there were compelling arguments from well validated theories that some well-defined, spectacular effect would be seen when a reasonably well understood level of sensitivity was reached. The discovery of the Higgs was unfortunately more like an end of building the Standard Model edifice than a beginning and it didn’t point to new physics. LIGO may be the more interesting case because LIGO opens up a whole new kind of astronomy once the first observations of gravitational waves have been made. And the success of the first instruments pointed very clearly to how much things would improve by increasing the sensitivity of the individual instruments and adding more instruments (the LVK collaboration) around the world to improve the localization of objects, see more deeply into the cosmos, and so on. It all sounds exciting for the future.
New massive direct dark matter searches seem less compelling because it is unclear whether the object being directly searched for exists. When the searches first began, the optimism was higher, and arguments like the WIMP miracle seemed more compelling. But, somewhat ironically, the better the limits have become, the less compelling the strategy seems to be. Of course, there are new optimistic theoretical arguments that the particle may lie within an attainable range, and you can measure some other phenomena as the detectors get better. You can measure neutrinos from the sun, for example. Or in principle, if you can do the experiment well enough, you can look for neutrinoless double-beta decay. Well, that hasn’t been found yet, either.
But no one would spend so much effort and resources on peripheral issues alone. We were talking a little earlier about my personal preference for experiments where there is a mix between the frontier “new” physics and the less spectacular measurements within the known domain. The latter may not be quite at the leading edge of the field, but nonetheless are important data that help you understand the central story better and might show where cracks are emerging that can point in a new direction. Search experiments don’t do that so well. Breadth across the field is useful, but at a personal level, I prefer my experiments to also have broader internal opportunities. As an example of a modern experiment that does that, LSST provides one way to probe for dark energy, which is another one of the mysteries of the universe that’s poorly understood in the Standard Model, but it does a lot of other astrophysics too.
Blair, given your long tenure in the field and all the perspective that comes with it, are you ever concerned that all of the resources and brainpower that’s being poured into the search for dark matter is siphoning off resources from other physics that’s perhaps not getting the due it might deserve?
That’s a difficult question, as we’ve been discussing. Science finds it difficult to broadly come to grips with resource allocation issues. Individual fields often try hard to do this. Particle physics has field-wide studies like Snowmass, and astronomy has its decadal survey, and so on. These are important for the fields and should be given due credit, but they don’t deal with crossover issues very well and tend to favor institutional imperatives, and they tend to assume fixed boundaries and more or less static resources. Particle physicists are used to trying to prioritize particle physics at accelerators, but as you said, requests have been growing for ever larger experiments in non-accelerator particle physics, which leads to a loss of resources for other things. The kinds of questions astronomers ask overlap more with particle physics than they used to as well, which may impact thinking at agency and congressional levels in Washington.
I don’t know that I have any new wisdom to offer about the specific question you asked. I think the particle physics field has done reasonably well, to date, balancing the growth of the direct dark matter searches compared to other opportunities in the field. But the experiments have stayed below some funding viability line. This won’t be true for the next generation detectors. So, unless hints begin to emerge in the data from present experiments of a discovery that guides the growth of the next step, simply growing the present direct dark matter devices seems problematic. More broadly, I do think particle physics is in a difficult place because it’s quite mature and therefore expensive to get a spigot at the frontier, while the political appetite for funding this type of work seems to be diminishing. This just is the reality of the times, and the field must live with it and adapt. So, we need to be very insightful about our own processes for resource review and allocation.
For an individual physicist, if it seems possible to find compelling experimental programs with broader mixed goals, I think that’s probably the better bet. For the field, when experiments in each area are small, I think everyone is best served by letting 1000 flowers bloom. But once resource demands for an experiment reach a certain size, and impact what else can be done, a balance must be struck somehow, and I don’t know of any better mechanisms than something like the Snowmass process. Perhaps it could be improved, but compelling arguments to the whole field and broad support from the field will be necessary to proceed to the massive next generation direct dark matter search detectors.
Blair, what did it feel like when you won the Instrumentation Award from the APS?
[laugh] Well, I was pleased and thankful that the contributions of Cherenkov detectors in general, and DIRC specifically, were recognized, of course, and enjoyed the experience. Experiments in particle physics wouldn’t be feasible without the suite of instruments that generations of experimentalists have invented, so it is also important to recognize these contributions and how central they are to the success of the field. I think these contributions are valued in the field but under-recognized by awards committees, so it was great that the APS created this instrumentation award. It has been some time since any major awards like the Nobel Prize were awarded to particle physics detector scientists – Charpak and Glaser come to mind as probably the most recent. It’s not that people think that these inventions are unimportant exactly, but this kind of work seems underfunded compared to its importance and under-recognized when it comes to academic appointments. Many of the present practitioners are growing old. So, I hope these APS instrumentation awards help make instrumentation work more broadly visible and encourage a new cadre of people to spend some of their effort in this crucial subfield.
But to me, the most gratifying thing was that DIRC was used in BaBar. I had an idea for building a device that was an elegant solution to a crucial detector problem, and it turns out that it works just as advertised, which is wonderful. The BaBar collaboration had an ability to weigh options and would accept a novel idea when there were good reasons to believe it would work best, and that the groups who were proposing it would be able to build it successfully. The ability for a large organization to make these crucial decisions based on the best information, rather than political considerations or something else, is rare and reflects the high quality of the management, organization, and collaborators of BaBar. It was a bold decision by the collaboration that gave me the opportunity to build this device and see it work so well. So, I’m gratified that the awards committee recognized the device, but it isn’t as central as the fact that BaBar decided to use it, which is the crucial matter.
Blair, for the last part of our talk, I’ll ask one big retrospective question, and then we’ll end looking to the future.
So, in light of BaBar and what you were recognized for, what satisfaction do you have that we’re closer to understanding this cosmic imbalance of matter and antimatter?
That is an interesting question because I don’t believe we are closer to understanding it. I mean, we have nailed down CP violation in the Standard Model, which is a tremendous achievement, but that just doesn’t cut it for understanding the relic imbalance in the cosmos. Certainly, some people thought, or at least hoped, that when we precisely measured all the parameters of the Standard Model using the vast variety of data from B decays, the explanation would become transparent. Maybe the Standard Model parameters wouldn’t fit into the standard CP triangle at all, or maybe some other clear hint would emerge about the deviations. But the Standard Model provides a good explanation for all measurements that we’ve made in B physics to date.
This is quite a puzzle in that we know the Standard Model can’t be the whole story, and there are many phenomena that violate the Standard Model explicitly, such as the fact that neutrinos have mass, or that dark matter and dark energy seem to be the predominant form of energy in the universe. More subtly, as we have already discussed, there are several precision measurements where there appears to be considerable tension (approaching 5 sigma) with the Standard Model predictions, such as the long-standing difference between the direct measurements of the anomalous magnetic dipole moment of the muon (g-2) and the results calculated from theory, or, in the case of B physics, an excess in the ratios of certain types of B decays to different types of D mesons. As more data have been accumulated, these discrepancies have continued to grow, so it seems they are probably not just statistical flukes.
So, in this sense we seem not closer at all to understanding the basic asymmetry of the universe. Of course, there’s no shortage of theoretical ideas that lie beyond the Standard Model about how all these issues and others might be explained, but I doubt that we have any reason at this time to have confidence that any one of them is the right approach.
If I were young and really wanted to study particle physics at an accelerator, I suspect that a good long-baseline neutrino experiment is probably the best chance we have right now for a breakthrough, but it’s very hard and will take a long time at best. It is difficult to build a sufficiently intense beam, and the detectors are massive, difficult, and far underground, which adds other peculiar challenges. In the end, I’m concerned that the DUNE detector with the Fermilab neutrino beam, large and expensive as it is, will still not be big enough to make a breakthrough, but of the experiments now on the table, it may have the best chance. It must work in a timely way though.
Is there something intrinsic about BaBar that made it inevitable that it would take place at SLAC and not a different national laboratory?
I don’t think so. It probably could have been successful at Cornell, for example, although the lab would have needed to grow substantially, and a very similar B-factory and detector were built at KEK. SLAC has some advantages such as a great location, a good injector that’s great at producing positrons, existing large interaction halls, experimental and technical personnel, and infrastructure, and so on. But I think there was nothing essential that couldn’t be reproduced.
Nonetheless, SLAC was a great place to do it, the best really. It had a powerful e+e- colliding-beam machine group; lots of technical expertise on all the things you need to do a modern experiment, including competent administration, detector building expertise, an excellent computing infrastructure; and a first-rate group of research experimentalists and theorists at the lab who were really excited about the science. All of this and the exciting science was very attractive to many physicists, and SLAC is within easy reach from anywhere in the world and is also in an attractive part of the world for living, especially for Europeans who seem to enjoy spending time in the Bay Area.
All this aside—and I’m not exactly sure why this is true—nearly everybody who worked on BaBar really enjoyed the collaboration. Somehow, the whole was greater than the sum of the parts. No doubt, part of it is the fact that exciting physics results began right out of the gate, and just kept on coming. More broadly, there were a lot of physics opportunities which in turn provided a plethora of research approaches. That’s important. Graduate students have all sorts of choices for thesis work, and there were good analysis teams everywhere that they could join. I think you must give kudos to the working relationship between the laboratory, the machine, and the detector collaboration too.
The BaBar collaboration was fully international with an internationally organized structure more or less like a CERN experiment. SLAC was a single-purpose US high energy physics laboratory at that time run by a bureaucracy used to doing things in a SLAC-centric way, you might say, but SLAC, especially David Leith, the Research Director, and the PEP-II director Jonathan Dorfan, was committed to making this work. Another issue is that as part of their heart and soul, SLAC staff brought competence and passion to what they were doing. It has always been a fun place to work. I worked at SLAC for nearly 50 years and was always enthralled by the expertise and passion of so many of the staff. There were great people around, no matter what you needed to do.
I can tell you an old story. When I started at SLAC, probably in ’67 and ’68, there was a meeting every day of the operations staff at 8:00 in the morning at the control room. It was still going on the last I knew. If you were doing an experiment, you could attend the meeting to find out what was really happening. There would be people in the room sitting down—a small room, not a COVID socially distanced room—and lots of others standing around the walls and doorways. Panofsky was nearly always there. People would stand up and say in three minutes or less what had happened in the past 24 hours. What struck me was that no matter what had happened, whether obscure or clear, destructive or whatever, somebody in the room would say, “Okay. I have fixed that”, or “I am looking at it,” or “I know what happened,” or whatever. The difficulty of the problem itself didn’t matter! There was this organization all run with 1960’s technology, before computers, with paper and chalkboard lists and so on, but there was always somebody who took responsibility for it and had immediately either been on it and fixed it already or had a plan to do so. The organization, the depth of the expertise, and the passion for making it work were wonderful.
To me, it felt a bit like watching the old days of NASA when they were launching the Mercury spacecraft. SLAC people tended to retain that excitement and passion at staff level for decades thereafter. Another thing that is important to users and collaborators is the general lab environment for the physicists as users. Do they have adequate offices? Are they co-located with their collaborators? Are there administrative services? CERN is the gold standard for international collaborators, with advantages like onsite hostels, cafeterias, and coffee bars, but it is much bigger and more bureaucratic, at its core. SLAC tried hard to duplicate some of the sociological elements at a smaller scale, co-housing the BaBar collaboration so it was easy to spend time with collaborators and theoreticians at coffee, etc. And collaborating physicists like to live in the Bay Area, although the housing costs are a detriment. Somehow, it just worked.
Blair, looking to the future, last question. Given all that you’ve worked on, your transition into the things you’re working on now, and in light of all the uncertainty in the field, what kind of advice would you give graduate students now in terms of the most exciting physics to work on, the most stable, in a career sense, fields to work in?
That is really a good question. People often say that students should follow their passion, but their scientific passion may not be tractable at this time, or it may not be that well founded scientifically or may not provide a productive path for a career. Then again, other parts of a student’s life may be more crucial than their research topic. They should be learning and growing. They should be happy to go to work every morning. And, as Feynman once observed, love is more important than physics. Putting that aside for the moment – from the perspective of a mentor and working particle physicist – the practical things you can provide to a graduate student are good experiences for their graduate school years. You should be jointly working on exciting scientific questions and solving experimental physics challenges. You should help them find a substantive role in the group where they can grow and learn. The question as to whether that’s the right thing for them to do as a career is quite hard to deal with. Most scientists are probably so close to their own work and so deeply committed to what they are doing now that it is hard for them to stand back and ask if this provides a good career path for someone who is much younger.
This is a challenge for big science in general, and especially in particle physics, since the career path is so constrained. If you are a graduate student starting to work on a major program, the timescale for doing the science may be 20 years or more. You may think the physics is exciting, but starting down that path makes it a commitment for much of a life, and you must hope there is a career path available. If the program is well funded, there will be opportunities down the road, but most graduate students starting with these experiments will find those next career steps to be very competitive. So, it is a conundrum for both students and advisors, and probably unavoidable given the direction of particle physics employment. If I were a student and could dream about what I wanted to do programmatically, my ideal would be to find a project something like a new BaBar. Now, of course, redoing BaBar physics at the same scale is not so interesting, so it must be something new, but what I mean is a project of about the same size and scale with a plethora of physics opportunities. It doesn’t have to be at an accelerator, and from what I can see, probably wouldn’t be… I said it was a dream.
What was the magic of BaBar at its inception that could be recreated? Maybe that’s the question.
Right. That maybe is a question at least, but I don’t know how to duplicate magic. BaBar hit a lot of sweet spots. We talked about some of them, but one of the sweet spots is that it wasn’t too big, and it could all be done at a scale both in time and in money that is career appropriate for physicists and that agencies could deal with. I mean, agencies knew what they were getting into from the beginning, and how much support it needed for how long. It took only a modest slice of their budgets to fully fund it. Now, given funding constraints, we can only have a few cutting-edge particle physics experiments in the world. It costs so much to run CERN that it basically absorbs the particle physics budgets of Europe and not insignificant budgets from other parts of the world, too. Much the same thing, I suppose, could be said for programs like the neutrino program at Fermilab. It absorbs a major part of the US particle physics budget, at least what can be put into facilities. If this is the physics you want to do, then you’ve got to pay for it, but it does mean that there are not a lot of questions that can be addressed in each decade or two. The field must guess right. What are the odds?
Another approach to this question might be to ask what I would do if I were a young guy and starting over, and there wouldn’t be a SLAC to come to.
I was very lucky, right? My timing was impeccable. There’s nothing that looks like that from an incoming student’s perspective today… When I was a 21-year-old Physics BA from Grinnell asking, “What’s exciting out there in particle physics?”, there was a really good answer, “Oh my goodness! Stanford is just starting up this new accelerator at a much higher energy than ever before and has all these beautiful experiments that people are about to begin.” There is nothing quite like that out there now, to my mind, where the timescale looks like a graduate student’s timescale, in the first instance, and with many questions that will be open and exciting during a later career path.
Yeah. It’s almost like simultaneously the frontiers were bigger and also within reach somehow.
Right. I think there’s something to that. In a way, we’ve seen that before in the history of physics, where the productivity of a particular approach loses steam after a long, productive run, and eventually the directions of research interest change because they must. Panofsky used to say that, when asked how long SLAC would last, he would answer, “Ten years, unless somebody has a good idea.” Well, for 60 years, several “somebodies” in a row had good ideas and SLAC persisted as a world-leading accelerator, but it eventually was unable to continue as a particle physics accelerator after 2008. As we discussed earlier, at the time there were reasons for that other than the lack of an exciting new particle physics project, because I think SuperB qualified as exciting. But it may have been the last such “good” idea at the right scale.
The broader issue for the field is that when you build a large particle physics accelerator, you build an institution and its infrastructure. These require resources to sustain that are somewhat independent of the research output, at least in the medium term. In particle physics, we’ve built infrastructure throughout the world that is quite specialized; places like SLAC, Fermilab, CERN, and KEK. These institutions comprise great people and lots of technical knowhow, but we’re running up against our ability to continue to do cutting-edge physics at these places, because we’ve done the easy stuff that the institution’s infrastructure was designed to do. We’ve picked all the low-hanging fruit and we don’t have a long enough ladder to get up to the top of the tree. There are still interesting scientific experiments that can be done by these institutions, but they are large and expensive, and it is a challenge to obtain sufficient resources to do them.
It’s happened before in science. Somebody either needs a new idea, or the field needs a new direction entirely. Small experiments that might produce exciting new data at the frontiers are hard to find. Many non-accelerator experiments that address fundamental physics questions are large and expensive in their own right and require long-term commitments and institutional support. The main lesson seems to be that all areas of experimental science at the frontiers have become sophisticated, complex, and expensive, and progress requires a sustained long-term commitment and very careful choices before settling on a particular approach.
Blair, it’s been a great pleasure spending this time with you. It’s been a lot of fun listening to all of your recollections. I’m so happy we were able to do this, so thank you so much.
My pleasure. Thanks very much. It was great fun.