This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Peter Lepage by David Zierler on January 19, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Peter Lepage, Tisch Family Distinguished University Professor of Physics at Cornell. He recounts his childhood in Montreal and his decision to pursue an undergraduate degree in physics at McGill. Lepage discusses his Master’s work at Cambridge University and his decision to do his thesis research in particle physics at Stanford. He describes the fundamental advances happening at SLAC during his graduate years and his work on bound states of electrons and muons under the direction of Stanley Brodsky. Lepage discusses his postdoctoral appointment at Cornell and his work in high-precision QED calculations in atoms, and he describes the foundational impact of Ken Wilson’s work on lattice QCD and the intellectual revolution of renormalization. He describes this period as his entrée into QCD research, and he emphasizes the beauty of Ithaca and the supportive culture of the Physics Department as his main reasons to accept a faculty position at Cornell. Lepage explains how and when computers became central to lattice QCD research and why effective field theory was an area of specialization that was broadly useful in other subfields. He describes the ongoing stubbornness of the Standard Model, and he discusses his tenure as chair of the department, then as Dean of the College of Arts and Sciences, and his work on PCAST in the Obama administration. Lepage explains his longstanding interest in physics pedagogy, and he discusses his current work on the numerical integration program called VEGAS. In the last part of the interview, Lepage emphasizes that the most fundamental advances in physics are in astrophysics and cosmology and that lattice QCD should be “kept alive” because it’s unclear where it is going until physics goes beyond the Standard Model.
Okay. This is David Zierler, oral historian for the American Institute of Physics. It is January 19th, 2021. I'm delighted to be here with Professor Peter Lepage. Peter, it's great to see you. Thank you so much for joining me.
I'm pleased to be here.
Okay, so to start, would you please tell me your current title and institutional affiliation?
I'm the Tisch Family Distinguished University Professor of Physics at Cornell, where I've been my entire professional career. In the Physics Department at Cornell University in Ithaca, New York.
And do you or the department have any particular connection to the Tisch family?
The Tisch family are generous supporters of Cornell. I know a couple of the Tisches from when I was dean, but I had nothing to do with the creation of the Tisch Family Professorship. And they had nothing to do with my being designated a Tisch Professor.
Peter, I want to ask a very present question before we go back to the beginning, and that's the way in which the pandemic has been sort of helpful for your productivity as a theorist and is maybe perhaps not helpful in the sense that all of the physical and social distancing really is a problem in terms of seeing people face-to-face and bouncing ideas off of your collaborators. I wonder if you could talk a little bit about how your scholarship and your research have fared over the past ten months?
Well, it's been a mixed bag, like you say. Informal contact with people, interaction with people, is an essential part of doing research and scholarship. And that's so much harder now. Spontaneity is impossible, you have to make an appointment if you want to ask someone a question. Chance encounters don't occur. On the other hand, you make more of an effort to connect with people through Zoom meetings, and so I've had a lot more interaction with some of my remote collaborators than I would have otherwise. I've also had serious chunks of time to work on backlogs, papers that I've been meaning to write for years. Enough of one's life is on hold that one has spaces for that kind of activity. So, that's been good. But mostly, I think it's very, very bad because of the friction it introduces into the interactions with other people and other researchers and students.
Peter, I'm curious how concerned you are about the problems, the challenges the pandemic is creating for the career prospects of graduate students?
I'm hoping that it's temporary. We went through a bad period in 2008, 2009, during that crash. It caused a lot of universities to stop hiring and it was scary. But the hiring came back and presumably it'll come back more quickly now, assuming that COVID really is under control by the end of this year. And if it isn't under control this year, then I start to worry more. University finances may not be in such great shape by the end of the year but should get better afterwards. I'm not pessimistic in the long run, though a little worried about people who are graduating this year and need to go out into the job market.
Peter, perhaps as a window onto your current research, I'm curious if there are any experiments that have been put on hold that have impacted the kinds of things that you've been after in recent years?
Experimental physics has been slowed down. There's one experiment at Fermilab which was measuring the magnetic moment of the muon to insanely high precision, which should have delivered data by now, but hasn't. That was something that several groups working in lattice QCD were tailoring their work plans around, but it has now stretched out. Which actually is a good thing, because it gives you more time to work on the theory. (both laugh) I think there's a complicated situation in particle physics right now which has nothing to do with COVID, but rather with the fact that the Standard Model refuses to die. A lot of the predictions for new physics have not come true yet, especially at CERN. It is causing focus to shift to astrophysics and cosmology, or to smaller experiments looking at neutrinos or searching for dark matter candidates. I am not sure how this will play out, but it might be the big story of the decade in particle physics.
Well, Peter, let's take it all the way back to the beginning. I'd like to start first with your parents. Tell me a little bit about them and where they're from.
I'm Canadian originally. Both my parents grew up during the Depression in Western Canada—my mother in Saskatoon, Saskatchewan, and my father, at least initially, in a tiny little town called Heisler in Alberta, which is up around Edmonton. Both of them went to college, but they were in the first generation of their families to go to college. My father's father was the station master in the train station in their tiny town. My mother's father was a night watchman in a movie theatre.
Where did they meet?
They met in Montreal after World War II. My father's French-Canadian family had moved to Alberta to take advantage of free land from the railroads. The railroads were trying to populate the west, and there were too many people in Quebec. They gave up the free land after one winter. It was just miserable—out in the wilderness and unbelievably cold. But my grandfather found a job as a station master and stayed there. My father still had lots of relatives in Quebec, in Montreal, and ended up there after the war. My mother’s family were Irish, having left Ireland for Canada to escape the violence there. My mother ended up in Montreal after going to graduate school in social work in New York, at Barnard, during the war. Her first job was in Montreal. And that’s where my parents met, in 1951.
And did they speak to each other in English or French?
English. My father grew up in Alberta. He was bilingual, but he spoke French with enough of an accent that French-Canadians, kids in particular, would disown him and treat him as though he wasn't a real French-Canadian. And of course, English kids up in Alberta treated him as though he wasn't really English either. (both laugh) I think this was a bit of a problem for him when he was young, since he himself wasn’t sure which group he belonged to. And of course, the tension between English and French speakers has been [present] in Canada for a long time. But he grew up more English than French because he was in Alberta. And my mother was Irish—her parents were Irish.
And where did you grow up? What town were you born in?
I was born in Montreal and grew up in a suburb of Montreal. I lived briefly in Windsor, Ontario, when I was very young. But mostly I lived in a suburb called Saint Bruno, just outside of Montreal. I lived there until I finished college.
And would you say growing up, was it mostly a middle-class existence for your family?
Probably upper middle class.
And where did you go to school? Did you go to public or private school?
They were public schools. In Quebec at that time, they had public schools, but they were English Catholic public schools or English Protestant public schools or French Catholic public schools. They were all public, and I went to the English Catholic public schools in my town when I was in elementary school, and then I had to bus to a nearby town for high school. Most of which was done in the afternoon and the evening because they didn't have enough high school capacity for all the kids at that time. There had been an explosion in the English-speaking population in the 1960s, and the school system was having a hard time keeping up. So, they were running shifts. For most of my high school, I was in the afternoon/evening shift, which was good, because in the morning there'd be no other kids on the neighborhood ice rinks. (both laugh)
Peter, when did you start to get interested in science? Was it as a young boy?
Probably because of my father. I think he wanted me to be an engineer. I think he would have loved to have been an engineer himself, but he wasn't—he was a salesman and eventually a business executive. He would get me things like chemistry kits and physics kits when I was a kid, and I became a tinkerer, taking things apart and rarely putting them back together, alas.
(laughs) Well, you're a theorist now, after all.
Yes, there's probably a reason for that. I actually knew I had to become a theorist when as an undergraduate I managed to electrocute one of my TAs by connecting the DC voltmeter to the 120-volt AC wall circuit. The meter wasn't working properly, so I called him over to help. He went to check the leads and then went flying across the room. (both laugh) It wasn't really my fault, because they had exactly the same plug for the two things and they were next to each other. So, it was a problem with the lab design. The TA wasn't too happy.
Looking back, would you say in high school you had a strong curriculum in math and science?
It wasn't bad. It was limited. I couldn't study calculus, for example, in high school, unlike many kids now. But I had good teachers and particularly strong teachers in chemistry and physics and math. The physics teacher was the most inspiring, which probably gave physics an edge when I went to college. In Canada at the time, when you went to college, you lived at home and commuted every day. You didn't live there. I ended up going to McGill. But before you even arrive, you have to pick your major. And...
So, it's more like the British system than the American system?
Much more like the British system, and so I had to choose right from the start, and I chose physics. I could have changed in the first year. There was a little wiggle room, but mostly I was doing physics and math in college from day one, and never looked back. So, by the time I started college, I was already on a physics path.
Peter, who were some of the prominent professors that stick out in your memory in the physics department from those days?
I can't remember the names. I didn't interact a lot with the faculty, because first of all it was a commuter school, so I was not hanging around out of hours. I remember more the subjects than the faculty—quantum mechanics and general relativity. Actually, the general relativity professor was good, because he was a nuclear physicist, and I think he was learning it as he went along. He did a great job, possibly because he communicated the excitement of learning it. But for me, the personal connection that mattered the most was through a summer job. With my father's help, I ended up working summers for a small aerospace research company, based in Montreal, that had been connected with McGill at one point. I had the same supervisor for the three years, a fellow named Paul Gough. He had a huge impact on my self-confidence, because he would give me hard problems and just expect me to do them. I would go away and actually do them. I can remember working on a project where they had a contract with NASA to study whether satellites could be oriented using the gravitational gradient—the fact that gravity further out is a little weaker than gravity closer in. That's a very small effect.
There are all sorts of other forces that come into play, like pressure from solar wind and light. But first I had to learn how to describe a rigid rotating three-dimensional object. Which is chapter four (?) in Goldstein's classical mechanics book. I hadn't taken classical mechanics yet. I hadn't studied that book. But I was told to check out chapter four, and I used it to code up the equations for a simulation program. It was only the following year when I was taking classical mechanics that I realized that I had already taught myself week six of the course. (both laugh) It was a very empowering experience, to be given real responsibility. To be given the opportunity to perform, and to be able to do it. It also created a lifelong interest in and involvement with computation and computational physics, because most of what I did for that company was write computer code to simulate things. I did that in the summers and very part time during the year.
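The gravity-gradient effect Lepage describes can be put in rough numbers. The sketch below is illustrative only: the 10 m satellite length and the roughly 400 km orbit are assumptions, not details from the interview. It shows why the effect is "very small" compared to gravity itself:

```python
# Rough size of the gravity-gradient effect: the difference in gravitational
# acceleration across a satellite in low Earth orbit, to first order in L/r.
GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2
r = 6.771e6     # orbital radius for ~400 km altitude (hypothetical), m
L = 10.0        # satellite length (hypothetical), m

g = GM / r**2            # acceleration at the satellite's center
dg = 2 * GM / r**3 * L   # first-order difference in g across the body

print(f"g = {g:.3f} m/s^2")
print(f"difference across satellite ~ {dg:.2e} m/s^2")
```

The difference comes out around a few parts in a million of g, which is why a simulation also has to track competing disturbances like solar-radiation pressure.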
Peter, did you ever think about entering industry? Or was pursuing a career in academic physics, was that really your goal from early on?
Somehow, I couldn't get enough of the academic stuff. I didn't have a plan when I started college, and even briefly considered becoming an engineer.
But as I went along, I just wanted more of what was going on in physics departments, and it didn't occur to me, for example, to make a career like the careers of the people in the little aerospace company. It was a very small company, by the way. I don't think there were ten people there. So, I could actually see what the engineers were like, and what they did. The person I worked with had a physics degree but had become an engineer. And I kind of liked his job, but a lot of the other jobs didn't look as appealing as learning about quantum mechanics and general relativity. I wanted jobs where I could do that kind of thing.
Besides the incident in the laboratory that may have pushed you away from experimental physics, what was it, or what classes were they, that attracted you to theoretical physics?
It was the theories. When I was young, I was fascinated by the fact that equations could predict things that I could observe in the real world—I can remember dropping a rock over the edge of a cliff and using my watch and gt²/2 to estimate how high the cliff was. This fascination intensified as I encountered more sophisticated versions of things like mechanics, with Lagrangians. I particularly loved quantum mechanics and general relativity. As far as I was concerned, those were the two coolest things in 20th century physics.
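The cliff estimate Lepage mentions follows from the free-fall relation h = gt²/2. A minimal sketch, with a hypothetical fall time (the actual timing is not given in the interview) and air resistance ignored:

```python
# Estimate a cliff's height from the time a dropped rock takes to fall,
# using h = g * t**2 / 2 (free fall, air resistance neglected).
g = 9.8   # gravitational acceleration, m/s^2
t = 3.0   # measured fall time, seconds (hypothetical)

height = g * t**2 / 2
print(f"Estimated cliff height: {height:.1f} m")  # prints: Estimated cliff height: 44.1 m
```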
And on the social side of things, being an undergraduate in the late 1960s and early 1970s, did the counter-cultural movement, did it find its way to McGill? Were there anti-war protests? Were there civil rights marches and things like that?
There was one major protest in my first year, but I wasn't involved. It caused a couple million dollars' worth of damage, which was so shocking to Canadians, including Canadian students, (laughs) that they stopped protesting.
(laughs) Too much.
Yeah, I think Canadians just aren't quite into civil disorder to the extent that Americans are. You could see it then in the States, and you thought that Americans were insane. They were burning their cities down, there were riots, there was the draft and a crazy war. American music and culture were certainly flooding into Canada, and there were Americans coming as well, draft dodgers. We didn't discuss the chaos particularly. It was there all the time and in the news all the time, but not something that was present in your day-to-day life, like it was for students in the States.
Did you have a senior thesis?
When you were starting to think about graduate schools, where were you looking? Did you specifically look beyond Canada?
Yes, and in fact I went to England, to Cambridge for a year. I applied to Canadian schools, a couple of American schools, and Oxford and Cambridge. I didn’t know what I was doing, and I didn't get much advice from anyone about where to apply. But I thought I would have a good chance at these schools, because I was doing well at McGill. And when I—
And were you applying mostly by reputation, by specific programs you wanted, or by professors you wanted to work with?
By reputation purely.
I had no idea what people did at these places. (laughs) I went to Cambridge because it seemed more exotic than the American options, and it was. It was not really all that useful to me academically. I didn't quite manage to get the hang of their very hands-off teaching style.
Now was this a terminal master’s program, or you could have stayed on for the PhD if you wanted?
When I applied, I thought I was on track for a PhD, but actually I was in a terminal master’s program that you had to get through in order to apply for the PhD program. Within a couple of months, it was clear that people like me rarely got into the PhD program. It was usually people who had been undergraduates at Cambridge or Oxford. And so, I wrote off the idea that I would stay there for a PhD.
Now this natural advantage that Cambridge or Oxford undergraduates had, would you say that was more cultural or because they had a better grounding in physics than you did?
It's probably cultural more than anything. I had a pretty good grounding in physics. Canada's undergraduate degree was a four-year program, but it was almost as focused as Britain’s. Britain’s degree was only three years.
So, by the time you were in your fourth year as a Canadian undergraduate, you were doing courses that in the United States were first year grad courses. I already had good introductions to quantum mechanics and general relativity. And a high-level E&M course from Jackson's book, and mathematical physics at the level of Morse and Feshbach. And so on. So, I was well-prepared, more than well-prepared. But I decided to apply again and look more closely at American universities. I remember deciding between MIT and Stanford and picking Stanford because its brochure had a picture of a palm tree. That seemed more exotic (both laugh) than MIT. And I had heard of Stanford’s experimental particle physics program by this time and was gravitating towards particle physics. This is partly because Cambridge put me off general relativity. They were too formal there for my taste.
Now when you say, "too formal in GR," what does that mean exactly?
Just more time on the mathematics, less time on the physics. The mix wasn't quite what I wanted. And maybe I wasn't well-enough prepared for it mathematically. I don't know. Or maybe it was just whoever was lecturing that year. But I had my first quantum field theory course at Cambridge, and quantum field theory became very interesting to me and—
Do you remember who taught you quantum field theory at Cambridge?
Not only do I remember, I still keep in touch with him. It was Ian Drummond.
Cambridge for various reasons is a place that I've gone back to again and again. I did my first sabbatical there. I spent six months there five years ago, and then six months there again two years ago, and I spent six weeks there in the 1990s. I've kept connections with people there, including Ian Drummond. My wife and I really like Cambridge. I didn't like their academic program particularly, but I liked being in Europe, being at an institution where a college courtyard called New Court was older than Canada.
Did you take the opportunity to travel at all while you were in England?
A little bit. Not in that first year as much as I should have. In subsequent visits there, it's been great as a base for going to Spain, Italy, France, or wherever you want to go. Which is part of the reason why we keep going back.
Peter, just as a point of reference, as a Canadian, was it more of a culture shock when you got to Cambridge or when you got to Stanford?
It was a bigger culture shock at Cambridge. The whole concept of the university and how it’s run is quite different. Having to put your gown on every night to go eat in a 600-year-old room at great long tables with people dishing out unbelievably overcooked Brussels sprouts (laughs). It was very, very different.
Besides the palm tree, were you attracted to SLAC particularly when you chose Stanford?
Yeah, I'd heard of SLAC and knew that something was happening there. I got there in '73 and I was about to be drawn into particle physics on false pretenses. Because pretty much from the time I got there until the time I left, there were major discoveries every few months, and a large number of them were happening at SLAC. Nobel prizes were being won left and right, and the whole picture of particle physics underwent a huge change.
A particle physics course that you would take in Cambridge, before I went to Stanford, was utterly different from the course you would have taken in my last year at Stanford. There was almost no overlap. So, I'm thinking, “Wow, this physics is amazing!” I didn't realize that it doesn't continue at this pace all the time. And in fact, it slowed down fairly dramatically right around the time I graduated. (both laugh) But it was a very intense and exciting time.
Now, as a graduate student, I'm curious from your vantage point, were you aware of the rift between the physics faculty and the SLAC faculty? Did that filter down to you?
You heard about it, but you didn't know much about it. It didn't affect your life much, but it was clear that they were two mostly unrelated groups.
And you spent most of your time in a department or at SLAC?
By my first summer there, I was spending all my time at SLAC. And it made sense because I was heading for particle physics. There were huge opportunities for a student in particle physics at SLAC. There wasn't nearly as much at Stanford. So, SLAC was obviously the place to be, and—
And who was your advisor? Who did your advisor end up being?
Stanley Brodsky, who is still around and still producing unbelievable numbers of papers. It was great fun to be a graduate student there then, because the graduate students were given desks in the hallway and by the mailboxes. So, you were out in the open where senior people would be walking by and arguing by the mailbox. You'd be there trying to tune them out often, because you were trying to work. But other times you were not tuning them out because you're eavesdropping on what they're talking about. (both laugh)
What was Stanley working on when you got to know him initially?
He worked on a variety of things. I needed a thesis advisor, and he had been teaching a particle physics course I was taking where he had mentioned bound state physics—the physics for example of an electron and a positron forming positronium. For me, one of the things that had been missing in quantum field theory was any discussion of bound states. After going through quantum mechanics where everything was about hydrogen atoms and bound states, I was puzzled by the complete absence of the topic in textbooks and (most) courses on quantum field theory. Stan’s course was the exception. So, I went and asked him if he had a project that had to do with bound states. He did. It turned out that he had a background in high precision QED calculations for bound states, which I hadn't known. He suggested a problem and approach to it. The problem had to do with bound states of electrons and muons. There was a recent calculation that he wanted me to reproduce using a different quantum field theory technology. I started reading about bound states and almost immediately decided that the technology he wanted me to use was not a good fit to this problem. But I found other ideas in the literature, from people like Don Yennie who would later convince me to go to Cornell. Don was Stan's thesis advisor, as it turned out. I started to develop my own versions of these strategies for handling bound states in quantum field theory.
Eventually I came up with a formalism that was better than other formalisms people were using. It wasn't as good as what would come later, but it was better than what was available then. So, I started applying my formalism to the calculation Stan had suggested, the one that someone else had already done and I was supposed to check. I got totally bogged down in the calculation. I had a mechanical way of setting the problem up and was filling notebooks up with formulas. These were expressions that would have to be evaluated numerically on a computer, because they involved complicated multidimensional integrals. It was going to take forever. But then one day I was reading a paper by Don Yennie where he described a different but somewhat related calculation. At one point he was comparing two different contributions and said it was obvious that one contribution was much smaller than the other. I was looking at the equation and wondering, “How did he know that?” Without actually doing the full calculation, he could tell that one piece was less important than the other. He gave an explanation for it in a sentence, and I can remember focusing on that sentence and trying to understand what it meant. I then had one of those lightbulb moments that you get a half dozen times in a career. I suddenly understood what he had done and how to apply it to my problem. Almost immediately I realized that 99% of what was in my notebook was non-leading and unimportant. There were just a handful of terms that I needed to focus on in order to reproduce the other people’s result, and I could do the integrals by hand without a computer. Within a week, I was done. And it turned out that the other people’s result was wrong.
Peter, for graduate students, you know, there's this hyper-focus on this little area of research that you're involved in. I wonder if this moment helped you connect what you were initially interested in with some of the more fundamental questions that were going on in the field at that time?
All the time you're trying to learn more, taking more advanced field theory courses, watching what's happening in particle physics. I wasn’t working on the really red-hot stuff, QCD, and the emerging Standard Model. I was focused on what was already a slightly old-fashioned problem, which was how to do bound state calculations in QED. But in order to do them, I had to manipulate quantum field theory. I had to use the tools of quantum field theory in actual calculations. My problems were very focused and on the sidelines of the main thrust of the times, but they taught me things like how to renormalize something and get a finite answer, how to design a new formalism, how to do calculations and compare with experiment, how to calculate things numerically.
This laid the foundation for my later career in ways that were impossible to predict back then. For example, I needed to do multidimensional integrals, and Stan gave me a computer code from CERN to do them numerically, using a Monte Carlo method. I had to learn how to use the tool, but I didn’t just use it, I tried to figure out how it worked, because I was in student mode. I remember reading a long article on a plane flight about how it worked and getting to a point in the article where I suddenly understood it. I was excited and closed the article to write out the details for myself, before going back to check that I had it right.
At some point I realized that, actually, what I had figured out was different from what was in the paper. And it was better. I came back, coded it up, and it was way better. It led to a computer program called VEGAS which is now widely used. Closing the book and working it out yourself is a habit that I suspect most scientists have. You don’t read an article from beginning to end. Instead you read just enough to allow you to guess what the author is up to. You then try to predict what will be on the next page and on page five, glancing at the equations and figures to verify that you got it right. This is very efficient if you succeed, because you don’t have to read the whole article. More importantly it is how you internalize understanding, by integrating it with your past experience. So, instead of learning only how to use a code from CERN you end up understanding Monte Carlo integration, which turns out to be really useful for event generators in particle physics, so lots of people end up using your code. Experience with Monte Carlo methods would also serve me well years later when I started work on lattice QCD. Doing QED calculations for my thesis, as well as the chores Stan would assign me (for example, calculating QED backgrounds for experimental groups), was how I exercised my tools as theorist, deepening my understanding of them and building out my tool kit.
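Plain Monte Carlo integration, the starting point that VEGAS improves on, can be sketched in a few lines. This is not Lepage's VEGAS algorithm itself, only the basic uniform-sampling estimator; VEGAS adds adaptive importance sampling on top of it, remapping each axis so samples concentrate where the integrand is largest:

```python
import random

def mc_integrate(f, dim, n=100_000, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit hypercube.

    The estimate is just the average of f over n uniformly random points
    (the hypercube's volume is 1). The error falls off like 1/sqrt(n),
    independent of dim, which is what makes Monte Carlo attractive for
    the multidimensional integrals that arise in QED bound-state work.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Example: integrate f(x, y) = x * y over [0, 1]^2; the exact answer is 1/4.
est = mc_integrate(lambda x: x[0] * x[1], dim=2)
print(f"estimate = {est:.4f} (exact 0.25)")
```

Lepage's modern implementation is distributed as the Python `vegas` package, whose adaptive axis remapping is what distinguishes it from the uniform sampling shown here.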
Peter, when did you realize that you had enough to defend for your dissertation?
Once I figured out how to do bound state calculations, I started looking at other bound state calculations that people had done, in addition to the one Stan had assigned. My first published paper was on ortho-positronium decay. Bill Caswell, a postdoc at SLAC, noticed that there was “tension” between different experiments and between theory and experiment. He and I, together with fellow student Jonathan Sapirstein, redid the theory, coming up with the first correct calculation. This paper was followed by my paper on numerical integration, and then another longer paper on the bound state problem Stan had assigned (which again corrected errors in the literature). So, by the middle of my fourth year, I had accumulated a fair amount of work, but it hadn’t occurred to me that it was enough to graduate. Stan, in December of that year, wondered if I shouldn't apply for a named postdoctoral position at Chicago, and that was the first time that it occurred to me to worry about when I should graduate. (both laugh) So, I started thinking about it then. I decided not to do it that year, because I was having too much fun. I started writing my thesis during my fourth summer, while also working on four more papers with Caswell. And—
Peter, I'll test your memory. Who was on your thesis committee?
Can't remember. Oh, probably Stan... Maybe Stan, Fred Gilman... I don't know if Sid Drell was. You mentioned interviewing Persis Drell. I interacted with her dad because he was quite active at SLAC then and very active with the grad students. He would have the grad students come together and give talks to each other, where he was the only faculty member present, by design. He would make a point of asking stupid questions just to show us that it was okay to ask stupid questions. (both laugh) I think he did this in the real seminars, too. I got the impression that he was consciously modeling correct behavior, which is: never be afraid to ask when you don't understand something in a seminar, and no question is too stupid. It’s true. As I tell students, when you ask a stupid question, the person you're asking always assumes that it's not stupid. They will assume that there's more to it than you're asking, even if it's really stupid, and that's usually enough to provide cover. (both laugh) They'll answer your question, but then they'll say more, and you'll nod at the more as though that's what you were really asking. But—
Peter, how well-wrapped up were your post-graduate opportunities by the time you defended? Did you know where you were going next at that point?
I defended in March or thereabouts, and the postdoc decisions were in January. So, I'm pretty sure I knew where I was going, which was Cornell. I had visited the East Coast in January because Stony Brook was willing to pay for the airfare. No one else flew postdoc candidates out, but Stony Brook was trying to improve the quality of their grad students by impressing them with a trip. So, I took the opportunity to visit Princeton and Cornell at the same time. I remember going to Cornell: it was an all-out blizzard. (laughs)
Welcome to Ithaca. (laughs)
Well, I grew up in Montreal, and I'd been living in California, so this was like coming home.
It was eight inches of snow, maybe ten—students were out pushing cars out of ditches for fun. That reminded me a lot of home, and probably helped win me over. Also, Ithaca is very pretty in the snow; I saw it at its absolute prettiest, with a thick layer of fresh snow covering everything.
Why did Cornell ultimately win out?
I enjoyed visiting there. They were very friendly. Don Yennie was particularly friendly, as was his wife Lois whom I met when they invited me over to their house for dinner. Don was a master of high-precision QED calculations in atoms, which was what I was doing. And as I said, one of the pivotal insights I needed to do my thesis actually came from a sentence in one of his papers. So, I had a high regard for him as a physicist. There were other well-known physicists in the department, like Kurt Gottfried who had written my favorite quantum mechanics text. But what made Cornell particularly feasible was that Ken Wilson was at Cornell. Everyone knew that Ken was an over-the-top genius at this point, even though his genius had been expressed more in condensed matter physics up to that moment. He had come to QCD kind of late but had already had an impact by inventing lattice QCD. I was doing QED but had made a conscious decision to learn more about QCD as my next move, which I started before I left Stanford, in the months after I defended my thesis. I'd been reading papers about how it works, and— oh, there are two pileated woodpeckers right outside my window. You don't see those very often. Anyway, sorry. (laughs) The—
What was so exciting about QCD at this point where you felt compelled to learn more?
It had just been invented and it was the first real theory to explain the strong interaction.
I think I was at the first meeting where the name QCD was used. Was it Gell-Mann? I forget who actually coined the name. Being at SLAC, I was very aware of the events that made QCD go from having no name and no one paying attention to it, to being, in about a year, the standard explanation for the strong interactions. And what was nice about QCD was that it was a strong interaction theory that also had a set of applications that were perturbative. So, you could use the tools I had been learning about in QED to do perturbation theory in QCD. I was trying to find a way to combine QCD with my knowledge about how to do bound states in quantum field theory. Most early perturbative QCD analyses were designed to get rid of the bound state physics. They would focus on cross-sections where the fact that bound states (protons, neutrons, etc.) exist fell out of the problem, so you didn't have to analyze the bound states. I wanted a problem where the bound state physics didn't drop out. I thought the high-momentum behavior of the pion or proton form factor might be such a problem. The form factor describes what happens when you hit a proton with a photon but in such a way that the proton stays together. People had applied QCD to processes where a proton is whacked by a photon but also smashed to smithereens—deep inelastic scattering. No one had tried studying the case where the proton stays together (elastic scattering). Stan Brodsky had earlier developed an explanation for the phenomenology of form factors, but it wasn't based on QCD. I didn't really know how to approach form factors in QCD, but I thought there might be something like what people were doing for deep inelastic scattering.
Peter, looking back on the postdoc, to what extent did you draw on the expertise that you developed for your thesis research, and to what extent was this really a brave new world for you and you were learning new physics as if you were a graduate student all over again?
I went to the postdoc, and I was trying to move into QCD. I was living off of my QED expertise, still generating papers. That kept me busy. But I was trying to learn QCD and trying to figure out how to apply QCD ideas to my new problem. But I couldn’t even begin to apply them without using my expertise on bound state physics. The key to a QCD analysis back then was to divide it into two parts. One part involved short-distance physics and could be calculated using perturbation theory. The other part involved long-distance physics, like bound state physics, and was non-perturbative. The non-perturbative part was parameterized and measured experimentally. You put the parts together and you had a prediction with maybe one or two parameters that you had to fit from experiment. I got really stuck trying to do this for form factors for several months, but I remember when I figured it out, because it was probably the most intense “lightbulb” moment I've ever had. I was attending a seminar about birds at Cornell's Lab of Ornithology that my wife wanted to go to. She's more of a bird watcher than I am but had brought me along. So, I was there in the middle of this pretty boring talk about sparrows or something (both laugh), thinking about my QCD problem. I was trying to relate my problem to the QCD work that people had been doing for the last couple of years. I suddenly saw the connection and realized what I needed to do. But it was 30 minutes into the talk, I was stuck in the middle of the audience, and I still had 30 minutes to go. I had a pen with me but no paper other than the program for the talk.
The program had a very narrow margin of white. (laugh) I started scribbling equations in that margin, trying to flesh out the idea before it went away. I came home that night and worked until three in the morning. I had pretty much worked out the entire formalism by the end of the next day. It was an adaptation of mainstream work on QCD, but in a very different context. It was very exciting because I was putting the pieces together and getting equations that were quite different from the ones that were in the literature, because they were for a different thing and a different problem, but they had a very close and obvious relationship to those equations. So, I knew they were right.
I was working on this problem with Stan Brodsky. I had not collaborated much with him while he was my advisor, but we had started talking about form factors during my last couple of weeks at SLAC. Now I had a formalism for form factors that brought his earlier work into QCD. I called him on the phone the day after working out the formalism and he had a million ideas for applications. It was really fantastic, because he had been thinking about this type of problem for several years, but now we had a rigorous formalism that we could use to underpin it. This work powered the rest of my postdoc. The QED stuff kept going, but QCD took center stage. To get back to your question, this effort resulted from a conscious decision to try and get into the QCD game, but to try and get into it from a direction where I had some advantage, from my QED work. I'm reminded of Hans Bethe commenting once about how so many of his friends, when they won the Nobel prize, would cease to be productive because they didn’t want to work on any problem that wasn't worthy of a Nobel laureate. He said he always wanted to work on problems over which he had an unfair advantage. (both laugh) Hans had retired from Cornell just before I arrived but was still very active. He was one of the giants of twentieth century physics, having contributed to almost every branch of theoretical physics. It was very exciting to be a postdoc at Cornell having lunch with the theory group and suddenly finding Hans Bethe sitting next to you, telling jokes. (laughs) He was another reason why it was okay to go to Cornell, him and Wilson. You didn't have to explain why you chose to go to Cornell.
Now where was renormalization at this point? When you initially interacted with Ken Wilson?
Ken was inventing his own way of thinking about renormalization and everything else. I was a postdoc there for two years, and then became a faculty member, but I had a hard time understanding Ken at first. I remember early on as a faculty member being on qualifying exams for graduate students with him. Ken would ask questions about quantum mechanics, and I wouldn't have a clue what he was talking about. And I'm one of the examiners! I'd sit there and watch the poor graduate student struggle to figure out what Ken was asking about. I realized three or four years later, that all these ideas about quantum mechanics were from papers he had written in the 60s, when he was first coming to grips with renormalization as it applied to particle physics, condensed matter physics, and nuclear physics—all at the same time. My first interaction with him was giving my job talk, about bound states. He had the habit of falling asleep in seminars, so I wasn't totally sure he was paying attention. But he popped up at one point and said something like, “So the reason you're doing this is because the perturbation theory wouldn’t converge for the potential otherwise?” I had never actually thought about it that way, but he was exactly correct.
He had given me a new perspective on my problem, one which was featured in every subsequent talk I gave on the subject. But I still couldn’t understand his ideas about renormalization theory back then, and it worried me. Renormalization as I had learned it was about hiding the infinities that arise in perturbation theory for QED and other field theories. It was a highly technical subject that occupied 60 pages of your textbook which were not taught in courses. And it seemed miraculous that it worked for QED. I knew how to use it from my bound state calculations in QED and also from my first big QCD calculation, with Paul Mackenzie, my first graduate student. Renormalizability had emerged in the 1970s as an essential requirement for physical field theories. But I didn’t really understand it beyond a technical level. And Ken’s ideas didn’t seem to have anything to do with hiding infinities.
And as you say, not just a different subject, but a different way of thinking.
Totally different way of looking at even the standard problems I knew, and looking at huge classes of problems and things for which I had no background. I didn't have much experience with condensed matter physics, which is where Ken was very focused. I got to Cornell too late to participate in the seminars and courses where he and Michael Fisher and others were inventing the modern outlook in real time. I had to find my own path into the subject, building on my existing toolkit. Two insights got me there. One came from perturbative QCD. The running coupling constant and anomalous dimensions in QCD obviously had something to do with renormalization but the deep connection was initially obscure to me. Working with QCD and teaching it got me thinking about that connection, and at some point, I realized that these constructs arose trivially if you imagined doing QCD calculations with the ultraviolet cutoff equal to the largest energy scale in the problem under study. Conventionally one introduced an ultraviolet cutoff to regulate the infinities, and then one took the ultraviolet cutoff to infinity at the end of a calculation (after you had hidden the infinities). But I was making physical sense of the theory with a finite cutoff.
The second insight came from bound state QED. My old collaborator, Bill Caswell, and I would get together from time to time and reminisce about problems we didn't do back when we were writing papers together at SLAC. We started to map out one of these calculations using the formalism we had developed back then. We were adding and subtracting contributions to allow us to simplify the integrals by isolating loop contributions from relativistic momenta. At some point, in another very abrupt lightbulb moment, I suddenly realized that our pattern of subtractions and rearrangements corresponded to the renormalization of a field theory. But the field theory wasn’t QED. It was a non-relativistic version of QED, which we eventually called NRQED. That theory was not renormalizable by conventional standards, but we were showing that it could be made exactly equivalent to QED provided its ultraviolet cutoff was kept finite and of order the electron’s mass. I remember calling up Caswell and saying, “These subtractions are renormalizations! Actually, it's a field theory. It's a different field theory and we're making sense of this other field theory.”
We were understanding how a quantum field theory with a finite cutoff could be made arbitrarily precise, and what the true function of the cutoff was, which was to exclude physics details (from relativity) that were irrelevant to what we were calculating. NRQED is an example of what we now call an “effective field theory,” but the full implications of that concept weren’t widely understood or discussed back then. So, I had a lot of fun giving seminars about it. My favorite was called “What is Renormalization?” which I gave first at a summer school for graduate students in Boulder. I had several students pointing to textbooks and saying my theory was non-renormalizable and didn’t make sense — it was great fun arguing with them. The second time I gave this seminar was at Michigan and, by complete coincidence, Hans Bethe was there and in the audience. Hans’ famous but somewhat controversial 1940s calculation of the Lamb Shift in hydrogen is very elegantly derived from NRQED and easily extended to get the full result. So I added that to my talk. I think this kind of practical approach appealed to Hans. In any case this was me coming to grips finally with renormalization and in a way that, in retrospect, turned out to be very Wilsonian. (both laugh) It fundamentally changed my approach to field theories. Renormalizability isn’t a miraculous feature of select field theories but rather the normal consequence of formulating a low-energy approximation to arbitrary high-energy dynamics. The important question about a field theory isn’t whether or not it is renormalizable, but how renormalizable it is—it’s a question of degree. This understanding would soon be reinforced by my experience with lattice field theories, which are very usefully thought of as effective field theories.
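The running coupling mentioned earlier is standard textbook QCD: at one loop, alpha_s(Q) = 12π / ((33 − 2n_f) ln(Q²/Λ²)), which shrinks as the scale Q grows (asymptotic freedom). A quick illustrative sketch, with the value of Λ a rough placeholder rather than a fitted number:

```python
import math

def alpha_s(Q, Lam=0.2, nf=4):
    """One-loop QCD running coupling (standard textbook formula):
    alpha_s(Q) = 12*pi / ((33 - 2*nf) * ln(Q^2 / Lambda^2)),
    with Q and Lambda in GeV.  Only meaningful for Q >> Lambda."""
    return 12.0 * math.pi / ((33 - 2 * nf) * math.log(Q**2 / Lam**2))

# Asymptotic freedom: the effective coupling weakens as the scale rises,
# so choosing the cutoff equal to the largest scale in the problem means
# expanding in the smallest available coupling.
for Q in (2.0, 10.0, 100.0):
    print(f"alpha_s({Q:>5} GeV) = {alpha_s(Q):.3f}")
```

The printed values fall steadily with Q, which is the point Lepage is making: with the ultraviolet cutoff set to the largest energy scale in the problem, the running coupling and anomalous dimensions appear naturally rather than as technical constructs.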
Peter, did you feel as a postdoc that you were part of a broader QCD community beyond Cornell, or were you rather parochial in your research, in your collaborations at that point?
I came to Cornell, and I was very limited in my collaborations because I was still focusing on QED.
When I started work on QCD form factors, that opened up. I had joined the mainstream. So, now I was getting invited to speak at conferences about my work with Stan. I remember going to a meeting at Caltech, and having Feynman invite Stan and me to lunch so he could tell us that we were totally wrong, and (laughs) having a delightful argument with him. He wasn't right. He had a model for how form factors worked. It wasn't our model, and I understood his model and how it related to ours. He only understood his model and kept insisting on it over lunch. But he also told colorful anecdotes about being at Cornell (laughs) and so it was a very memorable lunch. Unfortunately, I didn't keep the napkin he kept writing on—a big mistake.
Now, was it a two-year postdoc initially? Was that the initial appointment?
Yeah, that's all that people did then.
Right. So, when you were coming up on the end of that second year, what were you thinking at that point?
That I had to get a job.
Right. (laughs) Were you actively on the job market? Were you getting recruitment offers?
I'll tell you a little bit of context. There's an interesting paper by Sol Gruner and collaborators, that shows that the size of the physics community grew exponentially from at least the Civil War in the US up until 1972, when it abruptly stopped growing. In 1972, the supply of physicists finally caught up to the demand, and things like PhD production, which had been growing exponentially, doubling every 15–18 years or so, suddenly went flat. What also went flat was the job market, and so when you became a graduate student in particle physics, your advisor was obliged to tell you that you probably were not going to get a job in academia. So, it wasn't clear to me as a student that I would ever get any jobs anywhere at any point, but I decided that this was fun and I would ride it as far as I could. I managed to land a job after Stanford, despite working on a fairly unfashionable topic, thanks I assume to strong recommendations from Stan. But then at Cornell doing the QCD work, which was fashionable, got me invited to conferences and to places like Princeton and...
And Peter, to the extent that physics is not immune from trends, in terms of the job market, did you feel like this was, in terms of your career, a very good thing to work on at the right time?
Oh, I love QCD. I really liked what people had done, and I did not want to do high precision QED forever, because it's too hard—you make even the smallest mistake, and your paper's not worth anything, because someone else is going to measure it or calculate it correctly and you won't get any credit at all. It was too nerve-wracking, and I was eager to join more of the mainstream. It wasn't so much about getting a job. It was more about being able to understand what people were doing, and what they were talking about in the weekly seminars or in their preprints that you would see on the preprint shelf every Friday. Having your own QCD project, figuring out your own formalism for a corner of QCD, you're learning a lot of technique and concepts that make you much more confident about your grasp of the big picture. It seemed to work for jobs. So, I was getting job interviews. I got good offers. Then, or at some point in my life, I've probably had offers or at least inquiries from most of the departments that are more highly ranked than Cornell in physics, but I really liked Cornell.
And you mean that holistically? You liked Ithaca, you liked the people. It was the whole package for you.
Yeah, Ithaca is very beautiful in ways that appealed to me. But also, the university was very friendly, and the department was very friendly. It had very little backbiting. It was very supportive. I didn't realize until years later that that was not necessarily true of other departments.
We were all on one hallway, everyone’s door was open, and there were postdocs and graduate students with offices across the hall. You could hang out in the coffee room. You could walk into Hans Bethe's office and tell him something cool that you figured out. Everyone would have lunch together every day, and the whole department had lunch together every Monday. So, it was a friendly supportive environment and still is.
This is not like many other top-flight physics departments. This is a fairly unique kind of place.
It’s maybe because we're so isolated—you can't get away from your colleagues. (both laugh) You go to the grocery store and they're there, you go to your son's little league baseball and they're there. They're everywhere. But I think Cornell as a whole is unusually collegial and it's an advantage of working here, and I appreciated it and still do.
Peter, did you have any opportunities to do undergraduate teaching before you joined the faculty proper?
Just in TA-ing, office hours as a TA. I did some classroom problem sessions, but not much.
I did like office hours. As an undergraduate, I had sometimes tutored people and enjoyed it a lot. I enjoyed explaining things to people. Well, actually, not that. I enjoyed helping them explain things to themselves—I understood at an early age that that's how you teach people. But I had had no real experience of teaching before I joined the faculty. I loved it though, from the first minute. Back then they started you out at Cornell as part of a team, teaching big intro courses, with someone else handling the lecturing part. You would run recitation sections with smaller groups of students, which was an easy way to transition in. In other semesters they gave me nice courses like quantum mechanics, which were very easy to be enthusiastic about.
Peter, when did computers start to become really effective for studying lattice QCD?
That’s an excellent question. They were not even remotely adequate when lattice QCD was invented by Ken Wilson. Lattice QCD was Ken’s entrée into QCD; he felt he was being left out and he wanted to break in somehow, in the end building on his experience with spin models in condensed matter physics. He did not use computers initially, but very quickly was trying to use them. But the computers were quite inadequate. A mixture of inadequate algorithms and inadequate computers held lattice QCD back for about 25 years. Computers were doubling in speed every 18 months all through that period. And lattice people were very much involved in that development. But it wasn't enough to overcome the major obstacle, which was dealing with full QCD. Back then most people were using an approximation to real QCD called “quenched QCD.” It was impossible to predict how much error was introduced by this approximation in most applications, but it hugely reduced the computing costs. The uncontrolled errors, however, meant that lattice QCD was of limited use to people outside the subfield—to experimenters, for example. We had to unquench somehow. Around 2000, a bunch of us suddenly realized that there was a very fast way to do this, to get rid of the quenching. It involved a particular way of putting the quarks on the lattice, of discretizing the theory. There had long been a way of doing this, called “staggered quarks,” that was super fast but also very inaccurate. But right around 2000, some work that Doug Toussaint and Kostas Orginos and their collaborators were doing, and work that my collaborators and I were doing, collided, and we collectively realized that you could actually dramatically improve the precision of the staggered quark discretization so it became a viable way of discretizing quark actions. It was maybe 1000 times faster than any alternative.
That leap was equivalent to 15 years of doubling every 18 months. But you got that 1000 all at once by using staggered quarks. Suddenly you were able to do QCD simulations that really had to agree with experiment, and if they didn't, then QCD was wrong. QCD was invented around 1975. Lattice QCD was invented six months later. But it was really just in the first years of the 2000s that we could expect results from lattice QCD that were accurate to a few percent. It was a very exciting moment. I remember during that period calling up people whom I knew or had worked with before and inviting them to come to Cornell to develop a physics plan to take advantage of this moment. These were people like Christine Davies and Junko Shigemitsu, and Paul Mackenzie and Andreas Kronfeld. We didn't have any money for a meeting, but people found ways to get to Cornell, or wherever we were meeting, for informal workshops. We as a field had been trying for 25 years to do real lattice QCD simulations and now it was finally working.
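The equivalence between a 1000× speedup and 15 years of hardware doubling is quick arithmetic: solving 2^(t/1.5) = 1000 gives t = 1.5 · log₂(1000):

```python
import math

# If computer speed doubles every 18 months (1.5 years), a one-time
# 1000x algorithmic speedup is worth t years of hardware progress,
# where 2**(t / 1.5) = 1000:
years = 1.5 * math.log2(1000)
print(round(years, 1))   # about 15 years, matching the figure quoted
```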
So, what do we do now? What's the program? We decided very early on to focus on areas in experimental physics that needed lattice QCD. The area that really needed it was obviously heavy-quark physics, B physics. Cornell’s experimenters had been at the center of B physics for several years, so I knew a lot about it. Given lattice QCD’s lackluster performance over the previous 25 years, one obvious challenge was going to be convincing people that we knew what we were talking about when we claimed few-percent precision for lattice simulations. I remember one morning, during one of these informal workshops at Cornell, worrying with my colleagues about how to make the case. We sat with the Particle Data Book looking for things that had already been measured accurately, say to 1%, that we could calculate with our simulations to the same precision. Success with these would validate our claims for other calculations. We found very little that was accurate enough. The problem was that experimenters had not bothered to push most of their measurements to the 1% level because there had been no theory available that could match that level of precision—so what was the point of further precision? I had to leave my colleagues at lunch time to attend a lab meeting at Maury Tigner’s house. Maury was our lab director then and was using a series of meetings at his house to explore options for the lab’s future. Cornell had dominated experimental B physics in the 1990s but now had overwhelming competition from new facilities at SLAC and KEK.
So, we needed a different future. The session that afternoon was aimed at discussing the possibility of lowering the energy of our facility to make it into a high-precision c-quark factory. The experimenters in the working group pointed out that such a facility would allow us to revisit many old measurements of D mesons and charmonium mesons but with far greater precision, perhaps down to a few percent. They asked me, “Would anyone be interested in that?” I replied, “Well, just this morning I was talking with a bunch of colleagues about how desperately we need high-precision measurements for things like D physics. These would have a big impact on the development of lattice QCD and B physics because we could use them to establish the precision of lattice methods so that you could trust them for b quarks.”
The discussion that afternoon led ultimately to the CLEO-c proposal and the next decade of physics at the lab. And lattice QCD was an integral part of the proposal, which was probably a first for lattice QCD. I remember giving talks to the NSF and the DOE and saying, “Here are all the things we're going to calculate with lattice QCD. They're all related to the CKM matrix and these other quantities that you care about for new physics. And we're going to calculate them all to 2% or better.” And there were ten processes there, maybe 15, none of which had been measured or calculated to better than 15 or 20%, if at all. I really enjoyed writing papers ten years later showing that lattice QCD had delivered what it had promised, as had CLEO-c and the B factories. We had all done what we said we would. It was extremely energizing to be so strongly-coupled to an experimental effort. We'd get invited to their meetings, they'd get invited to our lattice meetings, and that back and forth between experimenters and theorists in lattice QCD had not really happened before. In the beginning most experimenters treated lattice QCD as one of many approximate models theorists used. By the end of this period, everyone appreciated that lattice QCD delivered the real answer. By then lattice QCD was affecting what experimenters were doing and planning. Theory and experiment were (and are) interacting in a very creative way. And it was great fun!
Peter, kind of a broad question, and you touched on it a little bit. As QCD was maturing, you were definitely part of an effort to bring what you were learning to other subfields in physics. Nuclear physics, atomic physics, even condensed matter. I'd like to ask, sort of chronologically, what were the subfields that you saw as immediately needing the QCD approach or expertise?
The piece of expertise that looked to be most useful in other places was effective field theory. This was the renormalization technique that allowed us to develop a non-relativistic version of QED (and later QCD), and that completely changed my understanding of renormalization and quantum field theory. From my (mostly undergraduate) teaching I could see that all sorts of things in many branches of classical physics could be better understood within that framework—for example, the Abraham-Lorentz formula for the radiation reaction force in electromagnetism. It is a technique for isolating physics at different length scales so that the different scales can be treated differently, more simply. It looked very powerful to me, and I felt that there must be useful applications in other areas. I didn't have enough time to go too deeply into other areas, and never did. Condensed matter physics didn't need me. They had Wilson and lots of other people who had their own way of using these same ideas. I became somewhat interested in nuclear physics, however, because I felt that the semi-phenomenological nuclear potential models that people used could almost certainly be put on a rigorous foundation using effective field theory. I made a bit of an effort with one graduate student, but it got bogged down, in part because we paid too much attention to a short paper by Steven Weinberg, who was dabbling in the subject. A much more serious effort came from the University of Washington where David Kaplan, Martin Savage, and others were starting to play with that same idea. I got invited to spend a summer at Washington, where I was exposed to some of this work by sharing an office with Martin.
So, I started thinking about it again. Towards the end of my stay, I received an invitation to give a week of lectures in Brazil at a summer (their summer) school, and I needed a topic. It suddenly occurred to me that I could do renormalization theory not just with quantum field theory but also with the Schrödinger equation, and that one of the nice applications might be nuclear physics. I ended up developing a set of lectures which I called "How to Renormalize the Schrödinger Equation" as a way of illustrating effective field theory, but without a field theory. I developed the ideas with a toy model, with computational examples that people could experiment with on their own. But then the last section was applying those ideas to proton-proton scattering at low energies, which is a nuclear potential problem. Giving the lectures in Brazil was tremendous fun in part because I was figuring things out just days (or a day) before I presented them. (both laugh)
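The flavor of "renormalizing the Schrödinger equation" can be suggested with a standard toy example (a reconstruction in that spirit, not Lepage's actual model or code): put a 1D attractive delta-function potential on a grid, treat the grid spacing h as an ultraviolet cutoff, and at each h tune the coupling g so that one low-energy observable, the binding energy, stays fixed. The cutoff-dependent bare coupling g(h) is then the renormalized coupling:

```python
import numpy as np

def ground_energy(g, h, L=10.0):
    """Ground-state energy of H = -0.5 d^2/dx^2 - g*delta(x) discretized
    on a grid of spacing h covering [-L, L] (units with hbar = m = 1).
    The grid spacing h plays the role of an ultraviolet cutoff; the
    delta function becomes a strength -g/h at the central site."""
    n = int(round(2 * L / h)) + 1
    diag = np.full(n, 1.0 / h**2)          # kinetic term: 2 * (0.5 / h^2)
    diag[n // 2] -= g / h                  # lattice delta function at x = 0
    off = np.full(n - 1, -0.5 / h**2)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

def tune_coupling(h, e_target=-0.5):
    """Adjust g by bisection until the binding energy equals e_target.
    This is renormalization in miniature: the bare coupling g depends
    on the cutoff h, while the low-energy observable is held fixed."""
    lo, hi = 0.1, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ground_energy(mid, h) > e_target:   # well too shallow: raise g
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the continuum the exact answer is E = −g²/2, so the target E = −0.5 corresponds to g = 1; the tuned g(h) comes out near 1 but drifts slightly with h, which is the point: the bare coupling runs with the cutoff while the physics does not change.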
Peter, when did you start working significantly on hadrons that contained heavy quarks?
I eased into it slowly over time, starting with work in the mid 1980s, with graduate students Luo Ning and Beth Thacker. We were adapting ideas from NRQED to lattice QCD in order to study mesons made of a heavy quark and anti-quark, like the J/psi meson. This evolved into a larger effort, with colleagues like Christine Davies and Junko Shigemitsu, in the 1990s. The most active area of research in heavy-quark physics then concerned mesons like the D or B that had only one heavy quark or anti-quark. We weren’t working on that, but I was following the theoretical developments to some extent. I was helped in this when the experimenters at Cornell asked me to give an impromptu mini-course on heavy-quark effective theory, which had recently been developed. I had some understanding of the topic from my own work in non-relativistic QCD, but I hadn't really gotten into the specifics of B physics until the experimenters asked me to give these lectures. I told them, “I'm happy to give as many seminars as you want, but you need to tell me what you want explained. Tell me which papers, and I will explain them.” So, they did. It was a form of extreme lecturing because they would select papers or topics for the next meeting, and I would have to figure them out and how to package them in time for a lecture a week later. I really enjoyed the back and forth with my experimental colleagues. It prepared me well for when, in the early 2000s, we decided that B and D physics was where the most potent applications of lattice QCD would be initially. Heavy quark physics is where experimenters first started to take lattice QCD seriously. It took five years, six years, for them to fully absorb the fact that that's what they were doing, but by the end of the decade, lattice QCD was fully established. It was helping us search for physics beyond the Standard Model. Which, of course, we didn't find, alas.
Yeah, that actually gets me to another really broad question and that is, to sort of historicize the tenacity of the Standard Model. As we began discussing earlier in our conversation, you know, it's still holding up. What were some points over the course of your career where it seemed most promising that you were coming up on research that might break out of the Standard Model, or really might up-end some of our fundamental understandings?
The turning point was supposed to be when CERN turned on the LHC.
The SSC down in Texas was canceled in the early 90s. It should have been the accelerator that was big enough to break the Standard Model. That left a big gap until the LHC turned on 15 years later. From the late 1970s, theorists thought they had a pretty good idea of where the Standard Model would break down. It was clear that the theory was an effective theory and that it would start to fail at energies around a TeV. Parts of the Standard Model were held together by theoretical duct tape, and that tape had to fail at some point, revealing new, more fundamental physics. Unfortunately, the duct tape is proving to be much stronger (both laugh) than it should be, and it's getting a bit discouraging. For decades it seemed clear that as soon as you got to a TeV or so, we'd be in new territory. There were various side efforts through this time, seeking hints of the breakdown—things like B physics and neutrino physics. But it wasn’t clear how much you would learn about the breakdown from such hints. The energy frontier, the LHC, was where things would start to become clear. But it hasn’t happened yet. And it's causing theorists to question assumptions we made about how field theories work and how the Standard Model might have emerged from the messy physics in the Big Bang—maybe that history is relevant to the structure of the Standard Model. Theorists are contemplating a much richer variety of options around such issues than they did before the LHC. Maybe we will end up in a much more interesting place than we expected.
Absolutely. Peter, in what ways—there's been so much exciting development in condensed matter over the years. In what ways has your research really moved that subfield along?
It hasn't. Condensed matter physics is following its own path. Ideas migrate from particle physics into condensed matter physics and sometimes back. Theorists are really good at imagining wild theories that have nothing to do with the physical world, and they sometimes get criticized for it. But condensed matter theorists have access to little gods, experimenters who can create new worlds in their labs. (both laugh) So, if a theorist has an interesting idea for a theory in two dimensions rather than four dimensions, a condensed matter experimenter might be able to manufacture that world. Some of the more formal, abstract parts of particle physics have had strong cross-over with parts of condensed matter physics. But that is not the sort of thing I do.
And to go to much larger scales, of course as you well know, so many particle physicists over the recent generation have migrated to astrophysics and cosmology, and so I'd like to ask you in particular: in what ways has QCD really helped those fields as well?
Not a huge amount. It’s important to understand the strong interaction if you are studying dense objects like neutron stars. They might have quark matter at their cores, for which you would want an equation of state. Lattice QCD can provide some of this information. The twin mysteries of dark matter and dark energy might well be resolved by accelerator experiments. Dark matter might consist of as yet hypothetical particles called axions. That would be fun for a lot of reasons, and it would be directly connected to QCD since the axion is a consequence of symmetry breaking in QCD. Axions, if they exist, are another kind of particle like the neutrino where huge quantities of them could flow through your body without you noticing.
Peter, I'd like to ask a few questions about the administrative side of your career. In 1999, you're named chair of the department. There's two approaches to that. One of course is that it's a necessary service, it's your turn, you finally say you agree to do it. And the other is, some professors see this really as an opportunity to get things changed in the department both academically, culturally, bureaucratically, and it's a great opportunity to get those things through. I wonder where you see yourself falling on that spectrum?
For me it was a bit of both. I was starting to become aware, in the years leading up to 1999, of the age structure of the department. When I became chair, I was 47 or 48, I can't quite remember. There was maybe one person in their 50s, and then half the department was 60 and older. Everyone else was younger than I was. There was a big gap between the younger half of the department and the half of the department closing in on retirement. The people in their 60s had done us the tremendous service of running everything since they had been hired back in the 1960s—they were the department chairs, the lab directors and so on. It allowed the younger people, like me, to avoid administrative jobs. I had been approached earlier to be chair and was amazed that they had asked me. But I said no. I had too many other things to do. Too much research to do.
But I was slowly becoming aware of the need for the younger half to start to participate, before the people with all the experience retired. One of the people who helped me appreciate the need was Persis Drell. She and Barbara Cooper, another member of the younger half of the department (and the department’s first woman), were the people who talked me into agreeing to be chair. Persis had helped me appreciate how the younger people in the particle physics part of the department had lots of opinions about the lab and its future, but had a hard time being heard at lab-wide meetings. It wasn't because the older people didn't take them seriously. It was because the senior people had been there for decades and were so used to talking at meetings and so comfortable with each other that they would dominate most discussions.
I remember Persis at one point organizing the young faculty in the lab to meet separately around an important issue, because she sensed that they had an opinion that was not coming out in the lab-wide discussion. This alerted the lab leadership to the fact that the young faculty had an opinion about the issue, and they were totally open to it and welcomed it. This sort of experience helped me see that it was important for me and others to start taking on some of these responsibilities. So, I agreed partly because it was my turn, but also, speaking to your second point, because there was an important structural change, driven by demographics, that was coming and needed to be managed. I only agreed to do it because Barbara and Persis agreed to help me. (laughs) So, I had a kitchen cabinet ready. Barbara suddenly and very tragically died of cancer right after I became chair—a terrible loss for the department and for me personally. Persis became director of graduate studies for the last couple of years before she left for SLAC, and that was fantastic. My first taste of administration was a very rewarding experience, even as a new era of my research career was starting, since my big lattice QCD effort began just then.
And I was just going to say, in this new era of your career, you're named Dean of the College of Arts and Sciences. That must have been a difficult decision for you to take on an even greater administrative responsibility while there was all this new research to do.
It was a complicated transition. My predecessor as dean had fallen out with the administration and had been fired. They needed a replacement. I was on the search committee for a new dean, and we had helped them find one. But he bailed on us at the last minute, because he had been given a deanship at his home institution. The administration was really stuck, because they needed a new dean in four weeks, and...
So, this was an emergency appointment essentially.
So, it was an emergency appointment. It had never occurred to me even for a moment that I might be a dean. This was the farthest thing from my mind. But the provost, Biddy Martin, who I'd gotten to know fairly well on the dean search committee, called me in and flabbergasted me by opening with, “How would you like to be dean for a year?” I took a day first to recover from the shock and then to decide. This gave me a chance to have a long discussion with the college’s associate dean, Jane Pedersen, who managed the administrative part of the college. Jane convinced me that I could survive in the job, mostly I thought because I would have her supporting me. And (laughs) it also convinced me that they really desperately needed someone new to come in. So, I decided, ah, it's just a year.
"Just a year." (laughs)
And it'll be a really interesting experience. I had enjoyed being chair way more than I thought I would, although I had not planned to be chair beyond the end of my five-year term. But then this came up. I was tempted into it and then found it was actually interesting too. And I found that I could continue my research, which I did throughout my deanship. I got a lot of papers published then because I had supportive collaborators. I tried to sequester a little part of every week to devote to it and working on research made getting stranded in airports or on long plane rides somewhat enjoyable. People would ask, "How can you possibly manage to be dean and continue your research?" And I would say, "I couldn't have made it through as dean if I didn't have this world to escape into, with its equations over which I have control." Unlike a college, which you don't have control over. So, I ended up agreeing to be dean, but with a very firm idea that it would be over in six or seven years, something like that. And it would have been, except we had the financial downturn. I didn't want to leave and have a successor come in and have to make 10% budget cuts as their first move. So, I stuck around and made the budget cuts and started some of the process of rebuilding and then left. I was dean for ten years, leaving one year before my second term ended—I wanted to go back and enjoy teaching and research again.
What do you see as your chief accomplishments as dean?
The main job of a dean is to enable and facilitate the fantastic work of all the faculty in the college—their fantastic work with your students, and their fantastic work in their research and their scholarship. That’s the most important thing you do. You have to help departments to function effectively and to hire well. About a third of the college’s faculty were hired during my tenure as dean. Your largest contribution to society is the thousands of students you graduate, sending them out into the world with the education that you helped make possible. So, these are the most important things. There are lots of other things I did, some of which I'll tell you about, but just making the college work, the fundamental part work, and doing it despite a major financial downturn, and figuring out how to recover from that without destroying half your departments, and without destroying faculty morale, was a big deal. We also completed a fantastic physical sciences building under my watch, the first new one in the college in half a century. And I fundraised for and launched the building of a new humanities building, the first new one in a century—a really gorgeous humanities building. I sit in these buildings today and enjoy remembering working on them with the architects when the buildings were nothing more than some little Styrofoam blocks. The buildings are not as important, though, as making sure that the faculty are able to function, that they can do what they need to do. But it is important that they have an office.
Peter, how did you get involved with PCAST in the Obama administration?
I was asked by Carl Wieman, whom I knew from graduate school. We were both in the same class at Stanford. We'd go cross-country skiing and biking together and shared a house for a while. Carl was in Washington then, high up in OSTP. The future of STEM education was a strong focus for him there, and he was agitating on several fronts to get the government and the country to pay more attention (or even any attention) to the extensive research on how to improve STEM teaching. One such front was PCAST, which was interested in writing a report on the impact of education research, and especially discipline-based education research, on STEM teaching at universities and colleges. He asked me to co-chair the PCAST working group for that project in part because I was a dean and could provide that perspective on the problem. I agreed because Carl asked me, but also because I'd always been interested in education research, but never with enough time to really get into it seriously. I thought this might be an efficient way to get up to speed. And it was great. It led to a 2012 PCAST report called…oh...I forget what it's called. Engage...How can I forget? I remember the dinner where we came up with the title. (laughs)
[reads the title] “Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics.”
That's right. That's the title. I remember the dinner in Washington where a group of us from the committee were batting around names trying to come up with a name for it. I liked the “Engage to Excel” part, but I was less sure about the one million jobs part—it was not as well grounded in evidence as the other part.
Peter, kind of a chicken and the egg question: your interest in physics teaching and pedagogy. Did it come as a result of this work, or was that sort of always there in the background for you?
It was always there in the background because I taught. You're always trying to do a better job. And I love teaching. It's hugely important to me. And hugely important to my research career, too, by the way. I find teaching to be very connected to my research. It deepens and enriches your subject knowledge in ways that have both tangible and intangible benefits for your research. But I was always aware of shortcomings in my teaching, of things that could be improved. I wanted to try new ideas. I had a really fun semester teaching a large intro course on waves for engineers, where Persis Drell was assigned to teach with me.
Persis and I talked at some length about what was wrong with the existing course and came up with a significantly different approach to the material. This is the kind of thing that happens regularly when you work with Persis. We decided to build computing into the course as part of a strategy to emphasize understanding over rote memorization. We focused on qualitative questions and estimation—for example, giving people a graph of a function and having them estimate the second derivative from the picture (not from a formula) as part of an analysis of a system or simulation. We tried to connect the course back to problems that might be relevant to engineers, like why this famous bridge fell down. We put a huge amount of work into this course and it was hugely fun to do it, and got a lot of interesting reactions from the students, but I remember, again and again, thinking to myself, “I have no idea whether this is actually a good idea or not.” I had some thoughts about what you might do to figure out if it was a good idea, but I didn't have anywhere near the time to pursue any of them.
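[A minimal sketch of the kind of exercise described above, written by the editor rather than taken from the actual course: estimating a second derivative directly from sampled values of a function, the way a student might read curvature off a graph, using a central finite difference.]

```python
import numpy as np

# Central-difference estimate of y'' from uniformly sampled data,
# mimicking reading curvature off a plot rather than differentiating
# a formula.
def second_derivative(y, dx):
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2

x = np.linspace(0.0, 2.0 * np.pi, 201)
dx = x[1] - x[0]
y = np.sin(x)                    # pretend these are points read off a graph
d2 = second_derivative(y, dx)    # should track y'' = -sin(x) at interior points
```

The estimate is defined only at interior points, and its error shrinks like dx squared, which is itself a nice qualitative discussion point for students.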
It was always like this when teaching. You always have nine jobs as a faculty member and time for only four of them. I had heard a bit about using clickers in classes and I was trying them out in my teaching just before I went into administration. Clicker questions totally changed the feel of a class, but I wasn’t at all sure I was making the best use of them (I wasn’t). It was enough to intrigue me. So several years later I went to Washington and I was on this committee, meeting people who are doing the actual research behind active-learning (clickers, for example)—people who really know about it, like Carl, and Dan Schwartz from Stanford. I would have dinners with Carl where he would tell me about carefully controlled experiments they were doing at UBC that showed active learning shifting entire grade distributions upwards by more than two letter grades relative to conventional lectures.
I started reading Dan’s papers where he would describe really beautiful multi-stage experiments to evaluate different active-learning strategies. The results were not small—not mere 10% improvements. Rather, he was finding that students would learn two or three times as much when these new strategies were employed. There was much, much more research than I had imagined, and some very clear signals emerging from the research about how university teaching should change.
This was all very exciting, but it created a crisis for me as a dean. A constant frustration for this research community was that none of their colleagues in STEM departments seemed to be paying attention to the results coming from the research on teaching. All these amazing results, but most people keep doing the wrong thing. Many saw this as evidence that faculty don’t care about teaching. This last point was at odds with my experience as dean. I found that faculty love to talk about their teaching. It's how I made small talk with faculty across the campus when I was dean. I would run into someone on the Arts Quad and ask, “What are you teaching this week?” They'd get all excited and by the end I’d be thinking, "Where was this course when I was in school?" (laughs) No, I felt the real challenge for faculty vis-à-vis education research was that they don’t have time even to learn about it, let alone redesign all of their teaching. The personal crisis for me then was that the obvious person in a college to do something about this problem was the dean!
(laughs) Peter, were you involved at all in AAPT?
No. I knew it existed, I knew they had journals, but I didn't have time to really get involved. And I knew there was a community working on these problems, people like Joe Redish. While I was Physics chair, I tried to figure out how to bring more of these ideas into the physics department, but I wasn't there long enough to pull it off. Then I went on this committee, and I'm dean, and I'm thinking the dean should be a key player in making something happen. But what exactly should the dean do? That problem was solved by Carl, who was already trying a scheme at UBC and in Colorado that addressed what I felt was the central problem for faculty, lack of time. They were funding teams of faculty in departments so they could hire extra postdocs to work with the faculty on course transformations. How to do this effectively is not immediately obvious, but Carl had already worked out a lot of the details.
So, I came away from that experience with some idea of what you ought to do, and all I needed was money. But, of course, we were doing 10% budget cuts then in response to the financial downturn, so I didn’t have money (laughs). But this was knocking around in my head and by dumb luck, fresh from one of my Washington meetings, I found myself having lunch with a large group of Cornell donors at a big table. I was talking to one donor about what I'd heard at this meeting and getting excited about education research and its potential. At one point I said, “But of course it's really hard to get this out to any of the faculty. I've got ideas about how to do it, but they would need money and resources to pull them off.” The person I was talking to was very polite. I wasn't asking him for anything, I was just making small talk at lunch. But there was a fellow on the other side of this big table who was eavesdropping, and afterwards he came over and said, “I'd really be interested in supporting that.” That’s a very significant phrase for a dean who spends 40% of his time fundraising. This donor made a program at Cornell possible. I launched what is now called the Active Learning Initiative in my last year as dean, and came back later to run it as director, after a year of recovering from being dean. And I still run it today. We have worked with well over 100 faculty at Cornell in 16 departments. And unlike Carl’s program, we have moved beyond just STEM subjects with some amazing projects in places like the Classics Department. We still have many departments to go. We want to infect as many departments as we can with active learning by helping a team of faculty in each to learn about and successfully apply active learning in their courses. So that PCAST involvement turned me into an active-learning evangelist.
Peter, what did it feel like when you won the Sakurai Prize in 2016?
It was very pleasant. It was unexpected. And what made it very nice was that one of my former graduate students, Andreas Kronfeld, campaigned for it.
I bought him a nice dinner. (both laugh)
Did you feel like it was more a lifetime achievement award, or was it given to you for something specific?
They picked three specific things, which completely spanned my career. They cited the work I did with Stan Brodsky, right at the start. Then there was the work on non-relativistic field theories that I started with Caswell, but then adapted to QCD later on, with Geoff Bodwin and Eric Braaten. The QCD applications had a much larger impact than the QED application. And then there was work on lattice QCD. So, it was my first QCD stuff, my first work on effective field theories, and then work on lattice QCD, all of which were major high points for me in terms of my engagement and the fun I had (laughs) working with colleagues to figure things out. In a quote I love, Feynman talks about how he liked figuring things out so he could give talks about them. And I very much relate to that. Half the fun of figuring stuff out is so you can describe it to people. And engage them and actually if you're lucky, they help you take it in new directions. But just the fun of discovering something and sharing it with someone is really great.
And Peter, just to bring the narrative up to the present from the past five years or so, what have been the most important research areas for you and your collaborators?
We're still beavering through lattice QCD projects that we couldn't do before, because we needed better techniques and bigger computers. And it's a mixture of things like heavy quark physics and the magnetic moment of the muon, which has a QCD contribution. And I mentioned that numerical integration program VEGAS, which was my second publication and my last contribution to numerical analysis. That paper is one of my most highly cited papers, but it isn’t an area I had worked on much since then. But I needed to use that integration algorithm myself for research in 2013, which got me thinking about it again. And I suddenly realized that there was a trick I should have used in the original code that would have made it a lot better. I tried out the trick and it actually made it a lot better—there was a whole class of integrals that were 10 or 20 times more accurate if you used this variation on my old algorithm. So, I coded up a new version and posted it online. That was 2013. One of my pandemic efforts was to write up a paper describing the new algorithm and submit it to the same journal as the original article. So, I have been revisiting my early graduate student days. Maybe that's about getting old. (laughs) When you try to relive your youth. That's probably it. Oh, that's a distressing thought.
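[The general strategy behind VEGAS can be sketched in a few lines. This is a simplified 1-D toy written by the editor, assuming only the algorithm's published idea, not Lepage's actual code: importance sampling on a bin grid that adapts so regions where the integrand is large get narrower, more densely sampled bins.]

```python
import numpy as np

# Simplified 1-D VEGAS-style integrator (toy sketch, not the real code).
# Each iteration: pick a bin uniformly and a point uniformly inside it,
# form the importance-sampling estimate, then move the bin edges so that
# high-|f| regions receive narrower bins on the next pass.
def vegas_1d(f, a, b, nbins=50, nitn=10, neval=2000, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(a, b, nbins + 1)
    estimate = 0.0
    for _ in range(nitn):
        widths = np.diff(edges)
        idx = rng.integers(0, nbins, size=neval)        # which bin
        x = edges[idx] + rng.random(neval) * widths[idx]
        p = 1.0 / (nbins * widths[idx])                 # sampling density at x
        fx = f(x)
        estimate = np.mean(fx / p)                      # importance-sampled estimate
        # Refine the grid: equalize accumulated |f|-weight across bins.
        w = np.zeros(nbins)
        np.add.at(w, idx, np.abs(fx) * widths[idx])
        cum = np.concatenate(([0.0], np.cumsum(w + 1e-12)))
        edges = np.interp(np.linspace(0.0, cum[-1], nbins + 1), cum, edges)
    return estimate

# A sharply peaked integrand; the exact integral over [0, 1] is ~ sqrt(pi)/10.
result = vegas_1d(lambda x: np.exp(-100.0 * (x - 0.5) ** 2), 0.0, 1.0)
```

After a few iterations most of the bins crowd into the peak, which is exactly the behavior that makes this family of algorithms effective on peaked, high-dimensional integrands.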
(laughs) Peter, that's a great segue for the last part of our talk. First, I'd like to ask a sort of, a few broadly retrospective questions about your career, and then we'll look to the future in the last part. So, first is, I'd like you to reflect sort of broadly. Given that you were really present at the creation of QCD, and given that it's still a very fertile field to work on, what are some of the things in QCD that are really understood now, and what are areas in QCD where there's much work that remains to be done?
Yeah, that's an excellent question. My career is the same age as QCD.
And the large majority of my papers have been on applications of QCD. When QCD began, you were only able to explain phenomena that involved very short distances. So, if you asked about how two quarks interact with each other at distances much smaller than the size of a nucleon, like the proton, you could calculate that. You could use perturbation theory, you could use the methods of QED. There was also important work on non-perturbative aspects of the theory, things like instantons, but these were hard to connect to experiment. A major advance was lattice QCD, which Wilson invented right at the start. And he did something amazing with it. He proved that quarks were confined in the limit of very strong coupling in this lattice gauge theory. It was the only proof of confinement in a model that might be rigorously connected to real QCD. And confinement was an obvious feature of nature. But Wilson’s strong-coupling approximation didn't lead anywhere else, as it turned out. By the end of the 1970s lattice QCD became focused on numerical solutions using large computers. But the computers weren’t large enough, nor were the algorithms powerful enough, to connect with experiment.
So, lattice QCD chugged along for 25 years or so, mostly disconnected from experiment and other QCD-related activities. During that time practical applications of QCD relied almost entirely on perturbation theory, often using highly innovative techniques to solve highly non-trivial problems, but still perturbation theory. Then, around 2000, lattice QCD finally came into its own. That marked a major expansion in what you can actually do with QCD. For example, you can calculate the radius of a proton or the mass of a proton now, to a fraction of a percent. We didn't used to be able to do that because those aren’t short-distance quantities. We could only study physics that was much smaller than a proton. So, we're much closer to realizing the original promise of QCD when it was discovered, that it would “explain” the strong interaction. Originally, it didn't explain the strong interaction. All it explained was QCD at short distances, where the interactions aren’t strong. Lattice QCD can do a lot but there's still a huge number of phenomena that are not explained and where lattice QCD provides little or no help. Lattice QCD is a fairly crude numerical approach to the theory. You can make it quite accurate, but the numerical scheme really only works well for problems that are fairly static, like the radius of the proton. But if you want to, for example, smash an electron and a positron together to create a quark and an anti-quark, lattice QCD has little to say about how the quark and anti-quark pull the vacuum apart to form high-energy jets of pions and protons and other hadrons.
Jet formation is something that happens in almost every event that we study at CERN, and we can say a lot about it, but a fundamental theory of jets doesn't exist at the moment. We can parameterize it in terms of unknowns, but I can't do a calculation straight from the QCD Lagrangian that will predict how that quark is going to fragment. The problem with jet formation is that it evolves in time. And it can be very complicated—at CERN, a jet can be hundreds of particles. Another thing that's very hard to do in lattice QCD is equations of state. I mentioned quark matter, at the center of neutron stars, as something QCD can help us understand. But lattice QCD has a problem simulating quark matter, again because of the numerical methods we use. We can simulate empty space well, but as soon as you want quark matter, as soon as there's a non-zero chemical potential, you've got what’s called a “sign problem,” and it's very hard to solve.
So, there are huge parts of QCD that we don’t know how to analyze. Physics often gives you the impression that you're answering everything, but that's because physics focuses on the things you can answer. (both laugh) And that raises an interesting question about the nature of science, which you folks, historians of science, and philosophers of science, argue about. It concerns the extent to which you get trapped in your theories by what you're able to do and what you're able to figure out. And you certainly do. I could see it as a student in the 70s, because the experiments that people were paying attention to before the J/psi was discovered, and the experiments they were paying attention to afterwards, were completely different. The courses I took before the J/psi had a lot in them about diffractive scattering, and things like Regge poles and Regge calculus. These relate to phenomena that occur in nature (and at CERN) much more frequently than hadron jets. But these phenomena involve long-distance coherent behavior in QCD about which our current QCD tools have little or nothing to say. And who knows? If we had actually figured out how to analyze those things theoretically in a competent way, maybe we would have never discovered QCD—I doubt it, but maybe the subject would have looked very, very different.
And to talk about those traps, in terms of all the things that we don't understand, do you tend to think of that more in terms of the theoretical limitations or the experimental limitations?
Both. You just can't do one without the other.
But it's very obvious in the period before QCD, the field was flailing about. It was trying on ideas like quarks, but not sure how seriously to take them. To many people, quarks were just a mathematical crutch, to help deal with group theory. After the J/psi came along it became obvious that you really had quarks, (laughs) you couldn't walk away from them anymore. The change was very sudden. Right now, we are flailing about again, trying to find physics beyond the Standard Model. We are pretty certain there has to be more physics and there are really creative ideas all over the place. I don't follow them closely enough to have a favorite. Creating these new ideas is essential but it is likely that all of them are wrong. What will refocus the field will be a new piece of data, and that data may bring a half-formed idea to life or it may create a new thought that hadn’t occurred to us. Before the LHC many were invested in supersymmetry as the next theory. This is partly because you can analyze at least some supersymmetric theories with perturbation theory, and perturbation theory is the (only?) tool that everyone has in their toolkit. It is much harder to write a paper about a nonperturbative scenario for beyond the Standard Model if you have no tools with which to analyze your theory.
The more formal theorists in particle physics are making inroads into non-perturbative physics, coming from string theory and conformal field theory. There are a lot of interesting ideas there, and they come at the world from a very different direction, but I don't know whether they will bear fruit in our effort to understand the physics of the real world. On the experimental front one worries about the extent to which detectors are designed and optimized for the physics theorists can understand and analyze, but possibly to the exclusion of radically different possibilities. Theory and experiment are inextricably connected, each constraining the other. It is impossible to design an experiment absent some idea (theory) about what is going to happen. It is equally impossible to sort among the infinity of theoretical options without strong constraints from data. I had an early experience of the push and pull between theory and experiment when I was a graduate student. I mentioned that my first published paper was on ortho-positronium decay with Bill Caswell and Jonathan Sapirstein. We started that calculation because we were certain that the theorists who had done it had made a mistake with the bound state physics. I was just starting to learn how to do the bound state physics, and Bill and Jonathan were looking for interesting problems to work on. Bill was the one who pointed it out. So, we redid this calculation. There had been two experimental results which gave values that were up here, and then a theoretical result that gave a value up here as well. And then there was a lone experimental result that was way down there, many standard deviations below the two previous equally-accurate experiments, and the theory.
The rogue experiment used a very different technique; it came from Art Rich and his team at Michigan. Their result was obviously wrong because it disagreed with two previous experiments and with theory. Clearly there was something about their new technique that they didn’t understand. Bill, Jonathan and I did actually find a mistake in the theory, but it was a trivial mistake, not the sophisticated one we expected. So, we fixed the theory, and it came down way below the Michigan number (both laugh) but closer to the Michigan number by far than any of the other numbers. In short order, the Michigan people redid the experiment the “conventional” way, and some of the other experimenters redid their experiment. And now everyone was heading towards the new theory.
The Michigan people are the heroes in this story because they dared to publish a result that completely disagreed with previous experiment and theory. But they were too high as well, probably because they had tried everything they could think of that might increase their result, bringing it closer to the previous results. This kind of interaction between theory and experiment is pretty common. It is also a fairly trivial sort of interaction. The more profound interactions have to do with how your whole framework and your toolkit constrain you. I was very aware of this when teaching quantum field theory recently. I love quantum field theory. My whole life is quantum field theory, but it is very technical, and I had avoided teaching it previously, preferring to teach less technical but very concept-rich undergraduate courses. But they needed someone to teach quantum field theory, and one wants to be helpful. So, I started teaching it. As I developed and taught the course, I became very aware of the way quantum field theory textbooks are written to make perturbation theory easy. The topics they treat, and the order in which they're presented, are all about optimizing the explanation of perturbation theory. That means there are all sorts of things missing that you need for things like lattice gauge theory that are nonperturbative. For most of its history in particle physics, quantum field theory had only one method, which was perturbation theory. All of the textbooks follow a design that Dyson created in his lecture notes for his Cornell course in 1951, during his one year as a professor at Cornell. I tried to rebalance my course, to give more weight to the missing topics, but I never quite had enough time to do an adequate job. This was partly because I was making it an active-learning course, which was an adventure in itself. I find myself tempted to write a textbook, but I've managed to tamp that temptation down every single time it's come up in the past.
And I think I am succeeding right now.
(laughs) Peter, for my last question, looking ahead. To come back to this idea that there's this duality right now in QCD, where there are so many theoretical and experimental traps that we don't know how to get out of, and yet it's so obvious that there's so much fundamental and exciting discovery to be done if we get out of them. Perhaps as a window into the advice you give graduate students, or thinking about if you were starting your career now: what are the things you're most excited about for the future? What's the most feasible path forward for ongoing discovery, kind of like back in your Stanford days, when the textbooks needed to be rewritten almost every year?
Go into astrophysics and cosmology.
That would be my advice. Particle astrophysics and cosmology have interesting overlap areas with particle physics, but they are also areas where you're going to get important new data frequently. The problem in particle physics right now is we're starved for data that can challenge the Standard Model. That's not to say that you can't make a good living on things like lattice QCD, where there is a steady diet of data. You can. Doing high-precision QED calculations was worthwhile long after it was a central thing in the field, with a lively (but small) community of people still working on it today. I cut my teeth on it as a grad student even though it was already pretty unfashionable. QCD will become like that. It will continue to be something that is important to invest in and to have good people doing it. Also, there could well come a time when the simulation techniques that we use in QCD are absolutely essential for understanding beyond the Standard Model. Most beyond-the-Standard-Model theories have a sector that involves strong interactions. So lattice field theory could be very important there, as a method for dealing with strong coupling, and it'll be way easier to build on lattice methods now that we have them working for a real physical theory, QCD. So, it's important to keep QCD and lattice QCD alive and going, if for no other reason than we don't know where we're going with beyond the Standard Model. But as a young person right now, I'd probably be paying a lot of attention to particle astrophysics and cosmology. I was tempted by general relativity, as a young person, but it seemed like there would never be much experimental data relating to it. I was already wrong back then, but I'm really wrong now, when the fact that gravity bends light, for example, is an important but now routine ingredient in observational designs to look for other things. This would have been mind-blowing given the mindset one had about general relativity back when I was a student.
The progress is stunning.
And I have enjoyed, and am still enjoying, watching it happen. I am also excited to look forward. It would be nice to stay connected enough to physics, as I start to pull away over the next several years, that I'll be able to understand what actually is beyond the Standard Model. (both laugh)
Something for us all to stay tuned on.
Yeah. The big questions of my generation really haven't been answered. And instead, a whole bunch of new questions were added, like dark matter and dark energy. That's part of the fun. And again, it's all driven by instrumentation and experiment.
Peter, it's been a great pleasure spending this time with you. I really want to thank you for doing this and for sharing your insights and perspective over the course of your career. This has been fantastic. Thank you so much.