This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of John Preskill by David Zierler on February 26 and March 12, 2021, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA.
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with John Preskill, Richard P. Feynman Professor of Theoretical Physics at Caltech, and Director of the Institute for Quantum Information and Matter at Caltech. Preskill describes the origins of IQIM as a research pivot from the initial excitement in the 1970s to move beyond Standard Model physics and to understand the origin of electroweak symmetry breaking. He emphasizes the importance of Shor’s algorithm and the significance of bringing Alexei Kitaev into the project. Preskill discusses the support he secured from the NSF and DARPA, and he recounts his childhood in Chicago and his captivation with the Space Race. He describes his undergraduate experience at Princeton and his relationship with Arthur Wightman and John Wheeler. Preskill explains his decision to pursue his thesis research at Harvard with the intention of working with Sidney Coleman, and he explains the circumstances that led to Steven Weinberg becoming his advisor. He discusses the earliest days of particle theorists applying their research to cosmological inquiry, his collaboration with Michael Peskin, and his interest in the connection of topology with particle physics. Preskill describes his research on magnetic monopoles, and the relevance of condensed matter theory for his interests. He explains the opportunities that led to his appointment to the Harvard Society of Fellows and his eventual faculty appointment at Harvard, his thesis work on technicolor, and the excitement surrounding inflation in the early 1980s. Preskill discusses the opportunities that led to his tenure at Caltech and why he started to think seriously about quantum information and questions relating to thermodynamic costs to computing. He explains the meaning of black hole information, the ideas at the foundation of quantum supremacy, and he narrates the famous story of the Thorne, Hawking, and Preskill bets.
Preskill describes the advances in quantum research which compelled him to add “Matter” to the original IQI project, which was originally a purely theoretical endeavor. He discusses the fact that end uses for true quantum computing remain open questions, and he surveys IQIM’s developments over the past decade and the strategic partnerships he has pursued across academia, industry, and the National Labs. Preskill surveys the potential value of quantum computing to help solve major cosmological mysteries, and why his recent students are captivated by machine learning. At the end of the interview, Preskill reflects on his intersecting interests and conveys optimism for future progress in understanding quantum gravity from laboratory experiments using quantum simulators.
OK, this is David Zierler, Oral Historian for the American Institute of Physics. It is February 26, 2021. I am so happy to be here with Professor John Preskill. John, it's great to see you. Thank you so much for joining me.
Well, I'm glad to do it, David.
To start, would you please tell me your titles and institutional affiliations? And you'll notice I pluralize everything because I know you have more than one.
Oh, OK. Well, I am the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, and I'm the Director of the Institute for Quantum Information and Matter at Caltech. And we can leave it at that.
When were you named to the Feynman Chair?
Well, when Kip [Thorne] retired. Actually, the background on that is interesting because the donor endowed the chair around 1990. So there was a lot of discussion at Caltech about how we would make use of the leverage of being able to offer the Feynman Chair in Theoretical Physics to someone. And Kip and I actually went on a secret mission to Cambridge to offer it to Stephen Hawking in 1991.
Oh, wow. I did not know that.
Yeah. And so, he was very polite. Of course, he was already the Lucasian Professor. So, the upshot of that was that Stephen didn't accept the Feynman Chair, but he did agree to make regular visits to Caltech. Which he did for many years. He would come for a variable amount of time, typically a month or six weeks in the depths of the Cambridge winter and enjoy the California sunshine. But then, Kip wound up being the Feynman Chair. And I guess I'd have to look up when he retired. It was probably about ten years ago. And I became the Feynman Chair at that point.
Is it known who the donor was back in 1990? Or is that anonymous?
I think I can say. Actually, it's an interesting fellow who's still with us. Mike Scott is his name. And he was, at one time in the early days, the CEO of Apple Computer. When Steve Jobs and Wozniak founded Apple Computer, and they wanted to produce the Apple II, they needed to raise capital, and their investors insisted that they bring in someone with business experience to help manage the company, and that was Mike Scott, who was at Apple for just a few years and had other interesting experiences in his career. But the connection with Feynman is that Mike Scott was a Caltech alum who was among the class that attended the Feynman lectures in ‘61-‘62 and ‘62-‘63, the whole two-year sequence. And like many of the students who attended, he was profoundly impressed by that experience. And I think that's what induced him to endow a chair in Feynman's honor.
John, you knew Feynman, and Feynman being who he was, what did it mean for you when you were named with this honor?
Well, it gives you a sense of impostor syndrome, right? Who can live up to that title? But I guess I just shrug it off and carry on.
[laugh] Tell me about the origins of IQIM, the Institute.
Well, of course, I started out my career doing particle physics and with occasional forays into cosmology. So the backstory of IQIM is, I made a mid-career shift in research interests in the mid-1990s. And that happened around 1994, when I learned about Peter Shor's factoring algorithm. But I was primed, I think, to get interested in the subject of quantum information and quantum computing for a couple of reasons. One was that just the previous year, the SSC had been canceled. And for my generation of particle physicists, this was really a terrible blow because we had come along a little bit too late to participate in erecting the Standard Model. I started graduate school in 1975, and so, there was still controversy about what the right electroweak model was at that stage. But the J/psi had been discovered the year before, all the great stuff had been done. Not that there wasn't still a lot of interesting particle physics to do.
But our big hope for beyond the Standard Model physics, we were going to be the generation that would unravel the origin of electroweak symmetry breaking and all the new physics associated with it, and the SSC was going to be the source of the rich phenomenology that we thought we were going to mine as theorists to probe more deeply into nature's secrets. And when the SSC was canceled for complicated political reasons, even though they had already been digging the tunnel in Texas and had sunk a couple of billion dollars in it, one realized it was going to be quite a while before we were going to have the experimental input that we needed to really understand what was going on with physics beyond the Standard Model, which it was generally believed would be discovered when we got up to those energies. And so, I was sort of in a mood to think about different things.
And in fact, while I was sort of waiting, as many of us were, for the SSC to come along, I had already been doing things which were not very phenomenological, like thinking about black holes and how they process information. So, I had sort of become acquainted with principles of quantum information—which were not so widely known by physicists except for a small community—just because I thought that might be useful for understanding what's going on with black holes. And when Shor discovered this factoring algorithm, about a month later Artur Ekert, who was a pioneer of quantum cryptography, visited Caltech and gave a talk. And he mentioned this recent breakthrough that Shor had discovered that you could efficiently factor with quantum computers. And it's possible I've embellished the memory in hindsight, but I was quite amazed by this.
Feynman had been interested in quantum computing, and I knew that. And I wasn't very impressed by the whole subject. I didn't quite see the point of it. But I realized with the discovery of Shor's algorithm that it really meant there was a big difference between what problems we'd be able to solve and which ones we'd never be able to solve with computing technology because it's a quantum world instead of a classical one. Things became possible thanks to quantum mechanics that just wouldn't be possible in a different type of physical world. And I still think that's one of the most amazing things we've ever learned about quantum physics, the difference between quantum and classical. Sorry, I'm giving a rather longwinded answer to your question.
No, this is the intellectual origins of the Institute.
Correct. And so, I had a colleague at Caltech, Jeff Kimble, still at Caltech, and he was also quite excited about this surge of interest in quantum computing. He was a quantum optics guy. And he had made experimental advances in squeezing states of light and using them for metrology and stuff. But it's different now. Atomic, Molecular, and Optical physics is widely recognized by most physicists as an intellectually rich and exciting field. And that's happened largely, I think, in the last 25 years, because of the connections with quantum computing and because of the connections with condensed matter, the possibility of simulating interesting states of quantum matter using these AMO systems. But back then, there was sort of a feeling in the quantum optics community that they didn't get respect because, from the perspective of somebody like me, what was the point, you know? As a particle physicist, I was trying to understand new laws of nature. But what can you do with quantum optics in the lab that you couldn't just figure out with pencil and paper what was going to happen?
And quantum computing, at least in principle, kind of changed that. Because I think it drove home that you should be able to do experiments where you're learning something from the experiment that you couldn't just simulate or calculate. And so, Jeff would have to say himself what got him excited, but I think he realized he had experimental tools that were very relevant to exploring quantum information. And I got excited from the theory perspective and wanted to learn more about what was experimentally possible. So, we formed what we called the Quantum Computing Club at the time, and we started having joint group meetings. And so, I learned some things. I've never been deeply knowledgeable, really, about how experiments work, but I learned a lot more than I had known about what you could do with quantum optics tools. And meanwhile, I was trying to understand whether you could protect quantum computers from noise, which led to the development of the idea of quantum error correction.
But actually, we wound up getting a DARPA grant. This was kind of interesting. The Department of Defense agencies had an immediate interest in quantum computing after Shor's algorithm because of the applications to cryptology. And they were the early supporters of the research in the field. Including the development of experimental tools. And DARPA, in particular, put out a call, and we submitted a proposal, and we got funded for a project which we called QUIC. I guess it was Quantum Information and Computation, but QUIC for short. And there were five PIs, and that helped Jeff do his first teleportation experiment in the lab, and I worked on quantum error correction ideas, and stuff like that. It was a five-year award, and after two years, they cut it. There was a new program manager. This was, I learned, not unprecedented for DARPA—a new program manager comes in, and what you think is a multi-year commitment turns out not to be.
But I had learned something under Jeff's tutelage, that with money, you can do things. As a theorist and particle physicist, I had it pretty easy as far as funding a group because at Caltech, we had this big DOE high energy physics umbrella grant, which was mostly for experimentalists, and the theorists were a little pimple on it. And that was enough for us to support post-docs and so on. And we also had Caltech funding for theoretical physics post-docs, which helped a lot. But when we had this DARPA funding, I was, for example, able to bring in Alexei Kitaev as a year-long visitor and pay him a salary. I'd never had the resources to do that sort of thing. So that was kind of an eye opener for me.
In my naivete, I'd never worried much about raising money, and applying for grants, and stuff because I sort of had it made with this DOE grant, which was always renewed time after time. But then, with Jeff's encouragement, we applied for a quantum computing center. Actually, NSF started to show an interest in quantum information in the late ‘90s, and they asked a group of us, including me, to organize a workshop because that's how they do things, which took place in the fall of 1999. These were the waning days of the Clinton Administration. And the conventional wisdom at the time was that, partly due to Al Gore's influence, NSF got a surge of funding for a program they called Information Technology Research, which included a lot of practical things, but also sort of a lunatic fringe of blue sky research. And that's what we were part of.
We applied to this ITR program, and we asked for a big center, which would encompass experiment and theory. And the NSF program manager involved, a guy named Mike Foster, said he wasn't interested in the experiment, only in the theory. So we wound up getting a million dollars a year just for a theory institute. This was in the fall of 2000, which was the Institute for Quantum Information. There was no Matter then, just a theory institute. But the timing was great because there were all these young people who were excited about the field, who were getting PhDs. We were able to build a group of really strong post-docs and attract Caltech students into research in that area. And we could pretty much get any outstanding post-doc we wanted because there wasn't so much competition then. There's a lot more now. So, we had an amazing group of young people in the early 2000s who came through, many of whom are leaders of research in quantum information now, like Patrick Hayden, and Guifré Vidal, and Frank Verstraete, and quite a few others.
Today, of course, there are several centers that have a similar research focus. But at the time, there were not, of course. You were really at the vanguard of all of this. So, the question is, what was your model? What other centers were out there that you might have used to base your ideas on where this ultimately would develop?
Well, actually, my model was the experience that I had with the particle theory group, which I didn't appreciate immediately was a bit culturally different than most research efforts in AMO physics and what was then the nascent interest in quantum information—which was I wanted to bring in the best young scientists and give them a lot of freedom, to create a community of people who had some common interests, but also complementary backgrounds. So, I deliberately would put a computer scientist in the same office with a physicist so those guys would talk. And I guess that was the model. Now, the Institute, the next one which had a big investment, was the Perimeter Institute. When it was founded, they saw quantum information as a core part of their mission. And then, later, there was another institute at Waterloo, The Institute for Quantum Computing, which had a lot of resources, all thanks to Mike Lazaridis, and the Canadian government, and government of Ontario.
But we got off to such a quick start, and we already had a track record of bringing in great people who did great research while they were at Caltech, and then went on to later career success. And we were able to continue to recruit the best young people very successfully. The first thing I did with the IQI funding is, we hired Alexei Kitaev. This is interesting, too. When I got that DARPA money, I thought, “Hey, I could bring in a long-term visitor with this funding. So, who should that be?” And so, I asked a few colleagues for suggestions. And indirectly, I heard from Richard Jozsa that he had met this amazing young Russian at a conference. The conference was in Japan. That was Kitaev. And I didn't know much about Kitaev, but he had a paper on the arXiv, which I then read and was blown away by. He had sort of reformulated Shor's algorithm in a more general and powerful way.
And so, I arranged to have him visit in 1997. Actually, the legend about that paper that he wrote is, in 1994 he heard about Shor having discovered that you could factor with a quantum computer. He was at the Landau Institute in Russia. And he wasn't in the in-group that had access to the preprint. It wasn't posted on the arXiv, although the arXiv existed at that time. But it was just kind of circulating around by email. And he wasn't able to get it. So he had to figure it out for himself. Now, it's a huge advantage to know that it's possible, so he had it on good authority that Peter Shor had discovered an algorithm for factoring large numbers efficiently on a quantum computer, and then he figured it out. His approach was different and more general than Shor's. That was the paper I read, his version of what we now call the Hidden Subgroup Problem. He called it the Abelian Stabilizer Problem, and Shor's algorithm fit into that framework.
So, this guy is clearly very interesting. And I arranged it for him to first come for a shorter visit. It was his first time in the US, I think. And the first day that we met and sat down for coffee, he started telling me about this idea he had to use non-Abelian anyons for quantum computing. And here's something funny. I was very interested in non-Abelian anyons. That was one of the things I was fooling around with waiting for the SSC to turn on. Non-Abelian anyons are particles in a two-dimensional medium which have exotic statistics, more general than bosons or fermions. And non-Abelian means that you can actually have a state of many of these particles be modified just by braiding them around one another.
And what Kitaev had realized is that this was an approach to quantum computing that would be resistant to noise because it was topological. With the exchange of a pair of these anyons, because the information is encoded in a very, very nonlocal way, the environment buffeting the system locally doesn't interfere with it. This was a very brilliant idea. And I understood it immediately after 15 minutes of talking about it over coffee because I knew about non-Abelian anyons, and I was very interested in quantum error correction. And it had never occurred to me that these two things that I was very interested in were related. And I guess that shows that I'm not Kitaev.
But you can spot a Kitaev when you see one.
Well, that's true, and I was ahead of my time in that regard. He was underappreciated for sure in 1997. And so, he came back the next year supported by this DARPA grant as a visiting professor. And we actually jointly taught a course on quantum computing. And then, when we got the NSF award in 2000, the first thing I tried to do was hire him. Of course, I couldn't hire him as a professor by myself. That had to go through the usual Caltech hiring process. But I could hire him, although it had to be approved by a committee, as what we called at the time a senior research associate. Now, we call it a research professor. It's a position that we have at Caltech for people who are world-leading researchers with stature comparable to a tenured professor, but it's a soft money position, and it's paid out of a grant. No teaching responsibilities.
This is what Sean Carroll has, for example.
That's what Sean has. And that's what John Schwarz had, actually, when I first came to Caltech. That's another story, speaking of someone who was underappreciated for a while. Yeah. And so, we had Kitaev, and we had this amazing group of young people. And then, a lot of students came through and trained. I think probably in terms of impact on science, leading the IQI and establishing it is the most impactful thing I've done when you look at all the people who came through and how they've become scientific leaders. But anyway, to come around to answering your question, for ten years, we were just the IQI, and we went through several cycles of renewal at NSF. And Jeff spearheaded this. I wouldn't have thought to request a grant to start a theory institute if Jeff Kimble hadn't been pushing me, so I'm grateful to him for that.
And in 2010, we applied for the Physics Frontier Center program at NSF, very competitive thing. There are ten of them in the country in different areas of physics. And that turned out to be successful, and as a result, what had been the IQI expanded to a larger center that did encompass both theory and experiment, pretty much as we had envisioned back when we originally proposed it in 2000. We had something like that in mind, but NSF at that time said, “We only want the theory.” But in 2011 we became the Institute for Quantum Information and Matter. And now, that's been around for almost ten years, since 2011, and has been very successful.
John, that's a great overview of your current titles and affiliations. So before we take it all the way back and develop your personal history, I'd like to ask a very in-the-moment question. As you say, of course, we're all working from our home offices now. As a theoretical physicist, I wonder if in some ways, these past 11 months have been more productive for you because the social and physical isolation perhaps has given you a bit more headspace or bandwidth to work on some equations or problems that you might otherwise not have. On the other hand, I wonder if your style as a scientist really depends on in-person, interpersonal interaction, and in many ways, your research agenda has suffered as a result.
Well, as the question suggests, it's a complicated issue with tradeoffs. One big change for me is I was traveling a lot. And I get, of course, as we all do, lots of invitations, most of which I turn down. But for opportunities to lecture, attend conferences, and things like that, there were a certain number of them which I really thought I had to accept. So the last couple of years, I had been making lots of trips. And it was really a bit of a relief to put a stop to that for a while and not be chasing around so much.
On the other hand, the kinds of interactions you have when you visit other places to attend a conference or give a talk and so on, the kinds of informal interactions, those are not very well simulated in the Zoom era, although there are various attempts to do that. And so, you do miss that kind of thing where you go to dinner or lunch and just chat. And sometimes, that's a good way of probing questions and coming up with ideas. So, I think we've all suffered a bit from missing that kind of interaction.
In my group, it hasn't been too bad. We have our group meetings on Zoom, and I'm able to keep up with what students and post-docs are doing, and so on. But I think it's hard for the new students. They can attend meetings and stuff like that, but it's hard to become sort of integrated into the community in our online existence compared to when we're able to hang around and chat in our offices or at a coffee break. But in response to your question, yeah, I think I have had a bit more time for reflection than was the case, say, in the previous couple of years, and that has been helpful. And it's also given me a little more time, maybe, for reading and catching up on things.
One thing that I had been increasingly feeling was missing from my education or knowledge base was the students are more and more interested in machine learning, and I really just didn't know much about it. And I still know only a limited amount about it. But I did take some time to read textbooks and papers, and I also am collaborating with some students who know a lot more about machine learning than I do. And so, that's been a plus over the last year.
It's interesting with the experimentalists. They seem to be much more challenged than we are as theorists. Some of the labs were closed down for a while. Now, they're operating under socially distanced protocols, and that slows things down. But I've also had several experimentalists tell me that they're getting the best data ever because the lab is so quiet. There's nobody walking around, people aren't opening and shutting doors. And a lot of experiments are operating remotely or with minimal physical presence in the lab of group members, and that's had some benefits. So, it's not all bad, even for the experimentalists.
The big question going forward, what are the best aspects of the current dynamic that you plan to continue using once we're out of the pandemic?
Well, I don't know. I think the model of doing seminars and conferences online will have a place going forward. Like I said, it's not really the same in terms of the personal interactions as a face-to-face conference. But it's still pretty effective. So, I've attended workshops, and, of course, things get recorded, so you can watch them later. That was happening anyway. Usually, when there was an event, people were making videos. But since it's just not feasible to travel to all the things that one wants to attend, having that option of participating in a meeting with people all over the world is something we'll probably take advantage of more than we have in the past, going forward.
Well, let's take it all the way back to the beginning. Tell me about your parents and where they're from.
I grew up in Chicago. My dad, Alfred Preskill, his parents were Eastern European Jews, his mom from Latvia, his dad from Lithuania. And like many Jews, they came to the United States in the 1880s or 1890s. In the case of my grandfather, he and his brothers would've been drafted into the Tsar's army if they had stuck around in Lithuania. That was one of the incentives for leaving. And they all came to Chicago. And that's where my grandfather met my grandmother.
I assume Preskill is an Anglicized name.
Well, according to family folklore, in Lithuania, it had a similar sound. And I've sometimes pondered whether it's related to names like Peskin and Peshkin. But we think in Lithuania, they were saying it more or less the way I do as Preskill. And there are several alternative spellings that were adopted when people immigrated. So, there are some Preskills still around the Chicago area, but there are also other spellings, like Preaskil. So anyway, my grandfather's business that he started was a harness shop. He would make the rig that you would use to attach your horse to your buggy. But when automobiles came in, he realized that wasn't a good business model, so he opened a hardware store. And when my dad was a kid, he used to work in the hardware store. So even in later life, he considered himself to be an expert on tools.
And my mom has a rather different origin story. She actually converted to Judaism when she married my father, and then later in life, she actually got bat mitzvahed. Much later in life. But she did not grow up Jewish. She grew up in Cleveland. Her father was a lawyer, and his family had been, for many generations, farmers. And my mom's mother also came from a family that had been farming in Pennsylvania and Ohio for many generations. We think they go back to before the American Revolution in Pennsylvania. But when my mom was a kid, she would work in my grandfather's law office. He was a probate lawyer, did wills. And very successful in the sense that he was very highly regarded in his profession. He wrote a textbook on Ohio probate law that was widely used and had some high-level connections.
One of the famous family stories is my mom, as a teenage girl, was working the switchboard in the office, and she cut off the Vice President of the United States by pulling the plug while he was talking to my grandfather. She wound up going to law school, and it was pretty unusual for women to attend law school. She was the only one in her class at what was then called Western Reserve, now Case Western Reserve in Cleveland. And I think she might've gone into practice with my grandfather. But then, World War II came.
And going back to my dad for a minute, he wound up going to the University of Chicago. He was a very good student. He graduated high school at 16, and he graduated law school at 20. At the University of Chicago, he was able to get a bachelor's degree and a law degree more or less concurrently. And that was 1932. It was the Depression. Nobody wanted to hang around in school. Everybody had to go out and earn a living if you could get a job. So, he was in a big hurry. Because he was 20, he couldn't take the bar exam because he was still a minor. He had to wait until he was 21. And he passed the bar exam and worked for a law firm for a while. But when the war came, he was 4F because of a medical condition. He was about 30 then, but he couldn't enlist.
So, the way he did public service was he became a federal employee. He moved to Baltimore and worked for what was then called the Federal Security Agency, which was setting the legal foundations for the Social Security system, which was still sort of being fleshed out. And that's where he met my mom. When the war came, she also thought she should work for the government, and they wound up in Baltimore. The reason they were in Baltimore is a lot of the federal agencies moved out of DC because that was being taken over for military purposes. And that's where they met. They were both lawyers in the same office, and they got married in 1944 and moved back to Chicago.
My mom did give up the law when she had her first child, my older brother David, in 1947. But she was really a remarkably capable woman. And so, she volunteered for everything. She was the President of the PTA, the League of Women Voters, and a local philanthropic organization. And she learned Sign Language so she could work with the deaf, and she worked with kids with Cerebral Palsy, and she volunteered in the hospital, and as a tutor at the high school. She was really a dynamo and had a very different personality than me—I'm quite introverted, she was very extroverted.
I wonder, as a product of her generation, if her decision to leave law was because that was sort of externally expected of her. In other words, in later generations, the same person would not have done that.
I think that's right.
Did she ever express regret or frustration with that?
Not to me. And like I said, she managed to have a lot of impact outside the home in quite a number of ways. She was pretty amazing in that respect. My dad started to think the law was boring, so he joined a business called Allied Radio, which my uncle was involved in. And he worked there for over 20 years in marketing, became the VP of marketing. So one of the ways that impacted me was he would bring home these kits. Allied Radio made what were called Knight Kits you could assemble yourself with a soldering iron—radios, and walkie-talkies, a photoelectric relay, and things like that. So that was my introduction to electronics, starting when I was around 10. I really enjoyed putting those things together. And I was surprisingly uncurious about how they worked, actually. I built radios, and I was very proud that I was able to break the iron curtain and hear a broadcast in Russia on a shortwave radio, but I didn't really understand what the tuning coil, and the capacitors, and the resistors were doing. I just thought it was fun to put them together.
Growing up, how Jewishly connected was your family? Particularly with your mom, was she more interested in doing stuff than your dad in certain regards?
My dad was the more interested one, and we belonged to a reform congregation on the North Shore of Chicago. We moved to Highland Park, one of the northern suburbs. Or they did, before the first child was born. And we belonged to this huge congregation called North Shore Congregation Israel with over 1,000 families. And he was involved in the temple one way or another at various times in his life. He was the Chair of the Board of Religious Education there for a while.
And later, actually, after he retired he was very interested in studying Torah, and Talmud, and stuff like that with classes that the rabbi would lead. My dad was quite scholarly—I think he might have been an academic if he hadn’t come of age during the Depression. And my brothers and I went to religious school. It was usually on Sunday, actually. Reform Judaism. And you could be bar mitzvahed. I chose not to be 'cause I wasn't too keen on the idea of having to go to Hebrew school after school from 3rd grade through 7th grade. And my parents said that was OK if I didn't want to.
But I did get involved. I became, actually, when I was in high school, the audio-visual supervisor at the Temple. And so, one of my responsibilities was to make sure that the rabbi's sermons were recorded at every Friday night service. I had a crew of volunteers who would sign up. And if something went wrong, and we failed to record the sermon, the rabbi was not pleased. So, there was a little pressure there. And then, at the religious school, as the supervisor of audio-visual activities, we used to show movies sometimes, so we had to thread the projector. And that was also a bit stressful because every once in a while, the film would break, and you'd have to do emergency film repair with some magic tape or something. But that was my most active role in my youth at the Temple.
John, you went to public schools throughout your childhood?
Highland Park High School. Highland Park public school all the way in the town we lived in. It was a good school system. And there were a lot of Jewish kids in the community. We had a tracking system, which was a pretty common practice back then, where for each discipline, they would put the kids in—I don't actually know how they decided this—level 1 English, level 2, and level 3. And the level 1 would feed into the AP classes. And so, even though it was a big school, there were over 2,000 students, if you were in those level 1 classes, it was the same kids you'd see in most of your classes year after year.
Was stuff like the space race, the moon landing, formative to your development as a kid?
Hugely, yes. I still remember vividly, or at least I think I do, my dad bringing home a newspaper in early 1961. It was the Daily News, the afternoon newspaper in Chicago, with this huge headline, a couple inches high: “Russian First Space Man.” Yuri Gagarin was orbiting the Earth. And it was a huge deal. Of course, the US had a space program, too. The Mercury astronauts had been chosen, and they were training, and the Russians kind of beat us to the punch with Yuri Gagarin's first flight. And Alan Shepard's first flight was a month later or so, I don't remember exactly. But those Mercury astronauts were heroes. Whenever there was a flight, Alan Shepard, Gus Grissom, John Glenn, Scott Carpenter, and so on, it was a huge national event. And it seemed like the world came to a standstill, and we were holding our breath while these guys were flying into space and managing to return to the Earth.
And, of course, in those days, there were three TV networks, and they all had news organizations. And they'd all stop regular programming so they could cover these missions. And so, I ate everything up. I read everything I could. So [in] 1961, I was 8 years old. But I could go to the library and get a book about rockets. And there'd be a feature story in TIME magazine or whatever, lots of newspaper coverage, and I'd read all that stuff. I wanted to know everything about how Mercury was going to lead to Gemini, which was going to lead to Apollo. And so, I very avidly followed all that, and I think it did have a significant role in awakening my interest in science.
John, were politics a topic of discussion at the dinner table as a kid? Would you have known if your parents were voting for Nixon or Kennedy?
I remember watching the Kennedy-Nixon debate, as a matter of fact. I don't remember my dad being there, but my mom was. And they were Democrats. Well, I shouldn't say that. My mom always identified as an independent. She always said she'd vote for the best candidate. Usually, it was a Democrat, but not always. And things were a little less polarized then than now. So the idea that you could, in a given year, prefer the candidate of one party, which was different than that of the previous election, did not seem wildly unlikely.
In middle school or high school, were there any standout math or science teachers who exerted a real influence on you?
Well, there was one in high school. His name was Donald Ens. He was a math teacher. He was a young guy. There was an English teacher, too, who I admired a lot. But the thing about Mr. Ens was he really loved math. And at that time—after a few years of being very interested in space, and rockets, and then chemistry—I had a chemistry lab, and building the radios and stuff, I decided really, the coolest thing was math. And the thing I loved the most in the reading I did was Gödel's Theorem. The idea that there were limits to what we could prove or what we could know is true in mathematics. That really impressed me. And Mr. Ens loved that kind of stuff, too. So, I had somebody I could talk to about those sorts of things.
And in fact, when I was thinking about where to go to college, I had some rather funny notions, and one was that if you wanted to do math, Princeton was the place to be. And I'm not sure what that was based on, maybe because Einstein had been there or something. But that was firmly implanted in my head. And another idea I had was that you shouldn't go to Harvard. Because I had a friend whose older brother went to Harvard and majored in biochemistry. And when he'd come back from college, he'd always complain that all the classes were taught by graduate students. And they had all these famous professors, but you never saw them. So I thought, “Well, that doesn't sound good.”
At least Princeton claimed to be a more undergraduate-focused institution. So that's where I decided I wanted to go, and indeed, where I went. And when I went to Princeton, I was thinking I'd major in math. I talked my way into a graduate-level course on Set Theory and Logic my freshman year taught by a guy named Dana Scott, who was a distinguished logician and philosopher. And I had to get permission from the guidance office, and I had to pass out of freshman English, and stuff. I was very insistent that I had to take this class because this was going to be my future, Set Theory and Logic. And I wasn't sure if Dana Scott would be teaching it again. And it was a fun class.
John, this was a pure math environment, not an applied math environment?
That's right. But I realized, by the end of my freshman year, several things. One of them, I think, I'd known all along. I'm just not cut out intellectually to be a mathematician. I'm just not good enough at that kind of thing. Meanwhile, I was taking freshman physics, and in the spring term, we used this book by Purcell, Electricity and Magnetism, which is a great book, and it's still used in some places. And that really impressed me because I was learning in my math class calculus on manifolds, and about differential forms, and things like that. It was all very abstract, and very beautiful and fun. But no hint of what it was good for.
Well, maybe I'm exaggerating. But certainly, the emphasis was not on what you do with this stuff. But then, in Electricity and Magnetism, learning Maxwell's equations, and why you would want to take the curl or divergence of a vector field for some useful purpose, the fact that I could piece those two things together, this very abstract math and then this physics class, which was making use of those concepts, that made me appreciate that maybe physics was a more natural home for me.
Did you sense, even as an undergraduate, the hierarchy of theory above experimentation in those days?
Yeah, and in fact, even back in high school, I had this very snooty attitude that theorists were somehow superior. I was terrible, looking back. I thought that the intellectual pinnacle was to do theory, and that experiment just didn't appeal to me personally, let's put it that way. And so, maybe I had a perspective, which, of course, is completely wrong, that experiment was not the best route to a deep understanding of the secrets of nature, that thinking would do that. It's completely wrong. But I really did have that attitude.
Can you either affirm or deny the famous quote attributed to Wightman that he referred to the experimentalists as “the help?”
Arthur Wightman said that?
He was my senior thesis advisor.
“The help.” Well, I'm not sure I knew that. He was a wonderful man, but I'm surprised he would say that.
It may be apocryphal, I don't know.
Although, of course, he was a mathematical physicist and proved theorems, when he was young he did more practical things. He worked out details like how ionizing radiation deposits energy in materials and things like that. So, he had some appreciation for that type of knowledge building. Actually, another college teacher who had a big impact on me was John Wheeler. My sophomore year, he taught a class that I took for the whole year, covered everything in physics. We called it Honors Physics. And we did classical mechanics, and E&M, and stat mech, and quantum physics, and waves all in one year. And it was a very idiosyncratic course, to put it mildly.
Of course, to us undergraduates, there was something kind of god-like about Wheeler. So, this was 1972. The thought that he had worked with Niels Bohr seemed unimaginable—that anyone could be that old. He was 61 at the time. Here, I'm 68, so it doesn't seem so old now, but at the time, it sure did. And he always came to class in a suit and tie, and that also made him seem like a denizen from another generation. And, of course, he had this marvelous ability to use the blackboard to draw intricate illustrations on the spot. But the thing that was most memorable is—here's what he did on the first day of class, or at least how I remember it. We're going to do classical mechanics. We're going to use Goldstein. We're going to learn Lagrangian Mechanics. And we're going to learn Hamiltonian Mechanics from this book. And I'd already dipped into the book a little, I was excited.
And so, I figured he was going to tell us about the calculus of variations, and the Euler-Lagrange Equations, and stuff. I kind of had a hint what that was about. But he comes in, and he goes up to the board, and he draws A on the board and B. And then, he draws a line going from A to B. And he said, “An electron is going to travel from A to B. How does it know how to go? What path should it take? Well, of course, it takes all the paths. It adds them all together with an E to the iS …” “What?” He was trying to explain that what we were learning was the classical limit of quantum theory. Although Goldstein wasn't saying it that way, he thought it was important for us to know right from the start that that was the context, and that you could understand why this calculus of variations stuff was relevant by thinking about how the phase when it's stationary would add up constructively.
Of course, this is a wonderful insight coming from Feynman, who was Wheeler's student. And I thought this was great. I just was dazzled. And a lot of students, understandably, were a bit upset because then, we had to do the homework problems in Goldstein, which said, “Here's a couple of springs and a mass. Write down the Lagrangian.” What were we supposed to say? “Well, the mass is going to follow all the paths. Add them up with an E to the iS.” That didn't really help you do the homework. But Wheeler was inspiring.
This obviously planted a seed in you later on.
It did. And here's another thing he said, which I never forgot. And this was later in the year. He came into class, and he told everyone to take out a piece of paper. He said, “I want you to write down, on your piece of paper, all of the equations of physics. Everything that one needs to know in order to derive everything else in the world.” I don't know how much time we had, a few minutes. You could write down the Maxwell Equation and the Schrödinger Equation. Fluid mechanics. Maybe the definition of entropy, and so on. And then, he collected all the papers. And he put them on a table in the front of the room, and he said, “Here on the table are all the equations of physics.” And then, he said, “Fly.” And he's talking to the equations. “Fly.” Nothing happened. The papers just sat there. And he said, “What went wrong? Here are all the equations of physics, but they won't fly. Yet, the universe flies.” That was Wheeler. [laugh]
On the social side of things, you may have heard the quip that at Princeton, the 60s came in the early 1970s. It was a little later to the game than places like Harvard or Berkeley. Were you political at all? Were you involved in any of the anti-war protests or Civil Rights things that were going on at campus in those days?
I participated, but rather passively. I guess it was before I was in college; 1970 was when a lot of campuses shut down after the invasion of Cambodia. I was in high school then. When I was at Princeton, there were some anti-war protests, and I would attend, sometimes with my friends. But it was not something that I devoted much of my time or my mindfulness to. I was pretty focused.
Was the draft something you needed to contend with?
Well, yeah. So, by that time, there was a lottery. And there would be an event where they would, on national television, take balls out of an urn, and it was based on your birthday. So, you would get a number for each date of the calendar year, and if you had a high number attached to your birthdate, then you were unlikely to be called. And if you had a low number, there was a serious possibility of being called. And I had a high number, January 19. My number was over 300. So, I knew it was pretty clear I didn't have to worry about being drafted.
Was a senior thesis at Princeton standard? Or was that an above and beyond kind of thing for you?
Every Princeton student does it. So, it's a big deal. You spend a lot of your senior year doing it. Actually, there were junior papers as well that I think everyone had to do. In physics, we had to do one the first term and second term. And actually, looking back, maybe this was sort of formative as well. So, you're a junior, you don't have any idea what to do for a research project. You're supposed to knock on doors and talk to faculty, see if they have suggestions, say you're interested in working on something. “What do you propose?” And so, I don't know why, I guess maybe I was assigned to him, I went to see Marc Davis who's a cosmologist, he's been at Berkeley for many years now, but he was at Princeton then. And so, he asked me what I was interested in. And what I said was, I was interested in the interpretation of quantum theory. And he said, “Well, you know what—you might be interested in is the EPR Paradox,” which I had never heard of.
And so, he explained a little about what it was. He didn't really know. But that piqued my curiosity, and it turned out that there was a new instructor who had just arrived at Princeton that year named Stuart Freedman, and he had just done an experiment with John Clauser to test the Bell Inequality. And so, I went to him and asked him to fill me in a little bit about that. And he said something that stuck, which I thought was really weird. He had done the experiment with Clauser, which seemed to confirm violation of the Bell Inequality. But there was a competing experiment that had found a different result, that the Bell Inequality was satisfied, so the idea of local realism seemed to be confirmed by that competing experiment, which was done by a Russian group. And I said, “Well, how do you account for the discrepancy?” And he didn't give a scientific answer, he gave a political one, which was, “Well, it has to do with dialectical materialism. So, there's a bias in favor of local realism.” I thought, “Boy, could that really be it?” Anyway, it just kind of shocked me that he said that.
So I wound up reading up on the Einstein-Podolsky-Rosen paper and other papers, and I wrote my JP on that. That's what we called junior paper, JP. And I didn't really think much more about that stuff for some time. But then, when I came back to quantum information, of course, a lot of it was about entanglement. So maybe having had that experience in my formative years helped make me receptive to those kinds of ideas, I don't know.
But in the case of the senior thesis, again, the onus is on the student to find an advisor. And I had had an experience I guess late in my junior year. I used to go to the bookstore, the Princeton U Store, where there were various physics books on display, and I'd browse through them. Every once in a while, I'd buy one. And I found this book by Streater and Wightman, which was called PCT, Spin and Statistics, and All That. And I thought that was a very charming title. And so, I started browsing through it, and having still a sort of mathematical predilection, it appealed to me that there was rigorous mathematics about Quantum Field Theory. And I thought, “Boy, if I really want to understand Quantum Field Theory, I should understand what all this is about.”
And I decided I would ask Arthur Wightman to be my thesis advisor. But then, a kind of really embarrassing thing happened. I won an award that fall at the beginning of my senior year because I had the highest academic standing in my class. And the President of Princeton in the opening ceremony presented this award, and we chatted a little. And he asked me, “Who are you going to do your thesis with?” And I said, “Oh, I'm planning to do it with Arthur Wightman.” But at this point, I'd never spoken to Wightman, he had no idea who I was. You know how it is with professors, they're hard to catch. So I went to his class, and I went up to talk to him after class. And I told him who I was, and he says, “Oh, yeah, I've heard about you from a surprising source.” He had talked to President Bowen who had said, “Oh, I talked to this guy Preskill who's going to do a thesis with you,” and Wightman had said, “What?” So that was pretty humiliating. But because Arthur Wightman was such a sweet man, I didn't stay embarrassed for long.
And looking back, he spent an extraordinary amount of time with me that year. And I had sort of a typical undergraduate’s sense of entitlement. Whenever I saw him in his office, I figured I could barge in and start asking questions. And he never turned me away that I recall. He had sort of a gift for making you feel at ease, like he was really enjoying talking to you. At least I always felt that way. You know how sometimes people wish you'd go away, you can tell, even if they don't come right out and say it. But he was never like that.
In many ways, a senior thesis is a tryout for real scholarship later on. And so, with that in mind, I'm curious how parochial your worldview was, or not, given the extraordinary excitement and advance in particle physics in the early 1970s. Were you aware of what Sam Ting was doing? Were you aware of Grand Unification with Glashow and Georgi? Were these things on your radar? Or was your world of physics really confined to Princeton?
Well, I guess it was a little insular. Of course, at Princeton, asymptotic freedom was discovered by Gross and Wilczek while I was there, and also by Politzer at Harvard. But I was not so aware of that. I do remember the J/psi, the so-called November Revolution. I was a senior, and we had a speaker from the SLAC experiment actually, from SPEAR, who, not long after the discovery was announced, described the event. So even the undergraduates, the excitement bubbled down to us about the discovery of the J/psi and a lot of discussion of what it could mean, what it could be. And so, I was aware of that excitement, but I wasn't clued into the latest developments the way you are when you're a graduate student. Not as an undergraduate. I did read, under the tutelage of Arthur Wightman, a paper by Sidney Coleman that I found very remarkable, and that was part of the reason I wanted to go to Harvard. It was the paper by Coleman and Erick Weinberg.
Erick Weinberg had been Sidney's student at Harvard. And this was the paper about spontaneous symmetry breaking driven by radiative corrections. Very beautiful paper, which I studied in detail as an undergraduate and made use of ideas from it in my senior thesis. So that was fairly current. I guess that paper came out in '73, and I was reading it the next year. And I went into that particular paper in some depth. But I don't think I was aware of Grand Unification until I got to Harvard. Although, the original papers appeared when I was a senior in college.
In terms of Wightman's mentorship, did he essentially hand you a thesis problem to work on? Or you more or less came up with it on your own?
He handed it to me, and it was way too hard. Way too hard. It was to prove that spontaneous symmetry breaking occurs in the Yukawa Theory. And I mean prove it in the sense of rigorous mathematics. That’s a problem about one plus one dimensional field theory, but the tools that he wanted me to use had just been developed that year, the Osterwalder-Schrader Axioms for Euclidean Quantum Field Theory. And he believed that those tools would enable one to show that this theory of fermions and scalars would have a phase in which a discrete symmetry was spontaneously broken. And I tried to do that, and I kind of nibbled around the edges but didn't really make much headway towards a proof. The problem wasn't solved for quite some time. Maybe it took another 15 years before it was solved by real mathematicians. So, I really was not very well equipped for it either intellectually or by background, but I learned a lot. And I think most senior theses turned out that way.
What kind of advice did you get, or not, in terms of choosing graduate programs, particular professors to work with?
Well, I did talk to Wightman about that, I recall. Actually, here's something else, though, which maybe is worth mentioning. At that time, the attitude was widespread that if you tried to get a PhD in physics, you'd never get a job. In the ‘60s, there was a surge of hiring sort of in the post-Sputnik building of science. And so, all these young people got hired as professors in theoretical physics in particular, and all the jobs were filled. And it wouldn't be until the late 1990s that they'd start to retire and there'd be an opportunity to get a faculty job again. I heard this all the time, including from faculty when I was an undergraduate at Princeton. “If your goal is to get a PhD in physics and go on in academia, think again. Because there aren't any jobs.” But somehow, although that should've been very discouraging, it wasn't. I'm not really that conceited of a person, I'm well-aware of my limitations, but somehow, I thought, “Well, for me, it'll be different. And if you don't try, how are you going to know?” Didn't bother me so much.
But anyway, Wightman was quite positive about Sidney Coleman in particular as a potential mentor. The other thing, which I got more from talking to other students, was there was this kind of cultural divide at the time between so-called East Coast and West Coast physics. And at least the buzz with the students was, “The exciting stuff is happening at Princeton and Harvard, and Caltech and Berkeley are still doing what they were doing in the ‘60s, and they haven't caught up. What's exciting is gauge theories. And they're still doing S-matrix theory at Berkeley, so you better not go there.”
What about the theory group at SLAC? Was that something you considered?
No, not so much. Who would I have been aware of? Of course, I knew Sid Drell because I had read his textbook.
Bjorken, for example?
Yeah. I don't know. I wasn't too excited about Stanford or SLAC. And at Caltech, the feeling was the glory days were behind. That's what the students were saying in the mid-70s, that Gell-Mann and Feynman were in their declining years. I had one friend who graduated a year ahead of me, good friend, Orlando Alvarez, who had been a Princeton undergrad. And he went to Harvard. I talked to him a lot, and I figured, “Well, if it's good enough for Orlando, that's probably where I should go.” And I was aware of Steve Weinberg. He had been hired at Harvard relatively recently, I think in '73 he had moved from MIT. He was sort of supposed to fill Schwinger's shoes. And I was aware of the Weinberg-Salam Model, so I knew he was supposed to be a big deal.
But to the extent that I did it, reading papers to get an idea of what people were doing or what would be interesting that faculty members at Harvard, for example, were working on-- I was very impressed by Coleman's papers at the time, less so by Weinberg's. A lot of that had to do with Coleman’s style, which was extremely clear and clever. He would use methods that he would explain very brilliantly, and which you might not have thought of yourself.
So Coleman was really the primary motivation for you wanting to go to Harvard?
That's how I recall it. It didn't turn out quite that way because I became Steve Weinberg's student.
Did you have any interactions with Coleman before you got to Harvard? Did he ever come to Princeton? Did you know him personally?
It's funny because he was at Princeton on leave at the time asymptotic freedom was being discovered. But at that time, I guess I was a sophomore. I don't remember interacting with him at all or even being aware he was there. So, I had not met him. I knew him by reputation and by reading his papers. And getting an assurance from Arthur Wightman that he was doing extremely interesting things. And like I said, I was aware that Weinberg was a big shot. I don't know if I was so aware of Glashow when I was an undergrad. But yeah, I decided I wanted to go to Harvard, and that's how it turned out. And as a first-year graduate student, I took Sidney's Field Theory course, which was very popular, and the room would be filled to the rafters. In fact, that was the year that videos were made of all the lectures. And those videos were later used as part of the basis for a version of Coleman's Field Theory lectures that were recently published.
And so, those lectures were beautiful. Coleman was a legendary lecturer, always extremely clear and entertaining. But it was the kind of thing where while you were listening, you thought everything was perfectly clear, but then afterward, it would be very hard to remember why it had been so clear. So, I would go over my notes at great length afterwards and try to re-derive everything. Sometimes I would go and watch the videos, actually, with another friend from the class. And I was determined to master whatever he talked about.
So, that was a very memorable experience. There wasn't any other course I took at Harvard that was taught nearly as well, even though they were taught by distinguished people. That same year, I took Weinberg's gravity course based on his book, Gravitation and Cosmology. General Relativity course. I liked the book, but his lectures weren’t very good. It seemed like he would come in unprepared, and then he'd open the book and start copying equations out of it. It was very uninspiring. And Shelly Glashow was not a very good lecturer, either. And you kind of got the impression he was winging it. I remember taking group theory from Shelly. It was fun, but it always seemed like very little preparation had gone into the lecture.
It's been said that pedagogy is much more prized at Princeton than it is at Harvard. I wonder if you had that experience, even though those are very different perspectives as an undergraduate to a graduate student.
Well, no, I've never really thought about it that way. But I guess that does align with my experience. I thought there were some very well-taught classes when I was an undergrad. Actually, I'd mentioned taking that freshman course on electricity and magnetism. I didn't mention the instructor. It was Val Fitch. And, of course, he won the Nobel Prize for the discovery of CP-violation, which I didn't know at the time. But he was an inspiring teacher. And I took another course from him on more advanced so-called modern physics, in which he discussed at great length the K-Kbar system, and flavor mixing, and CP-violation. And I wasn't aware until told by another one of the students that that was his research bread and butter. But he sure seemed to know a lot about it. [laugh] And so, that was another very memorable class.
If Steve Weinberg didn't give you a great impression as a student in his class, how did you end up becoming his student?
Well, everybody wanted to work with Sidney because he could explain things so clearly, and he was receptive to a certain degree to supervising students. But in a way which was only half-joking, I rather vividly remember him saying, “I have graduate students like a dog has fleas.” And, of course, he meant it as a joke, and I actually thought it was funny. But he really had a lot of students, and his personal habits were different than they were in later years. This was before he was married and before he'd been diagnosed with adult-onset diabetes. And he was a very heavy smoker and kept unconventional hours. He'd stay up all night and then go to sleep at dawn, and he'd come in in the afternoon. He always insisted that his lectures be scheduled for the afternoon because he would be sleeping in the morning. And when he would arrive in mid-afternoon, sometimes late afternoon, students would be queued up outside his office because they wanted a moment with Sidney. And I just thought, “Who needs it? I've got to stand in line to get a few minutes with my advisor?” So that was part of it.
But meanwhile, I guess I became more acquainted with some of the things Weinberg was doing, and I realized, although I wasn't that impressed by the quality of the instruction in his cosmology class, I thought cosmology was really interesting, and the idea of someone who was pursuing research that was relevant to both particle physics and cosmology appealed to me. And even more so, when the idea started to bubble up that we could learn things about particle physics by studying cosmology. Grand Unification had a lot to do with that. Of course, I did learn about Grand Unification. I would say that was one of the obvious exciting things going on in my early years in graduate school. And it became more exciting when Georgi, Quinn, and Weinberg computed, from the running of the couplings, the Grand Unification scale. Originally, Georgi and Glashow had just noted the scale had to be high, or else the proton would be too short-lived.
But by actually calculating the coupling unification scale, that seemed to indicate at first that proton decay might be right on the edge of observability, and that helped to stimulate the early experiments to detect proton decay, which wound up detecting neutrinos from Supernova 1987A and all that. But the idea that you could observationally learn something about these incredibly high-energy scales by doing the right kinds of observations was exciting to me. And then, the idea came along that baryogenesis, the origin of the excess of matter over antimatter in the universe, had an explanation coming from Grand Unification, where there would be baryon-number-violating interactions, and the history of the very early universe, I thought that was very exciting.
And Steve jumped on that, too. The first paper I remember about that was by a guy named Yoshimura. And I thought that was a really cool idea right away. Actually, around the same time, Dimopoulos and Susskind were working on this, though I wasn't as keenly aware of what they were doing, but the idea that you could understand the excess of matter over antimatter using Grand Unification and early universe cosmology, I thought that was really exciting. There was a bit of a courtship in getting to know Steve. Steve was really only interested in talking about what he was interested in generally. And I didn't spend all that much time talking to him. And when I did, he was usually pumping me for information. But the way I managed to get his attention is I thought at the time that the other really exciting thing going on in theoretical particle physics was the connection of topology with particle physics.
And the two main aspects of that that were intriguing were that 't Hooft and Polyakov had pointed out that in unified theories, there could be magnetic monopoles. And also the idea of instantons, which came around the same time—again, with Polyakov and 't Hooft having a key role. These were quantum tunneling events that occur in Yang-Mills Theory, and they had consequences for QCD, in particular, providing a way of solving what people were calling at the time the U1 problem. There seemed to be a symmetry that QCD should have, which wasn't really a good symmetry.
And it turned out that was due to a so-called anomaly, that the symmetry was good at the classical level but broken by quantum effects. And to understand how that worked, you had to use these topological ideas, or at least that's how people understood it at the time, having to do with instantons. And Steve got interested in instantons at some point, and I had been actually working on a problem relating to instantons, so I knew a lot about it. And so, I was always able to answer his questions. He would ask me technical questions about instantons, which he was trying to learn, and I actually knew.
So that's how he learned my name. But yeah, he was certainly inspiring in many ways, but he never gave me much guidance, and I didn't really mind that so much. But I got guidance from other people, most of all, from the post-docs. And, of course, you learn a lot from your other students. But there were remarkable post-docs at Harvard at the time. The two who I was most inspired by were Ed Witten and Michael Peskin. And Peskin, really, was the closest thing I had to a mentor in graduate school.
What was Peskin working on at that point?
Well, we worked together on a project, actually, which had to do with instantons. And also, with my friend Orlando Alvarez, who I'd mentioned had come to Harvard from Princeton a year ahead of me. We were trying to use these instanton ideas to compute contributions to electroproduction, high-energy inelastic scattering. And we did, we worked on that a lot. It was the first serious research project I worked on. Orlando and I did most of the calculations, but Michael kind of got us started. And we had some pretty interesting results. So that was sort of how I came up to speed on these instanton methods. And my first seminars were about that work, which I'm ashamed to say, and I find a little bit inexplicable, we never published, never wrote it up, although we really did have some good results. We all sort of got distracted by other things. Not long after, Misha Shifman, and Vainshtein, and Zakharov covered some similar ideas in their papers.
The key thing that we realized is that when you do these instanton calculations, they have infrared divergences, and they show up as the instanton has a size, and you have to integrate over all the possible sizes. And if nothing cuts off that integral, it looks like you get infinite results. But what we realized is those infrared-sensitive pieces could be factored into matrix elements, and so there were other short-distance pieces that you really could compute. It was really pretty nice work. We should've written a paper. We didn't. But it still helped me get going because it gave me some confidence that I could do research that people were interested in. Howard Georgi was interested in what we were doing.
And also, Ed Witten seemed to be interested. And my first talk at a conference was actually at Caltech. That was in early 1979, there was a meeting where students were encouraged to attend, and some of us went from Harvard, and I met other students there for the first time from Princeton and other places. But there was a session in which students could volunteer to give 20-minute talks, so I signed up for that. And I was quite excited because, even though it was an evening session, Feynman came, and he was in the audience. He was sort of listening to the talks. Every once in a while, he'd go out in the hallway and just have informal conversations with people.
But anyway, this session went on, and on, and on. It started at 7:00, and I didn't get to talk until 10 pm. Feynman was long gone. I had a terrible cold. I could barely speak audibly because I was so hoarse. But I gave the talk, and it went well. And again, that also helped to build confidence. And I gave similar talks at Harvard. And so, then I started to feel like I was ready to do serious research.
In what ways did this work feed into what ultimately would be your thesis research?
Well, what you might be surprised to hear is that the work I did in graduate school, which became well-known, which was about magnetic monopoles produced in the early universe, was not in my thesis at all. I wrote my thesis on something different. Actually, I think it's interesting that I drew on what I learned from Sidney Coleman and from Steve Weinberg to find my problem having to do with monopoles in the early universe. I was very interested in this idea that magnetic monopoles could be understood using topological ideas applied to unified gauge theories, and that Grand Unified theories should have these magnetic monopoles. But the question I remember discussing with some of the other students, Steve Parke was one of them, was, “Who cares? Because these things are so heavy, you'll never see them experimentally. They're completely irrelevant to any physics we'll be able to do in our lifetimes. So why are you even bothering to learn about these magnetic monopoles?”
This is the very early beginnings of Henry Tye’s and Alan Guth's collaboration. Were you aware of what they were doing? Did you know either of them?
Well, I knew Alan Guth, but I wasn't aware what they were doing until later. Alan Guth was an instructor at Princeton when I was an undergrad. And speaking of great instruction at Princeton, he taught a beautiful class on classical mechanics, which I took as a junior, out of Goldstein's Classical Mechanics. And, really, he's one of the best lecturers. He was Coleman-caliber. And he was clearly working very, very hard on that class. He told me later he was putting an enormous amount of time into it, as I'm sure he must've been. And so, I knew him for that reason. He remembered me later as a student in that class.
But no, I didn't know that Guth and Tye were interested in the issue of production of magnetic monopoles in the early universe. And there were other things that I didn't know and found out later, which preceded my work. One was Kibble had written this paper in 1976 on topological defects that could be produced in a cosmological setting. His focus was mostly on cosmic strings. I didn't know about that at the time.
And there was also a paper by Zeldovich and Khlopov about magnetic monopoles produced in the early universe. I didn't know about their work, either. But I started working on it myself. And Steve was not interested. I tried to explain it to him, speaking to Steve Weinberg: “Look, there's something really interesting here. Grand Unified theories, we have reasons to believe they're the truth. They make this prediction, these very heavy magnetic monopoles. If there's a phase transition in the early universe, these could be created in such a phase transition, and they should still be around. In fact, there should be so many of them around that the universe would've been closed by monopoles, and it wouldn't look anything like the universe we inhabit.” And first of all, he said, “Well, I'm not really sure about the magnetic monopoles existing, and I'm not really sure why I should believe you that they were produced in the early universe. This is all so speculative.”
And it was a little discouraging that my PhD advisor thought I was barking up the wrong tree. But there were other people who did encourage me. Michael Peskin was one. And some people, I actually got some technical advice from. One was Bert Halperin, a condensed matter physicist, but he knew a lot about topological defects in the condensed matter setting. And he helped me to set up a calculation of how many of these monopoles would be created in a phase transition. And another was Ed Purcell, who was, of course, a wonderful man. And I knew Ed because I was TA for his quantum mechanics class. And he was very interested in magnetic monopoles, and in fact, had been involved in searches for them some years earlier, and followed subsequent efforts to detect magnetic monopoles or put limits on their abundance. And actually, there had been a little bit of a false alarm around the time I was entering graduate school. Price claimed to have detected a magnetic monopole in a cosmic ray event, which was later debunked by Luis Alvarez and others as just a misinterpretation of something that could be explained by more conventional phenomena.
It's almost too delicious to think of how, in some ways, Bert Halperin, as a condensed matter theorist, was more helpful in developing your dissertation idea than Steve Weinberg. Can you explain a little bit the science for how Bert's background might've actually been useful? Because at first glance, it's hard to see how condensed matter theory would be relevant for this line of inquiry.
Well, as I mentioned, what I thought was the most exciting thing in my first few years of graduate school that was happening was these topological ideas coming into particle physics, particularly in the theory of magnetic monopoles and instantons. But topological ideas were also becoming increasingly useful in condensed matter, where in different materials, there can be topological defects associated with spontaneous breaking of symmetries. Not usually gauge symmetries, as in the case of the magnetic monopoles. Well, actually, in the case of a superconductor, a vortex in a superconductor is an example of a topological defect in a gauge theory. That is sort of a prototypical example of such a topological defect. Bert knew everything about superconductors. He knew all about vortices. But also, there were point-like defects that occurred in three-dimensional materials like liquid crystals. And Bert knew about that, too.
Furthermore, what I was interested in is what would happen if there was a phase transition in the early universe. If it's SU5, for example—which is the gauge group—it had been understood in the previous few years that at high temperature, even if the gauge symmetry is spontaneously broken—if the Higgs phenomenon occurs at low temperature, at high temperature that symmetry would be restored. So, you would expect very early in the universe that the SU5 symmetry was still intact, but as the universe cooled, the symmetry breaking would occur. There might be a sequence of phase transitions, but there should at least at some point be a transition to the phase in which SU3 cross SU2 cross U1, the symmetry of the Standard Model, is the unbroken remaining gauge symmetry. And one could show that the breakdown of SU5 to the Standard Model would give rise to stable magnetic monopoles. The question was, how abundantly would they be created?
And there were a couple of ways of looking at that, one of which was really the idea that Kibble had discussed, although I didn't know his paper at the time, which is that there's an order parameter, which is fluctuating around while you're in the symmetric phase, but then it freezes out. Like, for example, if you're cooling a material, and it goes from a paramagnetic phase to a ferromagnetic phase, the magnetization locks in, and all the spins line up. And that is the same kind of phenomenon where the symmetry is restored at high temperature and then becomes spontaneously broken at a critical point.
And I thought the same thing would happen in a unified gauge theory. And that that would give rise to the possibility of magnetic monopoles, for one thing, just because of relativistic causality. When the magnetization turns on, the magnetization at one point in space has no way of knowing to line up with the magnetization of another point in space because there hasn't been time for a light signal to travel between this domain and that domain. And so, as the spins start to line up, there will be knots that get locked in. Those are the topological defects. And that's how magnetic monopoles can form.
I had that idea, but Bert told me a different idea, which was that even if the phase transition were a smooth phase transition, if it were second order, that because the order parameters would be fluctuating, you would expect to get a lot of defects, even not taking into account the effects of relativistic causality, and that's what he showed me how to calculate. Which, because I wound up writing a very short letter-length article, got squeezed down to, like, a paragraph or something without many details. That was another thing. Steve did give me one piece of advice. I asked him where I should publish the result that he didn't find very interesting, and he made a suggestion which really surprised me. He said Nature. The particle physics students didn't read Nature. That was where there were papers about astronomy and stuff like that, or biology. But Steve at least had the notion that there was something of broad interest about what I was doing because it related to cosmology, to particle physics, and even to condensed matter ideas.
It was of broad interest, but not particularly interesting to him.
Not interesting enough, but broad. [laugh] But Bert said, “No, Physical Review Letters would be better.” So that's where I submitted it, and it was rejected because first of all, it wasn't novel enough. It'd actually be interesting for me to dig up that referee report. I think I still have it. But also, that it didn't seem to be right because I had overestimated the abundance of the monopoles for some reason which the reviewer didn't explain. But the editors, to their credit, said, “Well, we'll give you a chance to respond and resubmit.” But it was a very bad day because I was already in my fourth year of graduate school. I had no papers. And this was my first one. And it got rejected. And I was pretty depressed. So, I remember my wife thought this would cheer me up—we went shopping, and we bought a color TV set. Up until then, our TV set was this little black and white TV set. And it did cheer me up a little to have a color TV. But anyway, I resubmitted the single-author paper, and it did get accepted.
But what really seems funny and odd to me, looking back, is that when I was applying for post-docs, I had no publications. I had this one preprint, the one about magnetic monopole production in the early universe. And nothing had been published, and that was it. And yet, that didn't seem to be too big an impediment to getting good post-doc offers because that one paper was getting a fair amount of attention. Now, coming back to Henry Tye and Alan Guth, after my preprint came out, Alan invited me to visit Cornell. I don't think Henry was there. I think he was traveling. It was during the summer.
He was probably in China at that point.
Yeah, I think that's probably right. But Alan was there, and, of course, like I said, I knew him from my undergraduate days. And by that time, Michael Peskin was at Cornell. And I think Steve Shenker was there. Yeah, I think he was still a student there. Steve Shenker was there. And Ken Wilson. That was the first time I had a chance to sit— Ken Wilson is one of my heroes. And although I had met him during his visit to Harvard, I'm sure he didn't know me. But during that visit, I got a chance to sit down with Ken and chat. I think that may be really the only time that we ever had a serious talk about physics, so that was memorable.
Do you remember what you talked about?
Yeah, magnetic monopoles. And actually, he thought my paper was wrong. And he thought it was wrong because he thought the magnetic monopoles would be confined. And he was wrong. And I'm not sure I convinced him.
You were confident at this point though.
I knew a lot about magnetic monopoles. Yeah. But anyway, I talked to Alan a lot. I don't remember, was their paper out yet? Not sure. But they had some of the same ideas, and I guess you've already talked to Alan. But I'd been pretty careful in analyzing if a significant number of monopoles were produced, how many of them would survive. And I had an argument, which I thought was pretty convincing, that unless something nonstandard happened in the cosmology, it just couldn't work, that the production of the monopoles was unavoidable. It would be copious, they wouldn't annihilate fast enough, and the universe would be closed by them many times over, and that couldn't be our cosmology, so there had to be some way out.
And during the following fall, I probably should've thought about that more. Because I figured there had to be something about the phase transition that was unconventional. But by that time, I had gotten interested in a different topic, which is what I did end up writing my thesis on, which was technicolor, as we called it at the time. The breaking of electroweak symmetry by strong interaction. So, of course, Alan famously continued thinking about it and had the insight that inflation could blow the monopoles away, but he also, to his credit, realized that that could explain the flatness and isotropy of the universe. And, of course, that idea was very explosive when it came out. That paper had a lot of impact right away. I remember him coming to Harvard and giving a talk about it, which was received with a lot of excitement.
Did you see the transition to technicolor as switching gears? Was it related?
It wasn't that closely related, but I thought the ideas were quite exciting. I was particularly inspired by a paper by Lenny Susskind, which is actually a little ironic because Steve Weinberg had written a related paper. He didn't call it technicolor, but he did call it dynamical breaking of electroweak symmetry, which is what it is. And his paper was, in a way, sort of typical Weinberg style. He calculated everything, and he correctly discussed all the issues. It was a little dry.
And Lenny Susskind is also one of my heroes because of his creativity as a scientist, but also he's a very charismatic communicator in writing and in person. And this was an inspiring paper. And what he realized, which Weinberg had not, was something quite simple, which was, in the Weinberg-Salam Model, the so-called rho parameter, which basically says that the ratio of the W to Z mass is determined by the Weinberg mixing angle theta-W. Steve, by the way, always claimed the W in theta-W stood for weak, and he wouldn't call it the Weinberg angle. He called it the weak mixing angle, in a burst of modesty.
But at any rate, Murray Gell-Mann always liked to say in his snide way, “Oh, we call this angle theta-W because W stands for the last letter in the word Glashow.” [laugh] Anyway, Murray and Steve were not fans of one another. So what Lenny said in his paper was that you could understand how the Weinberg angle was related to the W and Z masses just from a symmetry consideration, and that in the dynamical symmetry breaking scenario, that symmetry would naturally be present, and that the dynamics that you needed was dynamics that we already understood fairly well from QCD—the breaking of chiral symmetry in QCD, which is responsible for the pion being much lighter than other hadrons. That could occur with this new strong interaction with a similar structure to QCD, but which becomes strong at a higher scale, at the weak scale like a TeV, or a few hundred GeV rather than a few hundred MeV, as in QCD. That could account for how the electroweak symmetries get broken. And Lenny called this new strong interaction “technicolor.”
And what I found so appealing about this was that because it was dynamical, it should be highly constrained. One thing I found very curious and was very interested in, for some years, starting when I was in graduate school is, where do the quark and lepton masses come from? In the Standard Model, they're just free parameters. You write down Yukawa couplings, they can be anything, and those determine the mixing angles like the Cabibbo angle and the Kobayashi-Maskawa matrix, it's all free parameters. Same thing for all the masses. And what fun is that? You'd like to be able to explain where those masses come from. And I thought in a dynamical scenario, we'd be able to do that much better.
I was very interested in those ideas for a couple of years, and it turned out that these dynamical scenarios are so constrained that it was very hard to come up with a viable phenomenology. Because you didn't have the same kind of freedom you do when you have Higgs fields, where you can choose Yukawa couplings to be whatever you want. Explaining the masses of the quarks, and leptons, and all that became very challenging. Actually, I'll tell you something funny. When I first came to Caltech, that was in 1983, I thought the important problem in particle physics was to explain those quark and lepton masses. I thought, “If we could do that, if we could understand what that hint was telling us, that would be a good path to understanding what's beyond the Standard Model.”
And so, to remind myself that was important, I made a chart which showed all the masses, the spectrum of quarks and leptons, and I posted it in my office on the bulletin board so I'd see it every day to remind me, “This is the important thing to think about.” And then, a couple of years went by, and one day, I was talking to Mark Wise in my office, who occupied the office next door, and we looked at that chart, which had been on the bulletin board, roasting in the sun every afternoon, and the masses had all faded away. They'd been bleached by the sun. And we took that to be some kind of metaphor for how this problem somehow was too elusive to admit an easy solution. And by that time, I wasn't thinking about it anymore.
I want to ask, at this point, when you're really starting to solidify your identity in theoretical physics, going from magnetic monopoles to technicolor, did you feel at the time that you dipped a toe into cosmology, and then went sort of back to your home intellectual environment of particle theory?
Well, yeah. I don't know if I looked at it that way. But because I got excited about technicolor, I sort of dropped the cosmology ball for a while and focused on technicolor. My interest in cosmology got reactivated partly because of another experimental false alarm. In 1982, Blas Cabrera thought he saw a magnetic monopole. It was on Valentine's Day, 1982. I was still at Harvard. By then, I was on the faculty. And that seemed incredible and really exciting. And hard to explain. He had this little loop of superconducting wire and saw the flux jump, which he interpreted as evidence that a magnetic monopole had passed through the loop. And so, one needed to understand why magnetic monopoles would be plentiful enough for Blas Cabrera to detect one, and at the same time, not do other things, which the astrophysicists told us would be bad. Parker, in particular, had gotten a bound on the abundance of monopoles from observing that if there's a magnetic monopole plasma in the galaxy, it'll short out the galactic magnetic field on some time scale short compared to the galactic rotation time, which cranks up the dynamo.
And so, was there something wrong with that argument? Guess that wasn't really cosmology. But at any rate, we did realize that if the monopoles were very heavy, the story was changed because Parker had assumed they got relativistic velocities, which for the types of monopoles predicted by grand unified theories, needn't be the case. They'd more likely have typical virial velocities in the galaxy like 10 to the minus 3 c. Of course, it turned out Blas Cabrera never saw another magnetic monopole. But it was exciting for a while. And actually, that helped to elevate my star a little bit maybe because now everybody was excited about magnetic monopoles and where they came from. And I was asked to give talks about that and things like that.
To clarify, when you say that Cabrera never saw another one, is that to suggest it's possible that what he saw was a magnetic monopole?
Well, it seems extremely unlikely, right? Because he would've had to be incredibly lucky to see that one and be consistent with other bounds we have on the flux. So no, I don't think he ever explained, or at least never publicly explained, what went wrong or what the right interpretation was of the event he saw. But no, it wasn't a magnetic monopole, sad to say.
Yeah, so then, the next foray into cosmology which had some impact concerned axions and predicting that they could potentially be the dark matter. And probably Alan Guth told you about this workshop, the Nuffield Workshop in 1982 in Cambridge. I was there. It was organized by Stephen Hawking and Gary Gibbons. It was a pretty exciting event. And the big topic there was whether inflation could explain the origin of galaxies by seeding the density perturbations from which galaxies grew. And there was a lot of disagreement at the beginning of that three-week workshop about what inflation predicted.
And I'm sure you discussed this with Alan, but after the idea of inflation, which seemed very exciting, trouble was brewing because how inflation ended was unclear. And Alan and others had done computations of whether, as bubbles of true vacuum appeared in the false vacuum in a phase transition, those bubbles would succeed in filling up the universe and giving rise to a reheated universe that would then be described by Big Bang cosmology, and he couldn't get this to work. But then, around, I guess it must've been, the end of 1981, the idea came along, from Andrei Linde, and Albrecht and Steinhardt, that instead of having to go through a barrier, the universe could sort of roll off the table to end inflation. The energy density would be high because you'd be on a plateau of a potential function, but rolling along, and then you'd start to oscillate in the potential after you roll off this flat part. And that would give rise to reheating.
So, what everybody was interested in was what kind of perturbations of density would be produced in that transition from the inflationary phase to the more standard radiation-dominated phase. And so, Alan, and Starobinsky, and Turner, Steinhardt, and Bardeen, and Hawking, they were all trying to calculate those things. So that was sort of the focal point of excitement. But I went there to talk about magnetic monopoles and to think about what axions might have to do with cosmology. And Frank Wilczek was there, too, who had an interest in axions, as the founder of them—
Actually, just to backpedal for a second, this is sort of a funny story. Or maybe, I don't know, sort of a typical experience of a graduate student. In the fall of 1977, I crank up my courage, and I go to see Steve Weinberg to ask for suggestions for a research problem to work on. And so, he responded immediately with the thing that he was thinking about that day. What was it? Well, he had just read this paper by Peccei and Quinn that potentially pointed to a solution of the strong CP problem, why CP is a very good symmetry of the strong interactions. And their idea has something to do with the Higgs sector, and how you can introduce another Higgs field, and that can help. He said, “So what might be interesting to work out is, what's the phenomenology of this type of model with more than one Higgs field?”
So, I thought that sounded interesting. So, like any graduate student would, I spent the next couple of weeks reading everything I could find on Higgs phenomenology. But then, Steve, after a few weeks, announced he was giving a seminar, and he explained the idea, which we now call the axion. He actually called it the Higglet, because it was a little Higgs, a light Higgs, at the time.
The Higglet never caught on.
[laugh] Higglet didn't catch on. And Frank's good at names, isn't he?
And so, Steve was trying to figure out at that point whether the Higglet was ruled out by experiments that had already been done. But I was a little miffed because I thought, “Boy, Steve suggested this problem. Why didn't he tell me that he was making progress? And here I am, spending every waking hour learning about Higgs phenomenology, so I'll be ready to dive in.” But, of course, I'm sure Steve didn't give it another thought. I doubt he remembered that he had even mentioned it to me. I just happened to walk into his office at the time he was looking at the paper or something. Anyway, I was reminded of that. [laugh]
While we're still on graduate school, who was on your committee?
Weinberg, Coleman, Georgi, and Estia Eichten, who was junior faculty at Harvard at that time. I do have a very disturbing memory of my exam, actually. You're not going to believe it when I tell you this, probably. Well, here's the thing. I didn't understand what a PhD defense was. Somehow, I didn't realize I was expected to give a presentation. How could I have not known this? All the other students seemed to know it. So, I thought, “Well, they've all read my thesis, and they'll come in, and they'll ask me questions about it.” I had nothing prepared. My thesis was related to technicolor. Didn't have anything to do with cosmology and magnetic monopoles. But actually, it was something Steve was very interested in.
I'll tell you something funny about that, too. It was very Weinbergian, what I did. I studied what's called the vacuum alignment problem. And what that means is, you have spontaneous symmetry breaking, but you also have some explicit breaking of the symmetry. And the explicit breaking of the symmetry determines which of the degenerate vacua will actually get preferred. If you have a ferromagnet, and you turn on a small magnetic field, then the lower energy vacuum will be the true vacuum. And so, in this case, I had some global symmetries, but then because I also introduced gauge interactions, that explicitly broke some of those symmetries. And the interesting thing was that the way that the symmetry breaking aligned with the gauge symmetry gave rise to some phenomenological predictions, that there would be light mesons coming from the technicolor sector that you might be able to see in collider experiments and stuff like that.
And I gave a talk about this in early 1980 at Harvard. And Steve was there, and he seemed enthusiastic about it. And then, maybe a month or two later. Now, I mentioned Michael Peskin earlier. Michael, that year, the 1979-1980 academic year, was spending the year in France at Saclay as a visitor, and he had written a paper on a very similar topic with very similar conclusions while he was in France, and I hadn't been communicating with him. And he sent it to Weinberg. And so, I don't remember exactly why, but I came into Steve's office, and he said, “I have this paper from Peskin. It's very interesting, and he does blah, blah, blah.” And I said, “But, Steve, that's what I talked about at that seminar two months ago.” He didn't remember that at all. Later, maybe he recollected, he was apologetic about expressing that enthusiasm about Peskin's paper without realizing that much of it overlapped with the content of my thesis.
So anyway, that's what was in the thesis, so I figured I had a receptive audience because I knew Howard was also quite interested, and Estia, too. But I didn't prepare anything. And Arthur Jaffe also came, and he brought Cliff Taubes, who was his graduate student and was actually my officemate. And Cliff became a famous topologist. He's won many awards, and he's a great mathematician now. They thought I was going to talk about magnetic monopoles, which they were both interested in, so they came in to hear my talk. And I just got up there, and Steve said, “OK, now you can begin.” And I thought, “What?” I had nothing prepared. So I just started mumbling about what was in my thesis, very stream-of-consciousness. It must've been excruciating to listen to. And that was my PhD exam.
But you survived. You lived to tell the tale.
I lived, yeah. But I try not to think about it. But that's what really happened.
Was the game plan to stay at Harvard already buttoned up before your defense?
Yes. So, I became a junior fellow after my PhD in the Harvard Society of Fellows. The Society of Fellows, at least in those days, would appoint eight new fellows every year, and they were in all fields. Not just science, in fact, humanities as well. But it was kind of typical to have one or two theoretical physicists in a class, and pretty often, they were Harvard graduate students, not always, who became junior fellows. Some of my predecessors the previous year or two were Paul Steinhardt, who got a Harvard PhD and became a junior fellow, and also Ian Affleck, who later became a very distinguished condensed matter theorist, though he was doing particle theory at the time. So, in my year, I became a junior fellow, and also in that same year was Mark Wise, who became a good friend. He had been a graduate student with Fred Gilman at Stanford. And Cliff Taubes, who was doing topology. We were all junior fellows together.
Was your sense that the Society was essentially finishing school to see if you could elevate to become a Harvard professor?
I didn't really look at it that way because it was so rare for junior fellows, or even Harvard junior faculty, to become tenured professors.
So as naive as you were about what a thesis defense was, you clearly understood the culture of not promoting from within at Harvard.
Oh, that was well-known. Although, actually, we used to joke about it, the students, because we were aware that there had been, in recent years, strong assistant professors doing excellent research who had not gotten tenure at Harvard. Tom Appelquist was one, a couple of years before I arrived. And actually, I had two collaborators who were junior faculty while I was in graduate school, Estia Eichten and Ken Lane. And there was not any serious expectation that they would become tenured professors at Harvard, and they didn't. But, of course, they both went on to good careers. And that was the typical pattern with the Harvard junior faculty, and with the junior fellows, that they would usually go elsewhere and be successful. Now, I did something unusual. I was a junior fellow for only one year, even though it was a three-year appointment. I became an assistant professor and then an associate professor in the following two years.
And, of course, the associate professor is not tenured.
No, and I didn't really think it was likely that I would get tenure, and I wound up going to Caltech.
But to be promoted to associate is an indication that it's a step in the right direction.
Well, maybe so. But actually, what happened was this. My wife had just gotten her business degree at MIT at the Sloan School, what everybody else calls an MBA, but they call a Master's of Science in Management, and she was working at a company that seemed like a real up-and-coming company, Digital Equipment Corporation, which made the VAX minicomputer and other products. And it looked like she was off to a great start in her career, and we wanted to have the flexibility of staying in the Boston area longer. And I thought if I transitioned into the junior faculty slot, although it would mean I'd have to teach and other stuff, we would at least have the flexibility to stick around longer. As it turned out, I didn't do that. I was only at Harvard for three years.
Actually, I remember I was visiting Santa Barbara. This was at the very beginnings of what was then the Institute for Theoretical Physics in Santa Barbara, now the Kavli Institute for Theoretical Physics, and there was a program that Frank Wilczek was organizing, which I was participating in. This was in early 1981. I'd only been a junior fellow for a few months, and Sidney Coleman called, and he asked me if I wanted to be an assistant professor. So, they'd gone through all the assistant professor applications, and for some reason, they thought, “We're not interested in these people. Why don't we promote one of the junior fellows?” So, I said to Sidney, “Why would I want to do this? I've got the perfect job now. I'm a junior fellow, I don't have to teach.” And he said, “You're right. I don't have any idea why you would ever do this.” And then, I said, “Do you happen to know what the salary is?” And he said, “No, I have no idea. But I'm sure it's an absolutely pathetic salary compared to anything but the salary of a junior fellow.” [laugh] And so, then we hung up, and I thought I had turned down the assistant professor offer.
But I had an officemate who was a somewhat more senior physicist, Ling-Fong Li was his name. He's, I think, still a professor at Carnegie Mellon after many years. Maybe he's retired by now. And so, he heard the conversation, at least my end of it, and he gave me some sort of brotherly advice. He said, “You really ought to think more carefully about this because it's a step up the ladder. Maybe you shouldn't just off-the-cuff turn down a junior faculty offer, even if it's one that is not really in any realistic operationally meaningful way tenure track.” And so, after talking to Roberta, my wife, and realizing, “Well, maybe having the flexibility to stick around Boston longer would be nice,” I talked to Howard, and I decided I would do it. So, I became an assistant professor one year after my PhD. And then, after that one year, I had the offer from Caltech, and I think it was in response to that Harvard offered the associate professor position. Which was really just the same thing, but slightly more prestigious sounding. But then, I decided to take the Caltech offer.
Now, associate professor at Caltech is a tenured position?
No, in fact. Because I had been offered the associate professorship at Harvard, in my negotiation, such as it was, with the division chair, who was Robbie Vogt at the time (he later became the Director of LIGO), I said that I thought they should bump the offer from Caltech up to associate professor because that's what I was at Harvard. And he agreed, and we also agreed to advance my tenure clock, so that I would be reviewed after four years instead of six. And I thought that sounded like a good deal. I don't know if it was really the best thing. Actually, I'll tell you another funny thing I remember about when I was trying to make up my mind about going to Caltech. I went to visit, and I had one-on-one meetings with a number of faculty. This wasn't really an interview, I already had the offer. It was to convince me to come.
Who was driving the recruitment at Caltech?
I think it was David Politzer. Barry Simon claims he was the chair of the committee, maybe so. And Murray called me a few times to try to persuade me. And Barry was one of the people I talked to when I visited. That's the story I was going to tell. And he was trying to reassure me because I was, of course, interested in the tenure review process. And it was very different from Harvard at Caltech. Junior faculty were normally promoted to tenure. At least, that was more common than not, whereas at Harvard, it was quite the opposite. And he said, “Well, here's the thing. We don't expect you to be a Witten. That would be unrealistic to think that you would turn into Ed Witten. But we do expect you to be a Georgi or a Wilczek.” And I actually thought that sounded great. [laugh] So I tell this story to Ken Lane, and he said, “Are you crazy? That's like saying you don't have a prayer.” [laugh] But somehow, I didn't hear it that way when Barry first told me that.
Well, I guess when you start the conversation with Witten, that really just messes up all of the parameters.
[laugh] Uh-huh. And so, Mark Wise also got an offer from Caltech, and we both went more or less at the same time.
Did you consider tenured offers at that time?
No. In fact, I had another nibble from Berkeley, and it came not that long after I'd taken the Harvard faculty position, and I said I didn't want to be considered. And, you know, my reasoning at the time was that, as a research environment, Harvard was much more exciting for me than Berkeley. And I didn't really put much weight on the fact that if I went to Berkeley as a junior faculty member, the path to tenure would be much more realistic than at Harvard. I figured, “Well, I'm still young. I'm just starting out as a junior faculty member. I'm going to get tenure offers from somewhere.” I don't know what I was thinking, but I kind of brushed off Berkeley. And then, when I heard from Caltech, I had had a little time to think about it. I think it was David Politzer who called me and asked if I was interested, and I figured, “This time, why don't I just see how far this goes?” And so, I wound up getting an offer and accepting it.
To go back to an earlier comment about your impressions of West Coast physics when you were considering graduate programs, as you say, in the mid 1970s, the word on the street, as it were, was that Caltech was sort of living in the past. Was your sense a decade later that that was still the case? Or was a new era dawning, and hires like you and Mark Wise were part of that transformation?
I thought it was still the case, and I thought a transformation was needed. And in fact, they made multiple offers. They tried to hire Michael Peskin and Lenny Susskind, and, I think, Larry Yaffe, who had been a post-doc at Caltech, in fact, and then wound up going to the University of Washington. So, they realized they needed to do some rebuilding. And if Mark weren't going, I would've been more hesitant about it. Now, the other person who was active in the particle theory group was John Schwarz. At that time, he was not on the professorial faculty. And, really, at the time, string theory was not so respected in most of the particle physics community. And John was plugging away in his work with Michael Green and making rather brilliant progress in string theory, and identifying type I and type IIA and type IIB, and so on.
But it was very underappreciated. And that changed overnight in 1984 when they discovered the anomaly cancellation. And suddenly, Schwarz was a hero. He became a Caltech professor, I think, not long after. But yeah, so there was John Schwarz doing his string theory, which was sort of seen as outside the mainstream, Feynman and Gell-Mann, not so active anymore, David Politzer, still active, but maybe not on as steeply upward a trajectory as he had seemed to be a few years earlier. Steve Frautschi, Fred Zachariasen, also not very active. So, it was a group in need of some fresh blood.
Were your interactions with Feynman intellectually, in some ways, the starting point of your budding interest in quantum information?
I'd have to say no. I did have interactions with him having to do with QCD, which I think we both enjoyed a lot. We were both interested in quark confinement, and chiral symmetry breaking, non-perturbative QCD. We talked about those things a lot. And I actually knew a lot of things about that. Feynman would often give the advice to students, “Work things out for yourself. Don't necessarily pay attention to what other people are saying.” Not such good advice, in my opinion, for most students. That was not my style. I tried to read everything. Maybe I overdid it. But in particular, I knew most of what was going on at that time, people trying to understand quark confinement.
And among my heroes were Sasha Polyakov and Gerard 't Hooft, and I knew the things that they had done. Which was actually related to what Feynman was trying to do, but he hadn't studied their work, and because I had, he appreciated that I had something useful to tell him. I did not disguise the fact that I was mostly reporting what I had learned from these other papers. But he didn't seem to pick up on that. He didn't really seem to care whether it came from 't Hooft and Polyakov. He was hearing it from me. And so, he developed what I interpreted as admiration for me, and he would often want to discuss those things. But while I knew he was interested in quantum computing, I was not so interested. At one point in the mid-80s, I was made aware, I think it came from Charlie Bennett, that David Deutsch had written an insightful paper about quantum computing, which I read. And I got the incorrect impression that quantum computing was really not much different from randomized computing, from rolling dice in a computation, and was just sort of that dressed up in a fancy style, and that it wasn't really more powerful in a meaningful way than ordinary computing. And, of course, that was wrong.
This is something that's actually puzzling to me because I did a little dipping into Feynman's papers for his 100th birthday, because I was asked to give a talk at an APS meeting about Feynman's scientific legacy. His paper based on his 1981 talk at a conference on the physics of computing, “Simulating Physics with Computers,” was really very insightful, and he has the idea and expresses it fairly clearly and correctly that when it comes to simulating complex, many-particle quantum phenomena, ordinary computers just won't be able to do it efficiently, but with a quantum computer, you could. Now, this is a truly great idea, and almost 40 years later, it's still a very current and relevant idea. And then, he wrote one more paper, which was also rather insightful, about quantum computing in 1985.
But when I said I dipped into his papers, what I mean is, he taught a class jointly several times with Carver Mead and John Hopfield about the limitations and potentialities of computing machines, in which he talked about quantum computing. And in those lectures, in the notes and in the recollections of the students and even TAs, he did not, in class, ever say anything about quantum computing being more powerful than ordinary classical digital computing. That great idea from 1981 didn't make it into the class. The class did include content from the 1985 paper, and I think the first time he taught it was before that paper appeared. He seemed really interested in, was there a thermodynamic cost to computing? Charlie Bennett had written a famous paper about that, saying, in contrast to what Rolf Landauer had said earlier, that you can compute for free, in principle, that processing information has no unavoidable thermodynamic cost. What does have a cost is erasing information because that's a dissipative process. So if you just want to compute, and you don't insist on erasing, in principle, you don't need a battery. I emphasize in principle.
And so, that was a great insight. And Feynman was interested in whether that would still be true quantum mechanically because Bennett's analysis had been classical, and he concluded yes, it was still true quantum mechanically. And that was what he focused on in the class, not that quantum computers would be able to compute things that we couldn't possibly compute efficiently with classical machines. Now, he was well-aware of this, and it's far more exciting, it seems to me, and it would seem to almost anyone, that that statement is true than that you can quantum mechanically compute without paying a power bill. But that wasn't what he chose to emphasize. Anyway, in answer to your question, we never really talked much about quantum computing. I knew he was interested in it. We talked about other things. I wasn't that interested. And, of course, I regret that now because eventually, I became very interested.
What were you most interested in at the time you joined the faculty at Caltech? What were you working on at that point?
Well, I was still interested in the question of the quark and lepton masses at first, and one of the things I was working on was models in which quarks and leptons are composite. I'll tell you a story about that, too. When I first came to Caltech, I had been at Caltech to give seminars a couple of times, and Feynman had been there. In fact, he had given me a hard time. The first time I went to Caltech to give a seminar, I think it was in 1981 when I was making that visit to Santa Barbara, I came down for a couple of days and gave a seminar. Feynman and Gell-Mann were there, and Feynman was quite hostile. And in fact, I was talking about the idea that quarks and leptons are composite. And he made some quite disparaging remarks. But when he would do so, Gell-Mann would defend me. And then, I said something that Murray didn't like. And then, he made some rather cutting remarks, and then Feynman would defend me. I had often heard that when Feynman and Gell-Mann gang up on you at a seminar, it can be a very unpleasant experience. But things didn't go that badly for me. I always had one of them in my corner.
I don't know if Dick [Feynman] remembered that. Probably not. But anyway, when I first arrived, I thought, “Well, I'll have to go up and introduce myself to Feynman.” I went to Helen Tuck, who was the group's secretary, and I said, “When's Feynman going to be around?” “Oh, he's at his vacation home. Probably be back in a couple weeks.” I said, “Well, let me know when he's in town. I'd like to chat with him.” And so, one day, I'm in my office, and then I hear this kind of drumming. Feynman's just walking down the hall, drumming on the wall with a kind of pent-up nervous energy. So, I know that's got to be Feynman because who else would be drumming on the wall? And, indeed, it is.
So I popped out of my office, and Helen saw we were both there, and she introduced me to Feynman. And she said, “Professor Feynman, this is Professor Preskill, our new faculty member.” And Feynman looks at me like he has no idea who I am. And he said, “Oh, yeah? What group?” And I said, “Particle theory.” And he said, “Oh, well, particle theory, that could mean anything. Some people work on quantum electrodynamics. What do you work on?” And I said, “Well, I've been interested in the connections between particle physics and cosmology.” And then, I said, “And I've also been working, without a great deal of success, on the idea that quarks and leptons can be composite particles.” And he kind of looked at me, and he said, “Well, I must say, your lack of success has been shared by many others.” And then, he pivoted and went into his office. That was my introduction to Feynman as a Caltech faculty member.
So anyway, that's, I guess, partially the answer to the question that you asked: that's what I told Feynman in 1983 I was interested in. I actually got interested in other things having to do with topological defects. I got interested in cosmic strings and whether they could be responsible for the formation of galaxies. And that kind of led me into other properties of defects, including anyons, particles with more general statistics than bosons and fermions.
And then, I guess a couple years after I got to Caltech, something struck, which some of us called “Cole-mania.” This was an idea that came from Sidney Coleman that we could explain the cosmological constant being zero using the idea of wormholes in spacetime. Topologically nontrivial spacetime histories. And I thought this idea was very exciting. I worked on it for a couple of years. It sort of petered out, but in a way, it's come back quite recently, but in a different and more successful form. The idea of wormholes in Euclidean quantum gravity has had surprising success in the last couple of years in helping us to understand how information leaks out of an evaporating black hole, encoded in its Hawking radiation. And it's rather amazing because it turns out that you can compute the information content of the Hawking radiation coming out of the black hole without using string theory or any other microscopic theory of gravity, but just by doing semi-classical gravity and the path integral formalism in imaginary time.
And it's a bit baffling, actually, because we always thought that getting information out of black holes would involve understanding deeply the microscopic properties of the theory. And here, just semi-classical calculations seem to secretly know that's how things have to work. We were working on related things back in the 1980s. But didn't quite have the right ideas. In fact, we were concerned that these wormholes would want to be really big instead of just microscopic. We thought they were supposed to be little, tiny things so that their effects on large-scale physics would be sort of disguised. But it turns out, in these new calculations, big, fat wormholes are the answer rather than the tiny ones. And so, I spent some of my time trying to stamp out the giant wormholes and wasn't really able to do so. But it turns out, they were more of a blessing than a curse, and I didn't appreciate that at the time.
You already explained the formative and negative influence of the cancelation of the SSC on your research interests. What about on the other side of the narrative, during the planning stages of the SSC in the mid-late 1980s? Were you following these developments and thinking about your research agenda along the lines of what might be discovered at these higher energies?
Well, not deeply. Of course, I was aware of planning for the SSC, partly because of Barry Barish, my Caltech colleague, who was heavily involved in one of the major experiments planned for the SSC. One of the post-docs I worked with at Caltech, Ben Grinstein, became one of the first staff members of the SSC lab. He was one of the early members of the theory group. So, he moved to Texas. But as far as thinking in-depth about what experiments could and should be done and what research opportunities that would create, no, not so much.
What were you doing in the early days with regard to quantum information yourself, regardless of your interactions with Feynman?
Well, something that's happened to me several times in my career is, teaching a course was kind of pivotal for me. The first time, actually, when I was at Harvard, and then continuing at Caltech, I taught an advanced applications of Quantum Field Theory class. And I spent an enormous amount of time preparing those lectures, and I wrote up notes. They were handwritten notes. I didn't know how to use LaTeX or anything in those days. But those got widely circulated. More important for me, a lot of what I know about QCD, and Field Theory in general, and non-perturbative methods, I cemented that knowledge by teaching that class in the early 80s, and then several more times. Then, in 1990, Kip Thorne encouraged me to teach a class on Hawking radiation. This was going to be the third term of our three-term sequence on General Relativity.
So, the students would know about black holes and other classic applications of GR, but they wouldn't know Quantum Field Theory, so it was my job in ten weeks to bring them up to speed enough on Quantum Field Theory to understand why black holes radiate. And in discussions with Kip I decided, pedagogically, the right way to do that is to start at the beginning and explain the free scalar field, in a nontrivial spacetime background, and then to explain the Unruh Effect, why an accelerated observer in the vacuum sees radiation. And that gives you sort of an efficient path to explaining Hawking radiation because if you consider an observer who hovers at a fixed position close to the horizon, that observer is uniformly accelerated compared to a freely falling observer. The freely falling observer doesn't think anything special is happening at the horizon, so it looks like the vacuum. And the accelerated observer, who really is static in the Schwarzschild coordinates, sees that thermal Unruh radiation. And then you can understand why it's there. The radiation tries to escape from the black hole, but most of it falls back down. But if it's directed radially outward, or nearly so, it manages to escape, and that's the Hawking radiation that makes it to a distant observer. And you can understand in some quantitative detail how it all works.
So that's how I structured that class, and in the process, I think, I deepened my understanding of Hawking radiation, and it deepened my puzzlement about what it means for physics and how it can be that the information that falls into a black hole isn't lost forever. Or is it lost forever? That's what Hawking claimed. And so, I started to direct some of my effort towards understanding quantum aspects of black holes. And that sort of heightened my interest in quantum information. Actually, I wound up collaborating with Frank Wilczek and Sidney Coleman on some papers on what we called quantum hair on black holes to explain how some quantum phenomena would affect the Hawking radiation. But we were pretty far away from giving a complete account of how information manages to escape from black holes.
But it was in the process of that research program that I became acquainted with things like the No Cloning Theorem, the idea that you cannot copy, with very high fidelity, unknown quantum states. And my appreciation of the relevance of that to black holes was heightened by discussions with Lenny Susskind at the time. And I wrote a paper in 1992, which was called “Do Black Holes Destroy Information?,” in which I kind of laid out the arguments why you might expect that black holes do destroy information, and saying, “There's clearly something big missing, that there's a potential revolution.” And that got, I guess, widely read.
If you can explain, what is the information that black holes do or do not destroy? What does that even mean?
Well, what Hawking had done is, he had explained, by a kind of semi-classical argument, that the radiation emitted by a black hole would be thermal. It would be featureless. And so, if we imagine a process in which we encode a lot of information in the collapsing body from which a black hole forms, or even if a black hole which has existed for a long time is sitting around, and I have a lot of confidential information in my diary, I figure to make sure nobody will ever see it, I throw it into the black hole. And as long as no one is foolish enough to jump into the black hole and be destroyed, it seems like that information is safe from anyone who wants to invade my privacy. But we expect that as a black hole continues to radiate, its mass is reduced. Eventually, it disappears completely. And according to Hawking, the radiation that it emits is completely featureless and doesn't tell me anything about that diary. Physicists like me found that very disturbing because we are steeped in the idea, which is an essential prerequisite for the way we think about Quantum Field Theory, that evolution in a quantum system is unitary.
But this said that a state which is initially pure, a quantum state about which we, in effect, know everything that can be known, if a black hole forms and then evaporates, that will become a highly mixed state, and you won't be able to recover information about what that initial state was, including the information in that diary. And so, it was actually an interesting cultural split, and to a certain degree, still is, between people with a particle physics training and a General Relativity training. Those of us who originated in the particle physics community have been very resistant to the idea that evolution can be non-unitary, that it can be irreversible, that information can be destroyed and lost from the universe forever.
Those with a General Relativity background say, “Well, look, there's a singularity. And time ends there, and information goes there, and it can never come back. It's lost forever.” And they're comfortable with that. And to reconcile those two points of view has been a great challenge that people have been working on for 45 years, or a little longer now, since Stephen Hawking first proposed the idea that black holes destroy information. And the struggle goes on. I think we're making progress, but it's still not a completely solved problem.
Where does quantum supremacy enter the picture?
Well, I first used the words quantum supremacy, or one of the first times, anyway, at a Solvay Conference. This was in 2011. As has been the case in recent years, David Gross was the organizer, and they had a committee of people who decided on the agenda. And they decided that the theme would be a very broad one. It was going to be the Theory of the Quantum World, the title of the conference, and different aspects would be represented. Quantum gravity, cosmology, condensed matter, but they also decided to include quantum information. And that was sort of a milestone in my view, that quantum information had, by that time, garnered sufficient respect that it was considered to be part of that agenda, one of the core issues in theoretical physics. So, I wanted to participate because I thought that was a good thing. Tony Leggett and I were the quantum information speakers.
And so, there were a few points which were sort of second nature in the quantum information community, which I thought maybe were not so well-appreciated by the broader theoretical physics community, and I wanted to focus on some of those things. And one of those was that we should be able, with quantum systems which are not far in the future, based on the quantum technology we currently have or will soon have, to do things that we'd never be able to do in any reasonable amount of time in a classical world. And that this was an important principle of physics, this distinction between quantum and classical: there are things that can happen quantumly that we can't even simulate efficiently by any classical means, including with our most powerful supercomputers, and that was something that physicists should set out to verify in laboratory experiments because it's such a fundamental thing.
And so, I outlined a few different experimental paths to doing so, and none of this was very original. It was more of a synthesis of things that were fairly well-known in the quantum information literature, but not so well-known more broadly. And I pointed out that demonstrations of this notion, which I called quantum supremacy, would be accessible before we necessarily had powerful quantum computers that could solve hard problems that we care about. And so, there should be sort of a near-term goal just to show that there are things that these quantum platforms could do that we couldn't otherwise do, and that this is a verification of something about quantum mechanics that we've never been able to test in the laboratory before. And it is, therefore, important, and potentially could even fail.
So, I emphasized that, and I wrote up the talk, put it on the arXiv. And some of the practitioners in the field sort of seized on it as a call, most notably the Google AI Quantum group wrote several papers about how they aspired to do experiments that would demonstrate quantum supremacy, and then ultimately, with some fanfare, they announced that they had achieved it in 2019.
I hope you'll indulge me while we're still on the topic of black hole information. I know you've told this story before, but I'd love to hear it from you again, the origins of the Thorne, Hawking, and Preskill bet. How did that all come together?
Well, actually, there are three bets. So, it started in that visit that I mentioned earlier that Kip and I made to Cambridge in 1991, in which we surreptitiously offered the Feynman Chair to Stephen. And during that visit, we had a discussion about numerical relativity, I guess it was. And about what you could maybe learn about gravity by doing computer simulations. And I got to know Stephen better that time and in later years, when he visited Caltech. But I'd gotten to know him in 1982 a little bit at that Nuffield Workshop. And I kind of sensed the first time I met him, I'm not sure why because I'm not usually so bold, that he would appreciate being needled and being teased. And so, once we were sitting at a meal with a group of other scientists, and he said something, and I said, “Well, what makes you so sure about that, big shot?” And then, I said, “Well, if you listen to Mr. Know-It-All over here, then such and such.” And he seemed to appreciate this. At least, I interpreted his facial expressions as appreciation for my willingness, a very junior scientist, to make fun of the great Stephen Hawking.
Who was known to have a good sense of humor.
Well, yeah, but even so, it was a little out there for me to act that way. Nobody else was doing it. And I didn't really know him that well. So he potentially could've taken offense, but he seemed not to. And so, in 1991, I was still playing the same tricks when we visited. And we talked about cosmic censorship, and I made some remark about, “Well, it could fail.” Cosmic censorship was the conjecture that Roger Penrose made that singularities will always be clothed by horizons, for example, deep inside a black hole. And I said something about how the evidence for it is not really very strong. And Stephen said, “Well, if cosmic censorship is false, then we can't calculate anything. We might as well throw out the theory. If there are naked singularities, that means that things happen that we just can't compute.”
And then, I sort of rose to the bait, and I said, “Well, what makes you so sure, Mr. Know-It-All? There you go again. And I'm especially surprised to hear that from you, Stephen, because you launched your career by proving singularity theorems about the origin of the universe, showing that if we go backward in time, there must be a singularity in our past. And you've, in recent years, been talking about how we can nevertheless make predictions”—I was thinking of the Hartle-Hawking proposal for the wave function of the universe, which is sort of a way of avoiding the singularity—"and so, it doesn't make sense to me for you to say that if there are singularities in classical General Relativity, that means we can't calculate anything. Instead, it means that quantum gravity will have to come to the rescue. And, in fact, you've been working on that.” And he kind of glared at me for a moment, and he said, “Oh, yeah? You want to bet?” And, of course, Kip was also there, and Kip and Stephen had a history of bets. They had one about Cygnus X-1, whether it's really a black hole.
And so, Kip's ears perked up, I think, at the prospect of making a bet. And so, he jumped in and started discussing the terms. The next time Stephen visited Caltech, which was later in 1991, Kip had prepared a written version of the bet for us all to sign about whether there are naked singularities or not. And some years later, I guess it was 1997, I think Kip and I were both surprised that Stephen conceded that bet because it had been convincingly argued that naked singularities can occur under very carefully tuned conditions, if gravitational collapse occurs with the initial conditions chosen non-generically. And my understanding of the bet was that that wouldn't count, that it had to be generic initial conditions. But Stephen made quite a show of conceding. He introduced me at a public lecture and conceded the bet, offering t-shirts to Kip and me on that occasion, because the terms of the bet were that the loser was supposed to present the winner with an article of clothing with a suitable concessionary message to cover one's nakedness.
So, I think he got a kick out of conceding the bet in such a ceremonious way. And by that time, we had had several rounds of discussions about black hole information and whether black holes really destroy information. So, we decided to write that bet up during that visit. And we wrote it up and signed it. But now, though Kip was on my side on the naked singularity bet, he joined Stephen's side on the quantum information bet. I said information could escape from black holes, and Stephen and Kip said it would be destroyed. And here, again, when Stephen eventually conceded the bet in 2004, I was surprised, and I thought his grounds for doing so were a bit flimsy. But again, he did it with great showmanship at a meeting called GR17. Every couple of years, there's a big General Relativity conference. It was in Dublin that year. Kip and I were both there to give talks as prearranged long in advance. Stephen made a last-minute request to be added to the conference schedule because he wanted to give a talk.
And, of course, if Stephen Hawking makes such a request, it's granted. And the rumor started to spread that he was going to make a big announcement about black hole information. And so, the press turned out in force. It was quite atypical for a physics conference. There were TV cameras and many print and electronic media reporters there for Stephen's talk on a technical topic at a physics conference. At the end of it, he announced that he had concluded that information does escape from black holes, and that he was going to concede the bet to me. Kip was not willing to concede, so from his point of view, the bet remained open. And Stephen presented to me right there on the stage, or his assistant did, a baseball encyclopedia. The terms of the bet were that, since it was about information, the loser would present the winner with an encyclopedia of the winner's choice, from which information could be withdrawn at will. And he knew I liked baseball, so he thought it should be a baseball encyclopedia. But we were in Ireland. You can't get a baseball encyclopedia in Ireland.
So his assistant, Andrew Dunn, made a last-minute request to have a copy of The Ultimate Baseball Encyclopedia, which had recently been published, shipped overnight. I guess it came just in time. So those were the pictures that showed up in magazines and newspapers of me holding the baseball encyclopedia. I became briefly famous. It was surprising because I didn't think Stephen's reasons for suddenly reversing his position were very well-founded. It was a little embarrassing because naturally, the press had the impression that the concession was based on my own scientific contributions, which really wasn't the case at all. And some of my colleagues even questioned whether I should've accepted it under the conditions where I didn't really consider the issue to be definitively settled as a scientific question. But what was I supposed to do? The guy was handing me the encyclopedia in front of 1,000 people.
What was your understanding of Stephen's analysis of the developments that caused him to concede where Kip had not?
Well, actually, it was based on Euclidean quantum gravity, which he had long argued was the right approach for answering deep questions about quantum gravity. And it was pretty close to the idea which I mentioned a moment ago, which has arisen just in the last year or two, of using Euclidean quantum gravity to understand how information can get out of a black hole. But it was based on a somewhat different observation, which, I think, doesn't really hold up to scrutiny. He argued for technical reasons—which I think are not convincing—that when you do the Euclidean path integral, there are different topologies that you can include in the histories of the spacetime, and that there's a reason to include only the simplest topology. And the interpretation of that simplest topology is the black hole doesn't form at all. And the other topologies, which were reputed to describe the effect of a black hole that forms and evaporates, he argued, gave zero contribution to the path integral when you computed it.
It was a pretty technical argument. He never actually published it in detail, and I'm not sure he, in the end, really believed it was right. But he never reneged. He never changed his position again. From 2004 on, he said he believed that information could escape from black holes. In a way, it was slightly disappointing. What were we going to talk about from then on? Because we had had arguments about black hole information going back a number of years by then. And now that we were on the same side, it wouldn't be quite as spicy from then on.
We briefly touched on it, but I wonder if you could explain a little more the creation of the Institute for Quantum Information as a purely theoretical enterprise. Matter does not enter the scene until 2011. What were some of the limitations in the field, both theoretically or experimentally, that would have prevented matter from being included right from the beginning?
Well, I will say that a lot happened between 2000 and 2011 in terms of a couple of things. One was just the quantum hardware that we used for quantum computing became more advanced and more varied. In particular, back in 2000, there was interest in superconducting devices for explorations of quantum information. But that was really just starting to work. The first evidence that a superconducting circuit in a quantum superposition of two states could maintain coherence for some non-negligible amount of time was just starting to happen around 2000. And then, by the mid-2000s, people had come up with designs for qubits that were performing well enough so you could really do circuits and perform gates. And that continued to advance up until 2011. Meanwhile, the idea of using simulators, particularly cold atom systems, to explore many-body quantum phenomena had just been proposed in 2000. The first experiments showing this was going to be a productive scientific direction had not yet been done. And by 2011, that was becoming a fairly well-established topic.
A lot of what we do at IQIM is related to topological phenomena in condensed matter systems. And that was partly inspired by work that Alexei Kitaev did that I'd mentioned earlier, when I first met him in 1997, and kind of set the agenda for developing better control over topological systems that we might use for quantum computing. And that was still in a very early stage in 2000. By 2011, it was becoming much more of a thing that you could really attempt to do in the lab. So a lot did happen in those ten years, which made it, I think, more exciting to combine together the theoretical side and the experimental side of quantum information. I apologize for this, David, but I see it's 2:15. And I actually have an appointment to get my second dose of the COVID vaccine.
Oh, we don't want to mess with that. What time is that?
I'd be willing to continue, if you feel we haven't exhausted, on some other occasion.
When do you need to take off?
I think I'd like to stop now, actually. But what I meant was, if you want to, or if you think it's worthwhile, we could have another session.
Absolutely. I'd say we have maybe 45 minutes, an hour of additional discussion. So we can certainly schedule that. So let me hit end now.
[End Session 1]
[Begin Session 2]
OK, this is David Zierler, Oral Historian for the American Institute of Physics. It is March 12, 2021. I'm so happy to be back with Professor John Preskill. John, good to see you again.
Hello, David. I'm glad to get another chance to talk to you.
So one of the things I love about this is, it's a historical capsule. We're talking in March of 2021. We ended the last time where you had to go and get your second vaccination, and after our talk today, I'm going to get my first vaccination.
Yeah, so it's pretty fun to have this on the record and see how these things are actually happening in real time. And like we were talking before we hit record, lots to be hopeful about, even for the theoreticians who miss seeing each other in person, right?
Yeah, and you are cohabitating with an older generation at the moment, right? And they've also been vaccinated?
That's right. My mother-in-law is fully vaccinated, so things are looking up.
We left off last time with you giving a wonderful survey of all of the developments that happened from the origins of the Institute for Quantum Information as a purely theoretical endeavor to all the things that happened that allowed for matter to come on board between 2000 and 2011. So my question there is, from the vantage point of ten years later, here we are in 2021, how much excitement or optimism was there around the creation of IQIM that quantum computing was going to be achievable in the near term? To get into your mindset of what you were thinking in 2011 relative to all of the advances that you had surveyed for me for the prior decade going back to 2000.
Well, in 2011, first of all, the vision for the IQIM was not just quantum computing, it was broader than that. It was using the lens of Quantum Information Theory to advance physics across a broader front. And so, some of the themes of the research that we were proposing back then included—and this is something, of course, we're still interested in—topological quantum computing, which Alexei Kitaev had originally proposed already back in 1997, and which by then Microsoft was already embracing. But we were interested in the longer-term future of topological quantum computing, the kinds of topological phases of matter you can realize, how you can study them, what kinds of excitations they could harbor that are potentially useful for quantum information processing, what kinds of anyons. Kind of more forward-looking than, “How soon can we do rudimentary quantum computing?” It was kind of laying the foundations for the longer term.
And another theme back then was quantum optomechanics. To what extent can we make macroscopic systems behave quantum mechanically, like mechanical oscillators? And can you prepare oscillators in their ground states and create highly non-classical states? What can you do with squeezed states? And so on. As well as kind of core quantum information theory. Can we advance the idea of quantum error correction? Can we propose new types of devices that might have advantages? And as far as quantum computing, back then, the experimental group that was most engaged in sort of cutting-edge quantum information science was Jeff Kimble's group. And they were interested in things relating to quantum networking, like establishing entanglement between different atomic ensembles, and stuff like that. When we originally started the IQI back in 2000, I think I mentioned this last time, we proposed a broader activity that would encompass experiment, including some of the things I was just talking about. And NSF wasn't up for it at the time. They were really interested in supporting a theory institute.
And a lot happened between 2000 and 2011. Back in 2000, as far as quantum information processing platforms go, ion traps were the leading technology. And there was a lot of talk about superconducting devices, but they were really lousy back then with short coherence times, and there was even some lingering controversy over whether you could see macroscopic quantum interference in superconducting devices. And that had changed a lot by 2011 because of better ideas about how to use superconducting circuits to realize qubits and manipulate them. And that's continued to advance in the ten years since then. Optimism about doing quantum computing in the short term is what you asked about.
I can refine the question because one of the things that people debate about now is, regardless of the technical feasibility, isn't there still the existential question about, what is quantum computing even good for? Or are you beyond that?
Well, no, it's a very good question. I would say there are three most compelling questions sort of confronting the field now. One is the question you just asked. In the long term, what will be the most impactful applications of quantum computing? What's the potential for having a positive impact on society and changing people's lives? Another is, we have these near-term platforms that are starting to become serious now in the sense that, arguably, we can do things with them that would be hard to do with classical computers. Is that near-term technology useful for something? In particular, can we use it to solve problems with an advantage over classical devices, which are problems that users really care about, problems that people want to solve? Not ones that we formulate just for the purpose of demonstrating that a quantum platform has an advantage in some narrow sense.
And the other big question is, can we, and how will we, scale up from the types of quantum devices we have now to what we expect to be necessary for really impactful applications? And that is a really, really big challenge, and I can hardly overstate how hard it is because it probably involves a big stepping up in the scale of the devices—the number of qubits they contain. Because we think you're going to have to use quantum error correction since, at least for the foreseeable future, the quantum computing hardware is going to be very noisy. You asked about the applications. I would say the one we can most clearly foresee, which we think will have an impact in the long run, is pretty much the one that Feynman proposed 40 years ago. It's the natural application, using quantum computers to simulate the behavior of complex quantum systems, which can be applied to chemistry and materials, and, I think, as a tool for physics discovery is already becoming interesting.
But breaking beyond the research community and producing applications that are really broadly useful, the time scale for that is still highly uncertain. So one of the things that has surprised me, if you had asked me about it in 2011... well, there are two main things, actually. One is the very steep ramping up of interest and investment in industry in quantum computing. And now, there are many big tech companies that are making significant investments and building quantum groups. And lots of startups, over 100. Some building hardware, some offering services like telling you how your business can benefit from using quantum technology. And one worries about the unrealistic expectations. I've become more acquainted than I was in the past with the way the business community responds to emerging technologies that are potentially very promising. And there's a hype cycle. I guess everybody knows that.
And the expectations can get a little out of step with the reality. And I try, when I have the opportunity, to sort of tamp things down a bit, saying, “Yes, quantum computing is going to be a big deal, but we don't really know how soon, and it might be a while yet. We have to keep our eye on the long-term goals for developing the technology and realizing the potential. It's not necessarily going to happen in five years.” In the tech industry, anything beyond ten years is infinity. So, nobody wants to talk about something that's going to happen more than ten years out. It's always going to happen in ten years. And, well, maybe it won't. And you worry, naturally, about what will be the consequences of those expectations being too high and what's actually achieved falling short. I've noticed that when I go around giving talks and hammer that theme that we should be careful not to have unrealistic expectations, nobody minds. I thought I would be the skunk at the party because others would be sort of ramping up the hype, and I'd be saying, “Let's not get too far ahead of ourselves.” In fact, though, what I say doesn't seem to dampen anyone's enthusiasm.
So, I guess the other thing that has surprised me a bit is on a very different track. The quantum gravity and string theory community has embraced quantum information ideas as a route to a better understanding of the quantum physics of spacetime. That, too, happened more suddenly than I expected. Of course, because of my roots in particle theory and even the study of black holes, I recall 20 years ago, colleagues saying, “Why this sudden interest in quantum computing? It's just quantum mechanics. There's nothing fundamental about it. We've known already since 1925 what quantum mechanics is all about.” And, of course, what attracts one to high-energy physics is, you're thinking about the most fundamental questions, trying to discover the new laws of nature, and that's very exciting and appealing. Quantum computing may seem very different. I think the perception from outside is that it's much more applied.
But I've never looked at it that way. I've always seen the study of quantum information, entanglement, many particle systems, computational complexity as fundamental science and a path to understanding physics more deeply. And, of course, trying to understand quantum gravity is a great challenge, which is still not completely met, and people have been working on it for many, many decades. But now I think there is a community, to which I guess I'm an adherent, who are feeling a lot of progress is being made by understanding spacetime from the point of view of quantum entanglement. And maybe that's too big of a digression from what you were asking about.
No, no. With respect to already starting to see evidence of exciting applications of quantum information in those biggest mysteries in physics, when you're talking about advances that need to continue to be made in quantum gravity, does that include the most basic questions with gravity, like integrating it into the Standard Model? Or are you thinking in different lines?
Well, ultimately, that's where we hope to wind up. I would say the current main research thrust is different than it was, say, back in the 1980s. There was a so-called string revolution in 1984, which I'm sure you've talked to other physicists about. And that was a stunning development. We were very keenly aware of it at Caltech—where John Schwarz was, at that time, on the faculty, though not the professorial faculty. And it gave rise to a surge of optimism about how string theory could now be seen as a candidate for unifying all the interactions. And I think we mentioned last time that when I first came to Caltech, which was just shortly before that, I thought, “Well, the big problem is to understand all these free parameters in the Standard Model. Why do we have the quark and lepton masses that are seen? Why do we have the mixing the way it's observed to be?”
It seemed unsettling that the Standard Model had these 20 or so free parameters that we had no fundamental understanding of. And so, there was a feeling back then, “Maybe string theory can offer the answer.” And a lot of attention in the late 1980s to the compactifications of string theory that would give rise to the Standard Model, the hope being that one would be able to understand in quantitative detail why the Standard Model is the way it is. And that didn't pan out. And, in fact, what we found was that there are a vast number of classical solutions to string theory, the exact number, nobody's ever really been able to count. But it's 10 to the 500 or something. And no one has ever found a solution that precisely matches the phenomenology of the Standard Model. And in any case, we don't know a fundamental principle that selects one of them as preferred over the others.
So somehow, string theory as the explanation for why physics is the way it is hasn't hit pay dirt. And so, that's given rise to speculations about maybe all these different solutions really have an equal claim to being fundamental, and it's, to some degree, an environmental question. We're living in one of many possible universes with different possible descriptions of the fundamental physics in each one. Maybe so. But it's hard to make progress along those lines, though people have tried. So, I think the main motivation for decades for quantum gravity, aside from the quest for unification with the other forces, has been, “What's really going on with the quantum physics of black holes? What can we really say about the role of quantum gravity in the very early history of the universe?” And that second question is the harder one of the two, actually. The progress on the first one, understanding black holes, has come largely from this amazing insight back in 1997 by Juan Maldacena that we call AdS/CFT Duality.
And without going into the details, the important thing—from our current perspective, the perspective of this discussion—was that Maldacena was saying that everything about quantum gravity, in fact, can be encoded in an ordinary theory without gravitation, a Quantum Field Theory of the type that we do understand from a much more fundamental level. And if you could just understand the dictionary between that Field Theory and the quantum gravity phenomena, in which black holes form, and evaporate, and so on, and you had the tools for studying the Field Theory, you could answer hard questions about quantum gravity. And that program has been underway for 20-plus years. I think it's made a lot of progress. And that progress was accelerated by the realization that properties of the quantum gravity theory, of the fluctuating quantum geometry, when translated into the physics of the Field Theory, were questions about quantum entanglement. That the geometry was really encoded in the structure of the entanglement in this so-called dual quantum field theory.
And so, this was an invitation to the quantum information community to get involved. And it's an interesting clash of cultures because the quantum information people are knowledgeable about things like quantum error-correcting codes and quantum complexity. What are the things that are hard to do with the computer, and what are the things that are easy to do? And those don't obviously intersect with the questions the quantum gravity community wants to ask, like what happens to information that falls into a black hole? And bridging that divide has been pretty successful, so that now, if you go to a talk about quantum gravity, at least the talks I go to, you're very likely to hear from people who come from a string theory or quantum gravity background talk about a quantum error-correcting code as a description of this dictionary. It's really the map that encodes quantum information in a certain way, which describes the geometry of the spacetime.
You're likely to hear about computational complexity. Some states take longer to prepare with a quantum computer than others, and we say that those are the more complex states. You can investigate how the complexity of the state in the Field Theory description is related to the properties of the geometry. And that's become another focus of attention. So it's a really cool synthesis where, really, both sides, coming from very different backgrounds, bring something to the table, and at least to a degree, have been successful in communicating with one another and learning from one another. And I've found that gratifying.
When did you start thinking about the ideas that would lead to the term quantum supremacy? And have those views changed over the past nine, ten years?
Well, first of all, although I am responsible for the phrase quantum supremacy, for which I've sometimes taken criticism because not everybody likes the name for several different reasons, the concept was not something I can claim credit for, really. But I was trying to make a point, and I guess the direct answer to your question is, I was invited to give a talk around 2010 at a meeting, and I wanted to do something different. Every once in a while, I try to do this, to provide perspective on, what are the opportunities, where is the field potentially going? And that's when I came up with the term quantum supremacy after thinking about other possible ways of capturing it. I liked supremacy because I was trying to emphasize that with a quantum platform, there would be a huge gap between what you'd be able to do quantumly as opposed to classically. And I took heat for it because “supremacy” is also used in other contexts, which are odious. But anyway, the name caught on.
And I gave a similar talk, actually, at the Solvay Conference in 2011. Really, the idea, just from a physics perspective, that we can't, in general, efficiently simulate quantum systems using classical systems is a very deep statement about physics. I think if you want to say, how are quantum and classical different from one another? This is one of the best answers you can give. Quantum systems evolving according to the Schrödinger Equation—I guess everybody knows this at some level, it's been discussed for decades—are really hard to simulate with classical computers. And over time, the computer scientists have come up with some useful perspective on the difference in complexity, that there's a gap between the problems you can solve with a quantum computer and a classical one. But, really, it means that quantum systems can do things that classical systems couldn't possibly do, except by taking unimaginably long times to do them.
I guess maybe I should make the remark that the separation doesn't have to do with what the computer scientists call computability. You can get a classical system to simulate a quantum system to any accuracy by evolving for however long you need. The difference has to do with efficiency. That as far as we know, and there are arguments of varying sophistication supporting this, that in order, in general, for a classical system to imitate what a quantum system is doing, there's an exponential slowdown. The classical system needs to take a time, which is exponentially long in the time that the quantum system evolves. So that's a very deep thing about physics. And the point I was making is, we're getting to the point where we can experimentally try to validate this idea.
And so, it's an important research objective, just from a fundamental point of view, to try to validate this idea: to use the technology that we currently have or will soon have to perform some task, one which might not be interesting for any other reason, that we can do with a quantum device and that we can argue is just too hard to do with any classical device, including the most powerful supercomputers we have now. And so, the Solvay Conference was in 2011, the next year, and I also felt this was kind of a milestone. Because, of course, you, as a historian, can appreciate the Solvay Conferences have a certain cachet going back to the beginning, and in the modern era, they've been held every three years or so. And the themes have evolved; for some time now, you might know for how long, David Gross has been chairing the committee that comes up with the themes for the meetings. And they had been rotating among particle physics, and condensed matter physics, and astrophysics, which are three mature and large communities of scientists.
And so, in 2011, they decided they wanted quantum information to be represented in the meeting. And the theme was called Theory of the Quantum World, and there was a session about quantum information science, on both the theory side and the experimental side. And so, I was asked to be one of the speakers on the theory side. So, you can imagine, it's not a very big meeting because that's how they do these things. There's, like, 50 people in the room. But they're luminaries of theoretical physics, and some experimentalists, too. And there are things which are kind of second nature to the quantum information community, which just are not part of the way most theoretical physicists, at least in 2011, think about quantum mechanics. And a very simple one is this distinction—which, from the quantum information community side, is very fundamental—between things that are hard to do and easy to do, what the computer scientists call computational complexity.
And so, I made a remark in that talk that Hilbert space is a myth, in the following sense. Hilbert space is vast in size if you think of it in terms of qubits, as every time you add an extra qubit, you double the size of Hilbert space. So with just 300 qubits, the dimension is 2 to the 300. And you could never, for example, describe a general quantum state in a 2 to the 300-dimensional Hilbert space using ordinary classical information. But more to the point, you can let the quantum system evolve for a long, long time, and it will never, if it evolves according to what we call a local Hamiltonian—which is the way we describe physics, where only a few degrees of freedom interact at a time—it can never visit anywhere except a tiny, tiny fraction of that huge Hilbert space. That's the sense in which the very large Hilbert space is a fiction. It's not something that'll ever be physically accessible, even to the quantum scientists and engineers far, far into the future. Of course, it's very useful to describe this gigantic space for the purpose of thinking about the system. But in a sense, it's not operationally meaningful because we can never go there. And like I said, this is a commonplace observation in the quantum information community. But physicists at the Solvay Conference coming from other fields thought this was a very cool observation.
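The doubling Preskill describes is easy to make concrete. A minimal back-of-the-envelope sketch (my own illustration, not from the interview, assuming a dense state vector stored at 16 bytes per complex amplitude):

```python
# Storing a general n-qubit state classically takes 2**n complex
# amplitudes, so the memory needed doubles with every added qubit.
def statevector_bytes(n_qubits: int) -> int:
    """Bytes to store a dense n-qubit state, at 16 bytes per amplitude."""
    return 16 * (2 ** n_qubits)

for n in (10, 30, 50, 300):
    print(f"{n} qubits: {statevector_bytes(n)} bytes")
```

Around 30 qubits the requirement is already ~16 GiB, at 50 qubits it is petabytes, and at 300 qubits it exceeds any conceivable classical memory, which is the sense in which the full Hilbert space is operationally out of reach.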
So anyway, I made the theme of the talk the idea that we're getting close to the point where we can try to do laboratory experiments to check out this idea that quantum systems can do things that are really hard for classical systems to do. Not that I would expect surprises. But as always with experimental physics, we should push the boundaries, right? And nobody's ever done this before. So, it's an important thing to do to see what will happen. And potentially, we will discover—I didn't really expect this; we often do experiments where we think we probably won't be surprised, but we should do them just in case—that there's some fundamental reason why it doesn't work. And so, that's an important thing to study, just from the point of view of basic physics.
An administrative question. How has IQIM changed since its inception in 2011? Has it gotten bigger, sources of funding, the kinds of people who come, both in terms of senior people and post-docs, in what ways has it changed the last seven or eight years?
Well, the most important change is, we've hired twelve new faculty. And they are really the--
Whose primary appointment is in IQIM? Or do they [the faculty] have home departments?
Well, I wouldn't put it quite that way, but they participate in IQIM. It actually started in 2011. But since 2012, we have hired twelve young faculty who are working on various aspects of what I would call quantum information science or quantum science, who have been folded into the IQIM, and are really becoming its intellectual leaders. And furthermore—and I think this is also appropriate and reflects Caltech's aspirations in the field—it cuts across many departments. So recently we've hired new faculty in chemistry, and electrical engineering, and materials science, and applied physics, and computer science, and physics, all of whom have some common DNA relating to quantum science and quantum information. So that part has been very fulfilling and exciting. I would say, from the point of view of quantum information theory, there's a much greater interest than there was ten years ago now among the theorists in what you can do with the experimental platforms. I mean, that was always there, but I'm a pretty theoretical theorist, and--
That's a great line, by the way.
[laugh] If you look at the trajectory of my career. But I have become increasingly engaged in discussions with experimentalists and experimental groups about, “What kinds of interesting things can we do?” Actually, I mentioned that when we started in 2011, one of the focuses was quantum optomechanics and the quantum mechanics of macroscopic systems, cooling oscillators to their ground states and stuff like that, where we had some real experimental successes. My colleague, Oskar Painter, had a world-leading group doing that type of science. And partly, I think largely, because of discussions that took place under the auspices of IQIM at our annual retreat, Oskar decided to change the direction of his program to developing superconducting devices for quantum computing and quantum simulations, and he brought to that some expertise which is complementary to others who are working in that area, in particular because of his experience with fabrication, and also the acoustic devices.
And so, one of the things that we're still discussing and working on is incorporating sound as well as light into systems with superconducting qubits, which has some potential advantages. I just point to that as an example of how, when you have this kind of hub of activity, and discussions take place, and you're thinking about the future together, it develops potential new directions. And I love seeing that sort of thing happen.
Of course, there's a whole big quantum world out there, and I don't mean in a physics sense, I mean, in an administrative sense. There's so many institutes across the world. In academia, in the national laboratories, and in industry, who have been, over the past decade or more, some of your key institutional collaborators or partners?
Well, let's unpack that a little bit. It's true what you said, all of it. Many institutes springing up in the last couple of years and a ramping up of activity in the national labs and in industry. And those are all good things. And Caltech, as I said, has some vision for the future of the field, and that is probably best reflected in the faculty hiring that I spoke of. But we're also interested in continuing to invest with further faculty appointments, and, of course, facilities that are needed. It's good to see the national labs getting involved. This has been partially facilitated by the National Quantum Initiative, which was signed into law in late 2018, and which called for increased funding for DOE, NIST, and NSF—and in the case of DOE in particular, prescribed the creation of national centers for quantum information science, of which there are now five getting underway, each with its own identity.
And then, there's industry, and there are some companies which were in quantum information early on. IBM is maybe the most conspicuous example because they had people working on quantum information when the field was really starting to explode in the 90s already, and they had a very distinguished theory group and some experimental activity at the time. And that has vastly expanded into making them a major center of quantum computing with superconducting devices, and they now provide cloud access to users around the world, including many students. So that's certainly been a useful service.
And Microsoft, I guess, is the other example to cite of a company that got in earlier than others because of the great enthusiasm they had for the idea of topological quantum computing. They established a center at Santa Barbara they call Station Q and funded a lot of experimental research, which they continue to do, aimed towards realizing topological devices. And then, more recently, you've seen other tech companies like Google, and Intel, and Honeywell, and lots of them, as well as all this startup activity. So anyway, getting back to your question, actually, we've had a lot of interactions with some of these companies.
Yeah. Including Amazon. You're an Amazon scholar.
Yeah, I'm coming to that, too.
I can't wait to hear what an Amazon scholar does.
But really, I was thinking of Microsoft, Google, and IBM, for example, where first of all, a lot of the people who work at these places have Caltech backgrounds. I think it came up last time that one thing I think we can say with pride is that a lot of people working in the field, including some of the leaders of the field, have Caltech backgrounds, came through as students or post-docs. Some of those have become leading researchers at the industrial companies. And that's helped to facilitate connections with them. Of course, they now hire interns from all over. A lot of Caltech students have had the opportunity to work with those groups. And they really are very strong scientists.
And, of course, the national labs are just sort of ramping up. I'm involved in one of those centers, the one which is based at Berkeley and Sandia. Each of the five centers involves at least one of the DOE national labs. And there are exciting opportunities there, and in particular, getting this ramping up of activity at the national labs will be important, I think, for the long-term health of the field. It's possible, and I worry about it sometimes, that there will be harder times ahead if, coming back to what I said earlier, the rather optimistic appraisals of the time scale for quantum computing to have a big commercial impact turn out not to be fulfilled, at least on the time scale that's sometimes proposed. And that could cause a dip in industrial enthusiasm and investment.
It'll be important, I think, for government programs to continue to sustain the progress in the field because in the long run, it's going to pay off. And we can't just let it collapse. And having a core of activity in the national labs is going to be helpful. And in the case of the national labs, they have to think about what their long-term mission is. The national labs, the DOE labs have been a great resource for science, going back decades. And they have to continually reassess where they can have beneficial scientific impact. And the current view is that quantum information is an area where there are real opportunities. For example, the National Labs have a lot of materials expertise and a lot of the challenges that we're facing do have to do with developing new and improved materials. And, of course, we have many collaborators at universities around the world, of which there are many strong centers now in the US and abroad as well.
As you well know, the media often portrays the coming quantum computing breakthrough as something of a horse race. "Who's going to get there first? Is it going to be Google, IBM, the [National] Labs? Even other countries?" What's happening in China, for example. From your perspective, is that not the best way to look at how these things are developing, as one entity, either an academic institution, an industrial center, even a country is going to get there first? Is that a productive or not productive way to think about these developments?
It's not the way I look at it. Because I think the industry is still, for the most part, in kind of a pre-competitive era, so that any advance by any company or entity is going to benefit everybody. Of course, there are lots of companies, and some will or do have revenue streams. For example, you put your device on the cloud, and you charge people to use it. Right now, and probably for some time going forward, the people using those platforms will not really be solving problems they couldn't otherwise solve with non-quantum methods. It's more they're at the stage of kind of tooling up, and gaining expertise about how to use quantum devices, and putting some thought into how applications on quantum computers could, in the future, impact and benefit their companies.
But anyway, there's competition, if you like, although even here, I wouldn't necessarily characterize it as competition. If you're a user who is in this tooling up or investigative stage, you're going to access some quantum platform on the cloud, and you have a choice. There are several different providers you can go to, and some, like Amazon Braket and Microsoft Azure, offer access to different types of hardware. So maybe people make some money from that. But as far as the advancing technology, I don't think it's that useful to think of it as a horse race. People mean different things by horse race. One is, of course, there are different technological approaches. And right now, the two most advanced are trapped ions and superconducting circuits. And it's a good question, which will be more promising in the long run? Or will the truly scalable quantum computers of the future be based on a different technological approach?
I really think it's quite important now that a variety of technological approaches are being pursued, both in academia, and national labs, and in industry, too. Because in the long run, different hardware approaches might have different niches, and advantages, and disadvantages, and they might have different prospects for scalability. We're really at an early stage still, I think. And I don't think—of course, I could be wrong because nobody really knows—that we're going to see some technology or company break well ahead of the crowd in the next ten years and become the quantum company which is leaving everybody else in the dust.
Is there a unified theory, or at least shared understanding, of quantum computing so that when it happens, everybody will agree that it happened? Is there a shared set of parameters that everybody goes by and accepts? Or even, are those basic questions still being hashed out?
There is a fair amount of discussion of how to quantitatively benchmark different platforms so you can compare them. Like, how can you say that platform A is better than platform B? And even that's a complex question because A might be more useful than B for one type of application, and B more useful than A for another. But the criterion that I would prefer to use is running on a quantum computer or quantum simulator, some application, which people want to run because they can answer questions that have some practical impact that they can't otherwise answer. So, we're looking forward to when that will happen. Hasn't happened yet, and we don't really know when it's going to happen. And I would say, harking back to the point I made earlier about the three big questions, I said one is, can we do something interesting with the near-term devices? And another was, how are we going to scale them up? And the third was, what are we going to do with them?
And one of the questions, which is an important one for the current stage of industry, is, are we going to need to use quantum error correction to do something that's practically useful? And this is a really key question because what we have now—in the case of ion traps and the superconducting circuits—is devices, which have a number of qubits of order 50 or so, and you can do the fundamental quantum operations that you need to do to run an application. The key ingredient in those quantum circuits is the two-qubit gates, the gates that act on a pair of qubits in some entangling fashion. And a measure of quality of the qubits is—it's a little subtle to quantify this precisely, but let's put that aside—how often is there an error in one of those entangling two-qubit gates? And the current state of the art is that those error rates are a little better than 1%.
So that means you do, say, 200 gates, and one of them's probably bad. And that's a real limitation on how large an application you can run. Because the errors, when they occur, they spread, and they can lead to the answer of the computation, when you finally measure all the qubits at the end, being wrong with sizable probability. Or a better way to say it is, there's some signal that you're trying to get out from this noisy system, and there's a lot of noise. And you need enough signal-to-noise so that by running the computation some acceptable number of times, you can extract some statistically significant signal. And like in the Google experiment, which they did with the 53-qubit device, they actually had to run, a few million times, a circuit that had a probability of success of about 1 in 500.
So, think about that. You run 1,000 times, and 998 times, you get junk. And 2 times out of 1,000, you get a useful result. And then, to extract that from the noise, you run it a few million times. And then, you can do it. And they were able to do that in a few minutes, to their credit, because they have pretty fast gates and a reasonable cycling time. And that's how they achieved what they claim to be a successful demonstration of quantum computational supremacy. But it's hard to go much farther with the devices we have now because they're so noisy.
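The arithmetic behind those numbers can be sketched with a simple independent-error model (my own illustration; the experiment's published analysis is more refined). If each gate fails independently with probability p, a circuit of g gates runs error-free with probability (1 - p)^g:

```python
# Toy noise model (illustrative assumption, not Google's actual analysis):
# gates fail independently with probability p, so an entire g-gate circuit
# succeeds with probability (1 - p)**g.
def circuit_success_probability(p: float, gates: int) -> float:
    return (1.0 - p) ** gates

# With 200 gates at a ~0.5% error rate, roughly one gate is expected to fail.
print(circuit_success_probability(0.005, 200))

# A much deeper circuit at the same error rate lands in the neighborhood
# of the 1-in-500 success probability quoted for the 53-qubit experiment.
print(circuit_success_probability(0.005, 1240))
```

The exponential decay in circuit depth is why the interview says you can't go much beyond a few thousand gates without error correction: the signal shrinks below what any realistic number of repetitions can recover.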
So, I guess part of the big question is, can we—and how soon can we—have hardware which is much more reliable, so we can get those gate error rates down a lot? That will have a big impact. But another near-term question is, if we're going to live with error rates at the level of, let's say, 10 to the -3 or worse, how far can we go towards running a useful application? And obviously, you can't have more than a few thousand gates, or it's going to fail. So, what can you do with a few thousand gates? A few thousand is probably optimistic, at least right now. Well, that's where quantum error correction comes in. We understand in principle that it's a long-term solution to the problem. It adds a level of redundancy to the computation that protects against noise, just like error correction does in classical settings.
Of course, we use error correction in classical communication routinely, like in our cell phones. And quantum error correction was actually a huge advance, I think, in physics, of historic proportions in the mid-1990s, when we understood in principle how to do this, that we can encode quantum information so it can be protected from noise. But it has a big cost, a high overhead cost, so that we'll need many more physical qubits than so-called logical qubits. If I want to run an application using 100 really, really good qubits, then I will have to encode those qubits with sufficient redundancy to protect against the noise, and how much that cost is in extra qubits and extra gates depends on what application we're trying to run, and what the quality of the qubits is. But if I wanted to do something like factor a large number with the hardware we have now, it probably means thousands of physical qubits for every logical qubit, and the number of logical qubits you need in factoring is a few thousand. So, we're getting up into the 10-million physical qubit range. And just think about what a big leap that is from where we are now with 50 to 100 qubits. There's some vision of how to get there, but it's really hard. And it's not going to happen real soon, unless there's a big surprise.
One of the big storylines over this past pandemic year for experimental physicists has been that they have increasingly turned to computer simulations because they haven't been able to get into the laboratory. One of the huge frustrations in particle physics is, absent the SSC, who knows what else is out there at those energies, right? Can quantum computing, quantum information get to a point where building something like an SSC or an ILC isn't, dare I say, necessary? That the information that we can get from quantum can be reliable enough that we can see things virtually and be confident in them without having to see it in the real world?
Well, we're trying to understand what the right theory is. For that, I think experiment, at some level, is indispensable. If there are different alternative theories, we're going to try to distinguish between them. But quantum computing can be very helpful for that because it will allow us to predict by simulation the consequences of different candidate theories well enough so that we can compare with experimental data to distinguish one theory from another. So at the LHC now, and in particle physics going back decades, the key challenge is—and this is true in experimental physics and other fields, very broadly—the backgrounds. And you have to understand them in quantitative detail. And you're looking for a tiny signal, and you have to be able to know how to subtract those backgrounds. And if you're doing a collider experiment at TeV scale, that background is mostly QCD.
And there has been a lot of success in predicting properties of QCD by doing large-scale classical computer simulations. That's what the Lattice Gauge Theory community has been pursuing for 40 years, and they've made a lot of progress. But there are some things we don't expect to ever be able to do accurately with classical computers, when it comes to simulating QCD and other complex quantum systems. Namely, we think it's a hard problem to simulate real-time evolution. So if you want to bang two protons into one another at a scale of a few TeV, and you want to do that in a simulation working from first principles in QCD, that, we think, is too hard to do with any classical computer, but possible with a quantum computer. For now, it's still too hard to do with the quantum computers we have, so it's not something we're going to be doing very soon, but eventually we will.
There's a completely different scaling of resources in the quantum case. It's a good example of quantum advantage asymptotically in that it might take billions of physical qubits, in fact, but one should be able to do first-principles simulations of dynamics in QCD with a quantum computer. And we will also be able to simulate extensions of the Standard Model—which are candidates for describing physics at shorter distances—which could then be compared with experimental data to try to distinguish between different models of physics at shorter distance scales. So I think in the long run, that is going to be important, assuming that through technological advances or the expansion of the world economy, we're able to continue to do high-energy physics at higher and higher energies sometime into the future.
I asked about particle physics, and now I'll ask about cosmology to bring it all together for you. If we look at the excitement surrounding inflation in the early 1980s, where there was, to hear from all of the major players, real confidence that there would be a true or closer understanding of what t = 0 really looked like, and yet 40 years later, that's still very much an open question, are there quantum simulations in cosmology that might get us closer to those fundamental questions?
In the long run, I expect there will be. But this is different than what I just said about QCD. I think in the case of QCD, we know what the theory is, and we know exactly what the right equation is and how to put it on a quantum computer. Obviously, there's tremendous potential for improvements in efficiency. But I think at least we have the basic outline of how you would put a dynamical simulation in QCD on a quantum computer and read out useful results. In the case of cosmology, we're not at that stage yet. And I talked earlier about the big progress that has been made in quantum gravity, thanks in part to bringing quantum information ideas in. And I think I mentioned in passing, but maybe should've emphasized more, that to a large degree, that success—this isn't completely true—has been achieved in the framework of a kind of toy model of quantum gravity, namely, gravitation in anti-de Sitter space, a negatively curved spacetime, and in cosmology, that's probably not what we're interested in. At least the universe we're living in now is a de Sitter space, which has positive curvature. And it doesn't sound like it should be such a big deal.
If we can do anti-de Sitter, why can't we do de Sitter? But there's a big conceptual gap there. In the case of anti-de Sitter, we have this dictionary I spoke of, which allows us to map anti-de Sitter to an ordinary quantum theory. And that actually has to do with the property of the geometry that it has a boundary, and that dual quantum theory lives on the boundary of the spacetime we're trying to describe. De Sitter space is different because it doesn't have a boundary, and this actually makes it conceptually much harder to understand how to do quantum theory. Because in the case where you have a boundary, you can sort of tie the observables that you're interested in to the boundary, where you can understand them and formulate them in a way which is mathematically clean. We still have a lot of work to do, to do that in de Sitter space. And so, I think that's one of the big challenges in doing what you suggested, using quantum computers to simulate the very early history of the universe. I think maybe we talked about this last time, you might have to remind me, back in the early 80s, how much excitement there was. And you also interviewed Alan Guth recently. And a lot of optimism, in fact. It's good to be optimistic. It drives progress.
And in the interim, I talked to Andreas Albrecht, who was quite wistful about how we're really not that much farther.
I feel that way. I feel like a lot of the ideas that are still current were in place in the early 80s. Now, of course, that's not to say that cosmology hasn't advanced enormously. It has because it's become such a precision experimental science, which is a great, great story, in particular, about what we've been able to learn from the anisotropy of the cosmic microwave background. Because I think we spoke last time about how I was quite excited in the late 70s and early 80s about the connections between particle physics and cosmology, but by about 1984, I kind of sensed that the field was turning towards the focus of theory being how to interpret the observations that people are going to make, and how to model, for example, the last scattering surface, so you could understand how to map observations of the microwave background to different models to get numbers out. Like the baryon to entropy ratio, and the cosmological constant, and all that.
And I didn't think that was where I had a comparative advantage. Not the kind of thing that I felt I was particularly good at. So partly as a result of that, my interests turned elsewhere. But, of course, I'm glad that there was a brilliant community of theorists and experimentalists that continued to advance the field with great results. I guess I also have to say—speaking of wistful—that one of the most exciting moments I can recall in my scientific life was when I heard about the BICEP2 reported detection from observing the polarization in the microwave background of evidence for a gravitational wave background that had been produced primordially. And, of course, it was a hard pill to swallow when that didn't hold up to scrutiny. Maybe we'll still get lucky, I guess that would be a fair word to use, and see evidence for that kind of thing, and that would certainly be very instructive about early universe cosmology.
For the last part of our talk, I'd like to ask some broadly retrospective questions. And we've been talking about the future, but I always like to end looking ahead. So the first question is, one important area we haven't talked about is your career as a teacher to undergraduates and a mentor to graduate students. So over the years for undergraduates, what have been some of the most satisfying classes for you to teach? And in more recent years, as, amazingly, freshmen now are born after 9/11, which is the benchmark that always blows my mind, this new generation of students, who are increasingly comfortable with computers, what are the kinds of interests that they have that might give you optimism that their comfort and facility with computers might really be fundamental to significant research gains in the future?
Well, I'll tell you one thing I've noticed is this. I was, for some time, a freshman advisor at Caltech. I haven't been doing that the last couple of years. But my advisees, I would have seven or eight a year, were supposedly assigned to me at random, not because they were interested in my research area or even in physics, necessarily. But I would usually, in my first meeting with them, just kind of be getting acquainted and try to probe a little bit, “What's exciting to you? What about science or engineering really intrigues you, thrills you?” And even though this was supposedly a random selection of Caltech freshmen, maybe ten years ago, it would be things like string theory. And maybe a little bit later, LIGO. Kind of cutting-edge physics. And now, by far, the most common answer is machine learning. They see that machine learning is really changing the world in a practical way, but it's also changing science because increasingly, we're going to use those tools to interpret data and even to advance theory.
And, of course, that gives rise to questions about the future of the human scientist, which are intriguing. Are we all going to be supplanted, just like the Uber drivers and the truck drivers and the factory workers? Will the physicists also become obsolete when the robots take over? That's another question. But anyway, understandably, they see machine learning as sort of pivotal for the future of the world and for science. And so, I get pulled into that a little bit because I sense that excitement. And, of course, it's not just the freshmen, but many of the young scientists, the graduate students, the post-docs, and young faculty are engaged in using machine learning in their research areas and excited about it.
You asked what courses I got a lot of pleasure out of. Well, my first teaching experience, I was still on the faculty at Harvard, and I taught one of the freshman physics classes. I taught that with Roy Schwitters, we co-taught it. It was my first teaching experience, except as a graduate student TA. It was freshman mechanics. And it's showmanship. Obviously, you're trying to convey the science, but you're trying to make it exciting and engaging. And it's really fun because you get to do things like ride a rocket car. And we would ham it up, put on a pair of goggles, and a scarf, and a little helmet, and come shooting through with a rocket car. So, stuff like that was fun. I guess it kind of appealed to my instincts as a performer. But I haven't done much of that. I did Physics 2 at Caltech a few times. But the demonstrations weren't quite so spectacular on the special effects side.
Another thing I really enjoy teaching is statistical mechanics. We have a class for the physics concentrators, but it's also taken by other people who intend to focus on scientific areas, including chemistry, and math, and stuff like that. We teach it to sophomores. And it's actually a three-term sequence. But the part that I think is the most fun is stat mech because you can, starting from very simple principles, basically counting, explain physics phenomena like—of course, you talk about the quantum gases, so you talk about degenerate Fermi gases and what that has to do with metals and white dwarfs and neutron stars, and you talk about the Bose-Einstein condensates, and then you can say something about the revolution that's occurred in atomic physics over the last 25 years because it's become routine to prepare Bose-Einstein condensates in the lab. And then, you get to talk about phase transitions, which is also great because you can give them a taste of the modern view of phase transitions in terms of critical exponents, and scaling, and things like that. So, I found that to be a lot of fun. I also like teaching classical mechanics because it's just so beautiful.
I think I mentioned last time, I learned classical mechanics as an undergraduate from Alan Guth. And he taught a brilliant course, and I've stolen a lot of it. [laugh] I have taught classical mechanics at Caltech. But the educational experiences which I think have been most transformative for me as a scientist have been in teaching graduate classes, particularly on special topics because it's an amazing learning experience for the instructor. If you have to teach fairly recent developments to graduate students—and at Caltech, our undergraduates are very ambitious, and so they take these classes in significant numbers, too. I taught a course on Non-Perturbative Quantum Field Theory in the early 80s, and it was about quark confinement, and chiral symmetry breaking, and lattice QCD, and stuff like that.
And the experience of teaching that really deepens your understanding. I have withdrawn that deposit from the bank many times in the ensuing decades. And I think last time, I mentioned how Kip encouraged me to teach a class on Hawking Radiation, which was also very beneficial to me. But teaching quantum computing was a huge deal because I did it for the first time in 1997. The field was pretty new. Shor's Algorithm had been discovered in ’94, quantum error correction in ’95, how to do fault-tolerant quantum computing in ’96. It was all fresh and exciting. And I had the opportunity to synthesize it. And by that time, the technology existed to reach a larger audience by posting lecture notes on the Internet. In my earlier classes, I made handwritten notes which were Xeroxed and distributed, and I eventually scanned those and put them on my website. And people sometimes look at them.
But in the case of the quantum computing class, I had the notes LaTeX-ed and posted in more or less real time as I was doing the lectures. And I still hear from people often that their introduction to quantum information came from reading those notes, which are still available on my website, and which people still read. So, it was an interesting vignette of how the Internet has changed scientific communication because, of course, a lot of us put a lot of work into our classes, and that's to our own benefit and to the benefit of maybe a few tens of students attending a very advanced class. But when you can put your lecture notes on the Internet, and thousands of people read them, that really amplifies the impact. But most of all, I learned the subject much more deeply from teaching it.
On the graduate side of things, as an intellectual history question, one of the best ways to gauge a professor's research intentions is what kinds of graduate students they take on at any given stage in their career because one of the things you have to do as a successful graduate mentor is to be on top of the literature, in your case in particle physics, in cosmology, in quantum information. So just over the years, where have your graduate students been? Have they roughly followed your interests over the years? Or do you still take on students that have interests that are more in line with where you were 20 or 30 years ago?
If we go back to the beginning, when I first came to Caltech, maybe we discussed this last time, there was a lot of pent-up demand among the graduate students in the particle theory group for advisors. So, I very quickly accumulated eight or nine students. That maybe set the pattern for my mentorship in a way because there's no way I have the bandwidth to be really deeply engaged on a day-to-day basis with the research of nine students. And so, my goal and style was to sort of create a community where people were talking, and I was giving regular feedback, and where the students were learning from one another and from post-docs in the group. And I could sort of set the agenda to some degree. But already back then, I was encouraging students to come up with their own problems, which was what I had done. And somehow, I thought it was a good model. Though it's not a great model for everybody.
But the most important thing you learn as a researcher is to figure out what to work on. And, of course, as a mentor, I'm supposed to provide guidance there. And to some degree, I do. And sometimes, I make rather concrete suggestions and collaborate with students as well. But in some cases, the students come up with their ideas, and I try to provide a little guidance pointing that idea in a direction which I think is most likely to be fruitful. Now, since I went through this transition in the 90s from a focus on particle physics and black holes to quantum computing, it was a very exciting time because I was excited about those ideas, and so were some of the students.
But it was transitional because I had some students who didn't go that way, who had started out working on other things. I think it was around 1997, I had five PhD students finish the same year, and they all worked on very different things. One was working on dualities in quantum field theory, and one was working on exotic statistics in condensed matter. And then, one was working on quantum error-correcting codes, and so on. So that was kind of fun to have all those balls in the air. But it made our group a little bit less cohesive.
And now, quantum information has become a broad enough field so that you can have different students engaged in different aspects of it. Like now, I have students who are interested in machine learning and its applications to quantum information, or in another one of the things we've discussed in this conversation, the prospects for simulating quantum field theory with quantum computers. Some are interested in quantum error correction and its applications to quantum gravity. And some are interested in anyons, and whether we can realize them in physically realistic platforms, and what we can do with them.
So that's actually pretty broad. But there's enough overlap so that at least it's my hope and belief that we function as a community, and everybody can be interested in what everybody else is doing. I really value our group meetings where we all get to hear about the progress that others in the group are making. I think that benefits me to know what everybody's doing, but I think it benefits everybody to hear about it. And I've had a few cases where I've had students who were really exceptional, and I'm proud of making very good suggestions. Daniel Gottesman was one, when I was first getting interested in quantum error correction and steered him towards that. And he wrote a thesis which is still a standard reference in quantum error correction and fault-tolerant quantum computing. He got his PhD in 1997.
Another one was Jeongwan Haah, who's now a researcher at Microsoft. I gave him a very good problem, which was to try to understand whether you could make better quantum memory if it were configured in three dimensions instead of in two. And he discovered a new phase of matter, which the condensed matter physicists are still puzzling over. So both Gottesman and Haah wrote single-author papers, which had a lot of impact. That's something I'm proud of. Of course, I'm singling out those two as particularly impactful examples, but I've had about 60 PhD students, and many of them have had successful scientific careers and were great performers in graduate school, and I'm proud of all of them. And maybe we talked about this briefly last time, we've had a lot of post-docs go through the group, and that continues, who have had a huge impact on the field, and I'm really proud of that, too.
Last question, looking forward, so much of your scholarship over the course of your career works in the world of fundamental mystery, ongoing things that we still don't understand. If you were to combine your expertise in all that you've learned in cosmology, particle physics, quantum information, and brought them to mutual benefit, is there one problem in physics that stands out for you that you're either most optimistic about solving, or you just remain most curious about and are optimistic that all of these different research areas of expertise might come to bear on each other and provide something really fundamental for the remainder of your career?
Well, I can think of a few possible answers to that, but I'll give one, which is the one that most leaps to mind. And that is, I would like to see, while I'm able to enjoy it, significant progress in understanding quantum gravity coming from laboratory experiments with quantum simulators and quantum computers. And thanks to this idea of duality, that we can describe quantum gravity using an ordinary quantum system, this is not a ridiculous proposal. And it does combine all the things you talked about because it involves building and operating quantum platforms that are up to this task, it involves formulating the questions in a way which can be conveniently addressed by doing experiments. It connects with condensed matter physics because the way you would do it is to create a highly correlated, strongly interacting system of many particles, where the geometry of this quantum gravity theory would be a kind of emergent property. A lot of the most fascinating things in quantum condensed matter concern the emergence of an effective description, which is very different from, and not so easy to predict from, the underlying microscopic description. That would be a great way to combine together all in one cool package the things I've been interested in my whole career.
Which would tell you what, best case scenario?
Well, first of all, this idea of duality, that we can describe quantum gravity in terms of a theory that doesn't seem to have anything to do with quantum gravity at all, has been understood in a few rather narrow settings. Essentially, the looking under the lamp post phenomenon. There are a few cases where, for example, the theory has enough symmetry that we can kind of figure out what the duality looks like. But I don't think we have a very clear understanding of how far-reaching these dualities are.
And so, I don't know if it's the best case scenario, but a good case scenario would be that when you have many strongly interacting particles, they do things which we can't very well simulate with classical computers because it's just too hard for all the reasons we talked about earlier, but where you can come up with an effective description which captures what's going on in a way that is intuitively appealing, and that description will involve gravity. Just an ordinary system of atoms or something interacting strongly with one another, evolving dynamically in some way, which we might not understand at first from first principles, but where we can describe what's going on by saying, “Ah, well this is behaving like an emergent black hole.” Or it's an emergent wormhole in space, or something like that. And in the long run, when we imagine using quantum computers to simulate many body physics, which goes beyond what we can simulate classically, we won't be satisfied, at least I won't be, unless we have some way of “understanding” it, of describing what's going on in some language which we think is useful for leading us toward future discoveries.
It would be rather disappointing if all we can say is, “All right, well, we saw how that worked. And I guess we won't learn anything more until we build an even more powerful quantum computer.” We want theory and experiment to advance together, as they always have in the past. And as we learn new things from experiments, we want to be able to integrate that into our understanding, so that we can propose future experiments, and predict what would happen in them, and either find those predictions confirmed, to our credit, or find the delight of being surprised, and therefore, being led to further new discoveries.
Well, on that note, it's been unbelievably fun to spend this time with you. Thank you for allowing me to convince you to do this. I really appreciate it.
Well, I have no regrets. I had a good time.