Credit: Brad Horn, National Public Radio
This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Christopher Monroe by David Zierler on March 29, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/46948
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Christopher Monroe, Gilhuly Family Distinguished Presidential Professor of Physics and Electrical and Computer Engineering at Duke University. Monroe discusses his ongoing affiliation with the University of Maryland, and his position as chief scientist and co-founder of IonQ. He discusses the competition to achieve true quantum computing, and what it will look like without yet knowing what the applications will be. Monroe discusses his childhood in suburban Detroit and his decision to go to MIT for college, where he focused on systems engineering and electronic circuits. He explains his decision to pursue atomic physics at the University of Colorado to work under the direction of Carl Wieman on collecting cold atoms from a vapor cell, which he describes as a “zig zag” path to Bose condensation. Monroe discusses his postdoctoral research at NIST where he learned ion trap techniques from Dave Wineland and where he worked with Eric Cornell. He explains how he became interested in quantum computing from this research and why quantum computing’s gestation period is stretching into its third decade. Monroe explains his decision to join the faculty at the University of Michigan, where he focused on pulsed lasers for quantum control of atoms. He describes his interest in transferring to UMD partly to be closer to federal entities that were supporting quantum research and to become involved in the Joint Quantum Institute. Monroe explains the value of quantum computing to encryption and intelligence work, he describes the “architecture” of quantum computing, and he narrates the origins of IonQ and the nature of venture capitalism. He discusses China’s role in advancing quantum computing, and he describes preparations for IonQ to go public in the summer of 2021. At the end of the interview, Monroe discusses the focus of the Duke Quantum Center, and he asserts that no matter how impressive quantum computing can become, computer simulation can never replace observation of the natural world.
This is David Zierler, oral historian for the American Institute of Physics. It is March 29th, 2021. I'm delighted to be here with Professor Christopher R. Monroe. Chris, great to see you. Thank you for joining me.
Thanks, David. Pleasure to be here and glad to chat.
All right. So, to start, would you please tell me your current titles and institutional affiliations? And you'll notice I pluralized everything because I know there's more than one.
Yes, it’s a bit messy right now. I am the Gilhuly Family Distinguished Presidential Professor of Physics and Electrical and Computer Engineering at Duke University. I'm also a College Park Professor at the University of Maryland, although my main position is at Duke, where I have recently moved. Finally, I am the chief scientist and co-founder of IonQ, a startup company that manufactures quantum computers.
All right. So, we have to unpack all of this.
[laugh]
So, you've retained your professorship in Maryland or what exactly is your affiliation at College Park?
I don't have tenure there officially anymore, but I have a good relationship with my Maryland colleagues and continue to be engaged with the community there. I've been at Maryland since 2007. They have a big community in quantum science and, even though I wanted to start something new down here at Duke, it was mutual in Maryland that we continue our collaborations. Usually, they give this type of position to people that maybe retire, or they're put out to pasture or something. [laugh] I can host students at Maryland, I can run grants through the university. So, I have a good relationship with everybody up there.
What was the opportunity that compelled you to move to Duke? Was there a new project that was starting that you wanted to be a part of?
Yes. I sensed an opportunity to push the field of quantum computing in a somewhat different direction than that of a conventional university department. For lack of a better term, I'll call it a “vertical.” Instead of developing the physics of all the interesting quantum systems from which you can build quantum technology (from silicon, solid-state quantum dots and superconductors to trapped atoms, ions and individual photons), I want to take one particular platform – trapped individual atomic ions – and build computers out of them by engaging engineers to build better ones that are smaller and higher performing. I want to engage computer scientists to learn how to write software for these things, and then applications specialists that can run new algorithms. So, you can see it involves many different fields, well apart from just physics. At Duke they have a history of going “all in” when they want to innovate or move into a particular research direction. They did this with biomedical engineering 30 years ago, having pretty much invented that field in a way that is now copied everywhere. So, it was very compelling for me to join a couple of my long-time colleagues at Duke, including the cofounder of IonQ along with me, Jungsang Kim, and Ken Brown – a physicist/chemist/computer-scientist who knows everything. The three of us are unified on this mission, and we are supported at a very high level from Duke. This looked like an interesting opportunity, and Duke is very good at supporting things like this, so I thought this would be a good way to make the vision happen.
Who are some of the strategic partners at Duke that are making all of this possible, perhaps in either industry or in the government?
Well, in addition to the tremendous support of the leadership at Duke, one obvious partner is Ned Gilhuly, a Duke graduate, private equity investor, and benefactor to the university. On behalf of his family, Ned underwrote my chaired position at Duke, and made it possible for me to move to North Carolina with my partner Svetlana. Svetlana has a PhD in high energy physics, but is also an expert in neuroscience, and almost every branch of the arts; she now works as Asst. VP of Research at Duke, and she is a natural at understanding and organizing almost any research activity on campus. Government partners include IARPA, NSF, and DOD, who have all bankrolled our laboratories throughout the years, and what we are building at Duke will likely be of further interest to those agencies. The industrial partner is easy: that's IonQ. They're located in College Park, just inside the Beltway outside of Washington. Jungsang Kim and I are cofounders there, and both Maryland and Duke have interesting partnerships with that company concerning the intellectual property that is owned by the universities: when students, while doing research, come up with things that are very interesting to the company, IonQ gets the exclusive license rights to that IP with no royalties, in exchange for a small amount of equity in the company. This is not typical at all – but both Duke and Maryland have this, so it made the transition easy. But there are other reasons, too. I'm part of two academic departments, ECE and Physics, and chairs of these departments, the deans of their colleges, the research VP, the provost, the president of the university, they are all 100 percent behind pushing this particular brand of quantum science and engineering. And we have an incredible facility: our laboratories are in an old tobacco factory in downtown Durham, the Chesterfield Building. We are part of the university although it doesn't look like one, and we will be branded as something more akin to a national laboratory.
[laugh] What a much better use of American ingenuity.
[laugh] Yeah. Interesting town, Durham. They don't demolish anything: old buildings, railroad crossings for defunct tracks, the old baseball park used in “Bull Durham.” They restore it all, and it's a beautiful city. It has a certain grittiness to it, but now with the Research Triangle, the Biotech Corridor, and anchored by Duke University, it is an amazing place. The students, the grad students and postdocs love it in Chesterfield because it's in the middle of the city action; there’s even a whiskey bar on the ground floor.
Chris, in addition to switching universities, you've also switched home departments. You were in the physics department at Maryland and now you have joint appointments in electrical and computer engineering. Now, obviously, for your field, you traverse all of these disciplines quite easily, but I wonder, substantively, if there are any significant differences in your research agenda as opposed to this departmental switch?
Yes, there are. I will say that I did have an appointment in engineering at Maryland and also previously at Michigan, where I worked in the early 2000s. But those were largely courtesy appointments. This one is more substantial. It's 50/50. And I see in Duke engineering something really different than physics. I think it's broader, they are more entrepreneurial, the students come in wanting to learn about systems. That's sort of a black magic word, systems. It's not really a science. But when you make a jet, when you make an airplane that has a gazillion parts, and the way they behave, there's a science to it that is not based on the individual physics of the components. So, I am very interested in bringing that aspect to quantum computing. I just hope they don't make me teach a digital circuits class, because I'm not so good at that. [laugh] Analog circuits, no problem.
[laugh]
I think the attitude of the departments at Duke in particular is very welcoming. They see that quantum science is somewhere between science (physics) and engineering, and both departments and colleges want to capture that.
Chris, a question we've all been dealing with over this past year: As an experimentalist, but somebody who works so closely with computers, how have you been affected by remote work? Have you been impacted negatively or positively by the mandates of social distancing or has it been more or less the same for you?
It's a little bit of a mixed bag. My real struggle with the pandemic and shutdown is the lack of one-on-one face time with students, just seeing them around, bumping into them, socializing with them. With remote work, of course, we have conversations over a video link but it's just not the same. I'm an experimentalist, so we work with our hands a lot, do things, goof around in the lab, break things, play with new toys. On the other hand, one of our new systems actually performed better during the lockdown. We spent several years building this particular quantum computer system. It’s not just an atomic physics experiment, but a deliberately designed system, packaged into a 1- or 2-cubic-meter isolated box housed in the laboratory. The idea is to leave that box closed for months. We perform all the manipulations we would use for a normal physics experiment, except it was designed so that all of the adjustments that we need to make can be done remotely with automated transducers and actuators. It took a long time to design, and it was very expensive. The machine started to come online right at the tail end of 2019. As we were running it, we realized that we didn't need to be in the lab with the machine itself. And then, when the lockdown started in March 2020, the surprise was that this machine started working even better than expected, and it's because there were no vibrations in the building. Nobody was there. The HVAC, the air conditioning system, was rock solid because there was very little variation in the load, doors weren't opening, and nobody was walking around. So, with this system we were gathering great data—with all seven students and postdoc researchers working at home. But the more conventional science experiments were trickier to manage. We had to turn them off for a little while, do some theory at home, and this was common in the field.
Chris, a snapshot question for now that I think is going to infuse a lot of what we talk about subsequently: For better or worse, as you well know, the media tends to portray quantum computing as a bit of a horse race; who's going to get there first, what's it going to mean? From your vantage point, is that necessarily a useful way of thinking about these things? And on an existential level, if it's true that we're still trying to figure out what quantum computing really will be good for, how will we know when we're there?
Very good questions. The field being characterized as a horse race is a little simplistic. I do believe competition in research is very healthy, even when it gets a little chippy. Early on in my career, back in 1990-92, in my field of atomic physics, there was also a horse race to cool a gas of atoms so cold that it condensed into a quantum fluid called a Bose-Einstein condensate (BEC). There was no known application and it didn't seem like there would be some technological spin-off. It still hasn't really happened. But there was new physics there. Each atom got so cold that its de Broglie wavelength was bigger than the spacing between atoms, so they became sort of one. And the laws of physics in this gas or condensate are completely different, like in a superfluid or a superconductor. Now that was a horse race. And, boy, I remember, as a graduate student with Carl Wieman, with Eric Cornell as a postdoc, we were in the middle of it. Carl saw this more than anybody else; I think it drove him. But it just seemed impossible to get our atoms a million times colder than they were to hit BEC. And there were five or six other groups across the world working on this, and it was hyper-competitive. And it was chippy. You'd go to conferences and people would say how far they've gotten, and everybody would roll their eyes. But you know—and I think everybody agrees with this—that it was good for the field. People felt the pressure and, for better or for worse, that made the field move faster. So, let’s fast-forward to now. I think with quantum information technology, it’s different in the sense that there’s a clear push for applications in the commercial space, for real applications, not just science. But we shouldn’t forget that there is science there. A quantum computer exploits the concept of quantum entanglement in a huge system. And that science is really murky. Many misunderstood phenomena in solid-state physics are due to entanglement: these systems are too complicated to characterize simply, even in that language. So, I don't worry too much about the horse race in the field because it will help to advance the science. Another positive aspect of the horse race is that we have companies big and small pouring money into this field. My corner of this field is doing pretty well in the race. But with a horse race comes some amount of hype. I think many scientists are shying away from quantum computing because there is lots of hype in the field: that quantum computers will replace regular computers and they'll displace Moore's law or the acceleration of conventional computers. I think this will ring true in the long run, but some people worry that the hype may do more harm than good. I don't know what to say about that, except that without a horse race and bold predictions, maybe we never get there. I don’t know that many academics see it this way.
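For reference, the threshold described here can be written compactly: a dilute gas Bose-condenses roughly when its phase-space density reaches a critical value,

\[ n \lambda_{dB}^{3} \gtrsim \zeta(3/2) \approx 2.612, \qquad \lambda_{dB} = \frac{h}{\sqrt{2\pi m k_B T}}, \]

where n is the atomic density and \lambda_{dB} is the thermal de Broglie wavelength. The transition occurs once the matter waves of neighboring atoms begin to overlap, which is the "atoms becoming sort of one" picture above.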
What about the issue, to extend that metaphor of a horse race, in terms of those existential questions about what quantum computing is going to be good for? In other words, in a horse race we know who the horses are, and we know what the finish line looks like. So, in this race, it begs the question, race exactly to what?
Another good question! I made the analogy to Bose condensation in my own past. Here there was a race to get to this phase transition whose signature turned out to be as clear as day. The result was obvious, but not obviously useful for anything. It is still beautiful science, and sometimes we do science for its own sake. Quantum computing on the other hand, starts with its potential applications.
We have the mRNA vaccine for that reason so, you know…
There you go. [laugh]
Just for one example. [laugh]
Yes. Quantum computing has that flipped. Let me start over again. When we find an application that uses a quantum computer that we could not do using even very powerful classical computers, it will be obvious. It may not be as obvious as a new phase of matter though, and this also speaks to the difference between academic quantum computers and commercial quantum computers. In academic computer science, we have proofs. We have certain problems that are proven to be exponentially hard. When you increase the size of the input, the problem gets geometrically harder with that size of the input. That means that when you make the problem big enough, computers just can't tackle it. Many problems are called “hard” because of that. It turns out that quantum computers can make some hard problems easy. There are a couple of examples, such as factoring a big number into its primes. It's known that a quantum computer, if large enough, would be able to do that problem exponentially faster than any known algorithm on a classical computer. The problem is that the number of elements in the quantum computer and the size of the program required are billions of times bigger than in quantum computers available today. So, this application, unfortunately, is not just a few years away, it's probably a decade or more away. I think the more widespread applications in quantum computation will be those that you can't prove are better than any classical computation. That does not mean such an application will not be useful. If I have a device that works and I don't know why it works but it works, I use it – it is useful. In computer science sometimes that's called a heuristic. When quantum computers hit paydirt, it'll probably be for a heuristic problem—oh, say it solves the traveling salesman problem a little better than what we could do using a high-performance computer. Of course, a next generation high-performance classical computer might come around and beat the quantum computer. So, while the transition to quantum computers may not be as clear as a phase transition, people will pay for this. If they get a better answer to their pet optimization problem, like how to combine drugs to make a particular compound or to model some financial portfolio function that's really complicated, what do they care? As long as the quantum computer maximizes the portfolio better than what you could do conventionally, it'll get used. People will start paying for it. So maybe that's an economic phase transition. I don't know if this is better than a scientific breakthrough; it's different.
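As a rough illustration of the gap described here for factoring a large number N: the best known classical method (the general number field sieve) takes time that grows faster than any polynomial in the number of digits, while Shor's algorithm is polynomial,

\[ t_{\mathrm{classical}} \sim \exp\!\left[c\,(\ln N)^{1/3}(\ln\ln N)^{2/3}\right], \qquad t_{\mathrm{Shor}} \sim O\!\left((\log N)^{3}\right), \]

with c a modest constant. The separation widens without bound as N grows; the catch, as noted above, is that running Shor's algorithm on cryptographically interesting numbers requires vastly more error-corrected qubits and gate operations than any machine available today.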
Chris, I want to invert a question I posed to John Preskill at Caltech. As a quantum theorist, I asked him what were some of the advances in materials and technologies that compelled him to broaden out the IQI into IQIM? In other words, why was he able to add matter to the enterprise? And so, for you over the past ten or twenty years, what have been some of the advances in quantum theory that have been most relevant for your research?
Well, one very important one was error correction. In the mid '90s when quantum computing was unveiled, there came this factoring algorithm I mentioned previously, invented by Peter Shor. When you look at Shor's algorithm, it's an amazing thing, it is a combination of number theory and quantum physics -- strange bedfellows there. It was clear the complexity of the factoring problem changed from exponential to polynomial when you used quantum information. But while Shor's factoring algorithm was very exciting, it relied on lots of analog computing concepts. The problem is, quantum systems have a continuous variable inside them, the weightings of the superpositions: the quantum version of a single bit can have any weightings of the states 0 and 1 – 80%/20%, 50%/50%, 63.2%/36.8%, and those weightings can undulate like in an analog computer. We don’t use analog computers because they're unstable. Errors accumulate, and it limits how far you can take the computation. At the time, many people thought that the whole field of quantum computing was doomed, because there seemed to be no way to make the system stable. There seemed no way to make an analog system “latch.” On the other hand, if digital or discrete classical systems fluctuate a little bit, they can be corrected through feedback: you measure the state of the system and simply bring it back. If a 5-volt signal droops to 4.8 volts, you boost it back to 5 volts; but if the other logic state is around 0 volts, then there is no ambiguity: this is called “latching.” Well, as everybody knows, when you measure a generic quantum system it changes. It loses its coherence and does not seem to be latch-able. Quantum error correction was discovered in the late '90s, and some people think it was more profound than the idea of quantum computing itself. The idea that you can extend the concept of classical error correction (codified by Claude Shannon in the 1940s) and use the same ideas in quantum information was revolutionary. You need more resources, you need more quantum bits or qubits, but you can, in fact, encode quantum systems so that they effectively “latch.” That, I would say, has been the biggest theoretical development. This was probably around the time that John Preskill got really interested, and he is known for his general ideas in error correction. Now, what about materials? Well, to me the “string theory” of quantum information science is the search for the topological qubit. It's beautiful physics, and at a very high level. A topological qubit is a natural system that does its own error correction, in a way. If error correction codes exist, why not posit that there's some type of material that has the encoding already in there? It seems like that would be a pretty far road along the path of evolution to get to that, but you can't rule it out. And maybe it doesn't exist naturally, but you can make it. Maybe you can synthesize a material that naturally has all the interactions to get this error correction to work automatically. There are hints at how you might make such a topological qubit do exactly that. But then you have to build many of them and get them to interact in controllable ways, and often the usual sensitivity to noise comes back in the game. But it's really beautiful physics. In condensed matter and solid-state theory, this is a very popular subject. Some other theoretical advances are on the application side: new algorithms.
None have been as weighty or clean as Shor’s factoring algorithm, but there are new ideas on optimization—these are heuristics, once again, often guesswork. Here you guide a quantum state to do something and maybe it corresponds to a function you're trying to optimize. There's been a lot of work in that direction. Further research is what I would call “horizontal quantum research” that captivates most physics departments and scientific quantum centers: look at every new type of qubit that's out there and push them. Well, there's not a whole lot of such technologies. Right now, there are probably 7-8 physical platforms that are really interesting for one reason or another. Trapped ions, superconducting circuits, neutral atoms, quantum dots, color centers (single atomic-like defects in solids), silicon spins, the topological qubits. These potential platforms emerge slowly and there are new ideas. Materials science is a natural place to look for cool quantum effects, even if they don't amount to a new quantum computer. Quantum effects historically have usually led to interesting discoveries, from sensors to our understanding of phenomena like superconductivity – a wonderful material phenomenon that has a very subtle microscopic underpinning that is not well understood at high temperatures. We still don't have a microscopic theory for that, and this is because the entanglement within is too complex. But this is the domain of quantum information – I believe this field will generate new technology, and also give us a better understanding of fundamental materials science.
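To make the "latching" idea above concrete, here is the simplest sketch, the three-qubit repetition code that protects against a single bit flip. The logical information

\[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle \;\longrightarrow\; \alpha|000\rangle + \beta|111\rangle \]

is spread over three physical qubits. Measuring only the parities Z_1 Z_2 and Z_2 Z_3 reveals whether, and on which qubit, a single flip occurred, without ever measuring \alpha or \beta, so the error can be undone while the superposition weightings survive. Full quantum error correction also has to handle phase errors and therefore needs more qubits, but this is the basic mechanism by which an encoded quantum state can be made to "latch."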
Well, Chris, let's take it all the way back to the beginning. Let's start with your parents. Tell me a little bit about them and where they're from.
I was raised just outside of Detroit, Michigan, and my parents were both raised in the city. My father is an actuary. He's also an armchair mathematician and physicist, and I was definitely moved by that. I was always very good with my hands. As a kid I used to take things apart, and I loved working on cars. Once I disassembled a carburetor on one of my parents’ cars: I just took it all apart, and I eventually got it back together, but it was a struggle. My father saw that I was decent at math, and I was good with my hands, and somehow, he knew I was going to be a physicist. I wasn’t pushed into it, but he saw that this was something I should study. My mother is a teacher of art (painting, mostly). They both played piano very well. Between them, I believe they played most of what Frederic Chopin wrote. So, when I was a kid my life was defined by cars, baseball, and music. [laugh]
[laugh]
And at an early stage, I knew that I was going to have a career in either music or something involving mathematics.
Chris, looking back, do you feel that you experienced Detroit while it was still in its manufacturing heyday?
Yes. Looking back, I think maybe the nadir of Detroit happened after me, but it was a down and out city in the '70s when I was young. My parents were both raised in the city and they spent lots of time downtown in the '50s. They went to University of Detroit, both of them, a very good Jesuit college. And they saw the city go through its transition in the '60s, the racial tensions and riots. I lived just outside of Detroit and there were auto factories everywhere. In almost every other house, somebody in the household worked for Ford or General Motors. One of my father’s biggest clients was the United Auto Workers. One thing that rubbed off on me is my love of gritty cities.
Chris, when did you get interested in science?
Like most youngsters, I started focusing on mathematics more than science. You're exposed to it every year, each year building on the previous. And I became very good in math but, also, again, I was good with my hands, so it made sense to push into science. So, I took as much chemistry as I could, but only a bit of physics because there wasn’t much offered. It was really when I went to college that things started to gel. Somehow, I got into MIT (in high school, out of 250 kids, I was ranked in the 80s). I think it was because of my coding work with my father’s company, the fact that I coached my little sister’s softball teams, worked on cars, and aced math. I loved it at MIT, I had such a great time there. It was intense. It was really hard. I didn't get particularly great grades, but I just loved the challenge of being in the big leagues.
Did you go in expecting to study physics? Was that the plan from the beginning?
No. In fact, I went in wanting to experience everything. So, I went around and around in different departments. I plotted a path in Chemical Engineering, Mathematics. For a full year, I studied aeronautical engineering. MIT Aero-Astro had an intro course called Unified Engineering - this was the main course you took for the whole year. It met two hours every day, killer weekly problem sets, and an exam every Friday, and finally a “systems problem” every quarter. It was a really hard class. I loved it. I took Unified Engineering for my full sophomore year and was even asked to be a TA for the next year. But something happened toward the end of that year. We were working on one of the systems problems—I forgot the details—it was some type of aircraft design with a very complex wing, a very complex aerodynamics problem. We were designing it so that it would support a certain amount of weight. And then, at the end of the problem, after all the math and modeling, they said that the wing could only support half the weight you designed because we had to apply a “safety factor” of two. And that really got me upset. I thought - don't we trust the physics? Why do we need the safety factor? I understand that you do need a margin of error, and indeed I would rather fly in an airplane where they had a pretty conservative safety factor. But it seemed so inelegant! It occurred to me that if I wanted to apply safety factors, I would do this later on in my career, but not in school. I learned right there that such a systems engineering concept was very difficult to teach and learn, so I wasn’t sure I wanted to continue in engineering, at least in college. As it happens, that year I also took a course in quantum physics as a “science distribution” class for my presumed engineering degree. It was real quantum physics though, the first level they offered in the physics department. And this course was the pivotal point when I decided that I am going to study physics. I still thought I may well become an engineer later on. But now, entering my third year, I had two years to cram in all those advanced physics courses. In my first term as a physics major, I took Jackson. That’s very high-level electromagnetism, as every physics graduate knows. I got a C. And now I do electromagnetism for a living…[laugh]
[laugh]
I sometimes tell my students about that C grade, when they are down on their own studies. When I started taking physics it felt much more rigorous than engineering, and it was very pure and beautiful. And I knew that in the future I may well be doing engineering, very applied, but while I'm in college, I wanted to learn the “real stuff” and then I could always add safety factors later. [laugh]
Did you always know that it was going to be an experimental work, or did you dabble in theory very much as an undergraduate?
I remember when I took a seminar course in mathematics in my freshman year, we met only one hour a week, and we did problems that were on the Putnam exam. The Putnam exam is this incredibly hard math exam, like the famous Moscow Mathematical Olympiad, lots of vexing problems from number theory. There are six problems on the Putnam, and you did really well if you got just one of them right. So, we did these problems all term, and over the entire term I think I only solved one problem – and not very elegantly. That course really slammed me because I saw my peers being so much better than me at it. It didn’t take me down, but I knew that I could not be a theorist and certainly not a pure mathematician - I just didn't have it. But there were so many things out there that I didn't know. Plus, I was always good with my hands and I needed to work with my hands. So, it was natural for me to want to be an experimentalist. In my junior year when I started in physics, I also got involved with a laboratory that does optics and laser science. Why? It was almost random. I knew that it was an interesting field. There was some theory involved but the main attraction was that you would build things. I worked in the basement, the MIT Spectroscopy Laboratory. This was one of those arbitrary decisions in life when you do something almost at random and then it guides the rest of your life. In my case, it really did.
What was so compelling to you about this work?
I don't know. I think I just loved the sense of owning something: you build it and it's yours, even if it is a noisy electronic circuit. I needed to be in control of what I did. Even though I was an undergraduate, low on the totem pole in that lab, there were certain tiny aspects of the experiment that I owned: I did this, I built this power supply, that was mine. Now I realize that it is trivial: you can buy those things. But then I just loved the idea of owning it.
Did you ever think about going into industry out of undergraduate or was graduate school always the goal for you?
Graduate school was not really the goal, especially at a place like MIT where most of my friends were engineers. They were going on to careers right after college. They had a good program—many colleges do this now—where in five years you get both your bachelor's and master's in engineering, and then you graduate and work for Analog Devices or Intel or some other company. We didn't have that in the sciences, but I still interviewed for engineering jobs, maybe because of peer pressure. But in my fourth year, it was also clear that in physics most people did go to graduate school. Sometimes I think that my choice to go to grad school was out of my immaturity in the sense that it was easy to continue doing the same thing I had been doing – going to school. I was not ready to go out into the world, but at the same time, there was obviously a lot to learn. One thing I appreciate now about a PhD is that by specializing in something, whatever it is, you learn a whole lot about everything. Students often don't appreciate that. They're afraid of becoming this very sharp needle of knowledge, and the first few years of graduate school can feel like that. But when successful graduate students look around, they can understand how others do their work, based on what they do for their own research. So, I guess I backed into graduate school because it seemed like it's what you were supposed to do in physics. I'm really glad I did. I learned a hell of a lot.
What kind of advice did you get about people to work with, programs to focus on for graduate school?
The one reason I chose optics and the atomic physics field was that I knew that I could hop into an industrial job right away, because optics and lasers are used in a great deal of advanced technologies. There was an engineering aspect of this. Much of the advice I received came from my undergraduate lab. The lead graduate student I worked with, Dan Heinzen, as well as Mike Kash and George Welch, were all veterans in the research world. My academic advisor Mike Feld was a great person who really took care of his students. They all suggested to me where the interesting programs are, so I applied to most of them. One was an up-and-coming place, the University of Colorado at Boulder, where I eventually went. I didn't know much about it – the furthest west I had been was Chicago. But in Colorado there were a couple of folks there that were doing neat things with lasers and atomic physics: Jan Hall, Dana Anderson, Carl Wieman.
This is a powerhouse group, that you understood right away?
Yes. They were better than Harvard, Yale, Stanford, or Caltech, in fact, but I’m sure people would dispute me on that. It’s fun to be in a place like that where you are always the underdog on paper. From the East Coast, CU perhaps wasn’t so well known, so this did seem like a risk. Carl Wieman had just arrived there from Michigan where he was unbelievably denied tenure a few years earlier. [laugh]
[laugh] Not a proud moment for Ann Arbor.
Right. Everybody knew better. Carl had some really interesting ideas. And Jan Hall, legendary already, the consummate laser builder, was there. I got to know Jan later on. He had two boys, and when they each turned 16, he gave them a junker car and their job was to fix it up and make it drivable. That tells you a lot about the guy. [laugh]
Was the presence of JILA and/or NIST sort of attractive to you right out of the box?
Definitely. I didn't really know about the details until I visited. They had a machine shop with eight full-time people. They had a full-time welder who could weld aluminum, and they had all these machinists. And I love machining. I was in the machine shop all the time working with those guys. And I thought that's how all graduate schools in physics would be. Boy, was I shaken when every other place I went to didn't have anything like that. At JILA we had our own glass blower (and there was another one in Chemistry, even). And with the NIST component, a full US national laboratory right there on the campus, CU atomic and laser physics was a big deal. Of course, that's why Carl Wieman, and Dana Anderson, and Jan Hall were there. But I didn’t know the depth of the place when I applied.
Chris, how did Carl become your mentor? What were the circumstances leading up to that relationship?
A good thing about a school like Colorado is that they don't see a whole lot of applicants from a place like MIT or Harvard, and Carl was an MIT undergraduate, so I think my application caught his eye, and he gave me a tour in the spring before graduate school. I saw his group, and it was a small group that was doing interesting things. The cold atom stuff that got him a Nobel Prize was his hobby.
[laugh]
He was doing precision metrology in atomic physics; very difficult experiments measuring tiny asymmetries in atoms. He was so driven, and he needed to have an almost impossible goal. As I said earlier, he needed the horse race; maybe that's what made him click. That drive was infectious to me.
How did you join your own interests with what the group was doing?
It took pretty much two full years for me to find my own way there. That was tricky. In fact, after those two years, I left for the summer. I was going to quit grad school and interview for engineering jobs. I think I have hinted at the reason why: I wanted control of something. I was an immature young graduate student. There was an older grad student, Dave Sesko, and a postdoc, Thad Walker, on this project; they were great people. But I needed to control things, even though I was way too young for that. So, I told Carl in the spring of my second year that I'm leaving to go into industry. And I wanted to travel to Europe with a couple of college friends for the summer. And Carl said, "Ok. Well, when you get back from Europe look me up and we will get you back in the lab." I'm not sure why he said that, but it really struck me. And then, sure enough, in the middle of my Europe trip I'm thinking, God, what am I doing?
[laugh]
I still can't believe Carl would treat me the way he did because in his place I would probably have just let the student go. But when I got back, Dave Sesko had graduated, and Thad was on his way to a professorship at Wisconsin. I should say that Thad was a wonderful person to me, professionally. He had a big hand in “setting me straight,” but in very kind and subtle ways. So, upon returning from Europe, Carl wanted to start a new project and he gave it to me - alone. He gave me the whole lab and I worked with him one on one. Eric Cornell came a year later, but I started that lab on my own, and it was amazing! I loved it!
And what was the project?
It was a method of collecting cold atoms from a vapor cell. The conventional way was to make an atomic “beam” by sublimating atoms from a source and directing them into the chamber like a firehose. They were moving way too fast to capture, though, so you had to slow them down first with a counterpropagating laser “slower.” That is an extra step, very complicated, big vacuum systems, chirped lasers, lots more stuff. Loading directly from a vapor is much easier than from a beam. You just make a glass cell, and the atoms are everywhere but their average speed in any direction is smaller. You get more of them. It was a dramatic simplification in cold atom physics. That became my thesis, and we wrote a paper almost right away that is now one of my most highly cited papers, even after all the quantum computing ones, and it's now basically the source of all cold atom experiments. I also developed some of the theory on how the atoms get captured, the fraction that gets captured and all that, and how it depends on the control parameters like laser power and tuning. So, that third year of grad school was the perfect year for me. I really fell into things, and I got an appreciation of theory and using computational support behind the experiments. And I should also add, in that third year I wasn’t exactly alone. The second half of the year I was working with a JILA visiting fellow, a guy named Hugh Robinson. He was from Duke and he was visiting Leo Hollberg and others at NIST, and he spent time with Carl. So, he and I made the first atomic clock using cold cesium atoms. Hugh was a retired professor, and he pretty much taught me all I know about microwaves and RF. Hugh always said that electronic chips were filled with smoke, and the key was to keep the smoke inside the chip! I always remember that one. And then, Eric Cornell came the fourth year, and we dropped it into high gear and really started moving.
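The scaling behind vapor-cell loading is roughly as follows: the trap continuously captures the slow tail of the room-temperature velocity distribution, so the loading rate goes approximately as

\[ R \;\propto\; \frac{n\,A\,v_c^{4}}{\bar{v}^{3}}, \]

where n is the background vapor density, A the cross-sectional area of the trapping beams, v_c the capture velocity set by the laser power and detuning, and \bar{v} the mean thermal speed. Even a modest capture velocity therefore pulls a steady stream of cold atoms directly out of the vapor, with no atomic beam or slower required.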
What did Eric bring to the project?
Eric and Carl, they're complementary in a certain way. They're both amazing physicists but in different ways. Eric leaned on me for most of the experimental stuff. He had never run a laser before - and here he was in a laser lab. But he had high-octane ideas. Like the quest for Bose condensation, what are we going to do with it and how do we get there? He is brilliant in that way, and at the time he had a fundamental statistical mechanics view of this phase transition. Once Eric gave an informal group meeting on how to phenomenologically introduce a nonlinear term into the Schrödinger equation to theoretically account for a weakly-interacting Bose gas. This was all off the cuff in 1991, and over the next decade this approach, essentially the Gross-Pitaevskii equation, became the norm for studying Bose condensates. Not many know that Eric was the first to even pose this connection, as he didn’t publish it. In contrast, Carl was the consummate experimentalist. He maybe didn't have this innate sense of theoretical physics that Eric did, but when Carl looked at a signal, he would tell you what's wrong and what you must work on. He had this uncanny ability of finding the long pole in the tent; this came from his penchant for precision metrology, I suppose. I always thought it was the ideal experience for a graduate student to be working with the two of them. These two guys were amazing! For about two years the three of us were a very closely-knit team, all on the same page. There were no ego issues, not a single one. And, yes, officially I was low on the totem pole, but it was still great. Things were happening and I felt ownership. I still had that immaturity, but they knew it and they wanted me to own the apparatus. It was wonderful.
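The nonlinear wave equation sketched in that group meeting is what is now usually written as the Gross-Pitaevskii equation for the condensate wavefunction \psi:

\[ i\hbar\,\frac{\partial \psi}{\partial t} = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}) + g\,|\psi|^{2}\right]\psi, \qquad g = \frac{4\pi\hbar^{2} a}{m}, \]

where a is the s-wave scattering length. The g|\psi|^2 term is the phenomenological mean-field interaction added to the ordinary Schrödinger equation for a weakly interacting Bose gas.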
Chris, intellectually, how did you carve out your own contributions in terms of presenting a thesis as your own?
I chose things to work on, and I was given remarkable autonomy to do so. Carl, Eric, and I each had distinct roles on the project, but there was also a symbiosis, in that we were all pushing on each other. This was an ideal team. We needed to get the atoms colder, much colder, and more dense. After dramatically simplifying how to produce cold atoms in the vapor cell through laser cooling and trapping, we had to turn off the light, because the light kept the atoms too hot. So, we turned to magnetic traps, no light there. I spent weeks and months dropping atoms into a magnetic trapping zone, focusing them with permanent magnets I bought at K-Mart, then more sophisticated “current bars,” wires wound on a complex form. One day Carl came up with the inspiration to toss the atoms upward instead of dropping them down – they would naturally get slowed by gravity as they approached their apex. We originally tossed the atoms up by accelerating a linear translation stage with retro-reflecting mirrors on it to create the requisite upward and downward Doppler shift on the trapping beams. Eric’s first few weeks as a postdoc were spent calibrating solenoids to measure their impulse – we needed to get the stage up to 2 m/s in a few milliseconds. But the “bang” of the solenoid striking the translation stage caused the lasers to jump out of lock, so we had to send a pre-trigger signal to unlock the lasers. The stages were rough though, and the dragging on the ball bearings heated the atoms as they were tossed. Once when Jan Hall visited, he asked why we didn’t use acousto-optic modulators instead of this Rube Goldberg-like mechanical launcher. It was a very good question, so we took his advice. Getting to Bose condensation required another factor of 100,000 lower temperature. We turned to a bunch of exotic magnetic trap geometries, competing vigorously with the other groups doing magnetic trapping like Dave Pritchard and Wolfgang Ketterle at MIT, Bill Phillips at NIST in Washington. They were pioneers in magnetic traps, but they didn’t have atoms as cold as ours. It’s here that we really paved the road to Bose condensation, by loading atoms into a magnetic trap with great efficiency and maintaining high density. So many magnetic field coil designs and geometries – even a “baseball coil” whose wires were like the seams of a baseball on a sphere. Eric started plotting how we would evaporatively cool the atoms for the final step. The capstone of my thesis was actually a negative result – that cesium wasn’t going to cut it. We measured that the elastic collision rate between ultracold cesium atoms (required for rethermalization and evaporation) was not high enough for easy evaporation. However, we had a plan that rubidium would be more promising, because there was a simple scheme to laser cool to extremely low temperatures based on some results from Steve Chu’s lab at Stanford. (In the end, rubidium worked because of its more favorable collisional properties for evaporation, so Carl and Eric had that piece of luck working for them too!) It was an adventure, this zig-zag path toward Bose condensation, and it was among the most intense and fun times I have had, professionally.
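For a sense of the numbers in that launch: in the acousto-optic version Jan Hall suggested (a "moving molasses"), detuning the upward- and downward-propagating beams from each other by \Delta\nu launches the atoms at roughly

\[ v = \frac{\lambda\,\Delta\nu}{2}, \]

so for cesium at \lambda \approx 852 nm, reaching v = 2 m/s takes a relative detuning of only about 4.7 MHz. With the mechanical stage, the same Doppler shifts had to be generated by physically accelerating the retro-reflecting mirrors.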
Besides Carl, who else was on your thesis committee?
Jan Hall and Dana Anderson. I later became very friendly with Jan Hall, but during graduate school it was tricky. I spent some time in his labs with my graduate student friends, learning techniques. I remember visiting Jan in his office, and between stacks of books and papers, he showed me a circuit – not a diagram but the real thing, op-amps, caps and resistors on a printed circuit board. There was a 3D character to the circuit – the feedback resistor was positioned a certain way above the chip because it gave a particular capacitance that would shape the op-amp gain response in a certain way. Jan was a real artist when it came to circuits. He knew all the chip numbers, the OP-27 had great voltage offset characteristics, but watch out for current biases that should be compensated this way or that. After writing my thesis, I think Jan was reluctant to sign it. He thought that the prose was not up to his standard: I put in a few silly lines about circuits blowing up. Fortunately, Carl helped me get him to sign it. Jan was also on my graduate qualifying oral exam committee, early on in my graduate career. He asked me to write down Maxwell's equations on the board. I did that and proceeded to derive the electromagnetic wave equations near a conductor and so forth, and we got to the point where you should expect that a thicker conductor should shield radiation, but somehow, I was getting the opposite relation. [laugh] I knew it was wrong, but the math wasn’t showing it. And I heard myself saying incorrect things because that’s where the math is taking me. I think he laughed at that… So, this was the beginning of a healthy relationship with a professor when there is both fear and respect. [laugh] I still interact with Dana Anderson – he is a pioneer in bringing engineering to atomic physics. He founded a company called ColdQuanta, and he has been an inspiration in my own attempts to engineer a quantum computer based on atomic physics techniques; more on that later.
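The relation that oral exam question was driving at is the skin depth of a conductor: a field entering a good conductor of conductivity \sigma at angular frequency \omega decays with depth d as e^{-d/\delta}, with

\[ \delta = \sqrt{\frac{2}{\mu_0 \sigma \omega}}, \]

so a wall a few skin depths thick attenuates the radiation exponentially well, and a thicker conductor really does shield better, the result the algebra at the board had inverted.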
How much interaction did you have with Dave Wineland as a graduate student? In other words, when the postdoc opportunity came available, how smooth a transition was that for you?
I didn't know him very well. I certainly knew of his group. They were only a quarter mile away, and I used to go down there maybe every couple of months for a seminar. In fact, I remember hearing Eric Cornell give a seminar down there while I was in grad school. I think Eric originally planned to work with Dave Wineland for his postdoc, and I believe funding issues interfered. Funding in AMO Physics in 1990 was not nearly as good as it is now, but luckily Carl ended up picking up Eric. So, I didn't know Dave very well, certainly not personally. I just knew him by reputation.
What project did you join when you got to NIST?
This was a very hard transition for me because NIST is a government laboratory with more staff than students. So, Dave’s group had no students, but a lot of postdocs, a lot of good ones, through the National Research Council (NRC) Postdoctoral Fellowship. I had one of those fellowships myself because my graduate work was pretty visible. But there were eight other postdocs there, and I was the new one. That was tough, because there were several projects, and they all already had people working on them. So, my first year was a little odd. I did something a bit unconventional, and maybe this showed my immaturity in not wanting to change fields too much. I wanted to combine trapped ions with trapped neutral atoms. Dave Wineland was the ion trap God and his group really refined the art of ion traps for metrology. And I just came out of one of the most visible neutral atom cold atom groups, and this seemed natural. I should say that just as I left Carl Wieman’s group, Eric Cornell took a permanent position at JILA and he asked me to be his postdoc. He was going to push on to achieve Bose condensation. And the reason I declined was I thought they were 10-15 years away from BEC, so I wouldn’t see it during my potential postdoc tenure. This was 1992, and of course they observed BEC in 1995.
[laugh]
So, it was just three years. [laugh] And I'm convinced that, had I stayed on as his postdoc, it would've probably been two years, a year or two ahead of everybody else, including the big dogs at MIT.
Was your sense that Eric thought that this was a 15-year timeline, as well, or in his mind he was thinking this was going to happen much quicker?
I don't know. Probably not three years. But I think, this horse race, that was really what made things just a couple of years instead of 15 years, so the battle between CU and MIT was productive. Eric ended up hiring Mike Anderson as the postdoc, and Mike did very well, along with a phenomenal student of Eric’s, Mike Matthews. Anderson later founded a company called Vescent Photonics, and his company has excelled. It’s fun to think about how things may have been different had I stayed with Eric. However, it was very good in the end. I really wanted to move into a different field, which is always a healthy thing.
Chris, some intellectual foreshadowing, where we're headed, of course, is the quantum work. Where is the seed, really? Where is this planted for you? Is it more in graduate school or your postdoc?
We started talking about how I was interested in building quantum computers for algorithms and to do computer science and so forth, but the ground floor of my work in quantum is atomic physics, controlling individual atoms with light. That's what I did in graduate school. Even though it was a different kind of atomic system, I became familiar with all of the machinery of atomic physics, experimentally and theoretically. And that obviously made it very easy to transition to quantum information and quantum computing using individual atoms. And, as it turned out, this was the luckiest thing in my life, that the field just dropped in our laps while I was a postdoc at NIST.
So, after that first year, what clicked for you? How did it all come together?
I had my idea of how to succeed in this lab, as I said earlier. I was going to bring in my neutral atom experience and combine neutral atoms and ions. I spent a full year on that, along with a visitor from Brazil, Vanderlei Bagnato, who happened to be interested in the same thing. I got to know Vanderlei very well, and we succeeded: after a year, we juxtaposed cold sodium atoms and sodium ions in a single apparatus. And then we dropped this experiment entirely because it wasn't going anywhere. We should've published something because the idea of combining cold atoms and ions turned into a small cottage industry in the 2000s. However, I learned an incredible amount of technique in that year. We sensed the presence and motion of the ions by measuring their image current in a surrounding circuit – this was incredibly difficult, and I earned my radiofrequency credentials at this time, learning from Dave Wineland and fellow postdoc Steve Jefferts. They both came from the Hans Dehmelt school (Dave and Steve’s father were both postdocs with Dehmelt), which sees that everything in the world is a tuned LRC circuit oscillator. So now I also consider myself as coming from the Dehmelt school of RF. But after one year of my postdoc, I really had nothing to show on paper, so I went to Dave. He was a great mentor, and he immediately put me on a project that would trap ions very tightly. And when you confine anything really tightly its quantum levels are separated very far. And that means you can cool them to the ground state of motion, cool them to nearly absolute rest. And the foreshadowing here is that when you do that, that's the first ingredient you need to make a quantum logic gate. And nobody had really done that before.
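In rough terms, the connection between tight confinement and ground-state cooling: an ion in a harmonic trap of frequency \omega has vibrational levels

\[ E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \]

so a tighter trap means a larger level spacing \hbar\omega. Once that spacing exceeds the linewidth \Gamma of the cooling transition (\omega \gg \Gamma), laser sidebands can address the levels individually and pump the ion down to n = 0, the motional ground state, which is as close to absolute rest as quantum mechanics allows.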
Chris, can you explain the science behind that? Why do you need that?
With trapped ions as quantum bits, you need them to interact to make a gate, such as an AND gate between their two-level states. Trapped ions talk to each other through their motion, like two pendulums connected by a spring. And if you have a lot of thermal energy in these pendulums, the interaction is noisy, it washes out and becomes useless. But if you prepare the motion of the ions in the ground state, you remove all of the entropy so that it's in a pure quantum state. Then you can do pure quantum operations. So, in order to do a quantum gate between multiple atoms you need the atoms to be cold. In 1993, I'd never heard the term "quantum computer;" none of us had. We were making cold atoms because they were useful for atomic clocks. And we had further ideas on how to entangle atoms to make the atomic clocks even better. It turns out if you take even two atoms and put them in an entangled state, and then run them together as an atomic clock, the clock runs twice as fast. It has the same noise, but it runs twice as fast. That means the signal to noise is the square root of 2 better. So, you entangle atoms just to get a square root of 2 improvement in your clock. If you have 100 atoms, you gain by a factor of 10. This was pure physics, but it had an application to atomic clocks and metrology. In fact, a similar effect is now used to enhance the sensitivity of the LIGO gravitational wave detector. So, we were busy trying to make entangled states in '93 and '94, by making a very pure type of “squeezed” state, in the parlance of quantum optics. Steve Jefferts and I made this very tight trap that would allow the future control of trapped ion qubits. Steve had forgotten more about RF than I ever knew, and I was the laser guy, so we were a very productive team, and we got along wonderfully. The lasers, by the way, were much more challenging than what I had used in graduate school. We needed ultraviolet light, and this was produced by frequency-doubling dye lasers. Fortunately, we had Jim Bergquist, Wineland’s right-hand man. Jim was one of Jan Hall's early students in the '70s. Amazing laser designer, super-nice guy, great athlete. Jim was a wonderful mentor on lasers and integrating them with the ion traps; I owe so much of my knowledge in laser science to him. So, in my second year of the postdoc, things started looking up. It was just fun, and we started publishing. The year 1994 was the formative year of my career. This is when so many things happened at the same time, between July and December. The International Conference on Atomic Physics (ICAP) meeting happened to be in Boulder that year (Dave and Carl were the local coordinators actually). A fellow named Artur Ekert gave a theoretical talk on quantum computation. At first, I thought this was just some fancy way to simulate quantum dynamics on your laptop, simply performing computations that applied to quantum physics. But no, this was much deeper -- it was about using quantum systems themselves to compute things in a completely different fashion. Ekert had just learned of Shor's algorithm, which was discovered that summer, and he told our community about it. Within weeks, two theorists, Peter Zoller and Ignacio Cirac, showed that you can actually build a quantum computer using trapped ions, and they showed exactly how to do it. It involved cooling ions to the ground state and performing operations. At the same time, Dave and I had figured out how to entangle trapped ions along similar lines as I discussed earlier.
So, we had the experiment, and even accumulated data on that first quantum gate before the end of '94. It was so fast; I'd never seen anything like that. Right there at that moment, in December 1994, Dave hired me. He said to me over lunch one day, “I want you to stay on our staff. We're going to play around with this quantum computing stuff. We’re going to build the components of a quantum computer.” It was clear that that endeavor was going to have legs. That was an amazing year in my career, 1994.
Chris, just to freeze this moment in time, this is essentially the scientific and intellectual blueprint for quantum computing, and here we are 27 years later, we don't have to rehash what we said at the beginning of our conversation, but just a very broad question at this 27-year point: This is a pretty long gestation period that we're still in the middle of right now. What explains that overall?
That's a deep question. Silicon transistor technology was not unlike this. The first germanium solid-state transistor was made in 1947, and the first silicon VLSI chip in the late '60s. That's only 20 years, and in our case, we're at 27. [laugh]
[laugh]
The comparison is not exactly fair, because silicon transistors allowed the fabrication of a computer whose theoretical underpinnings had been known for decades, if not centuries. Quantum computing, on the other hand, is an entirely different mode of computing that was only discovered in the 1980s and 1990s. We still don't know exactly what the quantum killer applications will be. Industrial approaches to quantum computing began only recently, about 10 years ago. I think this gestation period is about right -- 20 years of research, poking at things, figuring out what works, what doesn't work. This evolution is just slow. I have read that it typically takes 20-30 years for new technologies to take hold – and quantum computing is not just a new technology, it is revolutionary. But I do feel the pressure. If it's already been 27 years, we better have something pretty soon. [laugh]
Chris, a more administrative question: As you were really getting comfortable at NIST and you became a staff member, what was the basic divide, both in terms of research, culture, and project expectations in terms of basic science versus applied science?
We were in a very special place, and this was partly due to the way Dave Wineland ran his group. It was a very academic group. Dave hired the highest-level postdocs that you could get in atomic physics. The graduate student at MIT I started out with, Dan Heinzen, was a postdoc with Dave just before I joined the group. Everybody wanted to be in that group. But this was also made possible by Dave’s boss's boss, the Director of the NIST Physics Laboratory, Katharine Gebbie. Katharine was an incredible woman, who unfortunately passed away just a few years ago. She was also a JILA/Colorado scientist in her day in the '60s; there’s a great poster of her climbing a canyon in the JILA hallway; she flew airplanes and drove fast. At NIST, her philosophy was—she told me this pointblank later when I moved to Maryland in 2007—to “hire the best scientists and then get out of their way." [laugh] For such a high-level government civil servant to say that is really amazing. She made Dave Wineland’s group happen. I think she also made possible the groups of Bill Phillips, Jan Hall and Eric Cornell. With Dave, those are her four Nobel Prizes, and I know that those four will tell you the same thing. I can’t say that in Dave's group at NIST we could do whatever we wanted. It had to be productive—but we could also do pure science. We were the research arm of the atomic clock division at NIST. Everything we did, so long as it involved atomic ions, had some application to atomic clocks. There were other projects. John Bollinger, another of the staff members in Dave’s group, pioneered a world-class experiment that controlled hundreds of atomic ions spinning in a different type of ion trap geometry called a Penning trap. With Steve Jefferts, Brana Jelenkovic, Amy Newbury and others, John figured out how to cool positrons in an atomic ion trap and how to produce a thin single atomic layer of atomic ions. What did this have to do with clocks? I don't know. It didn't matter. It was pure scientific work. Other examples: we can hold onto these trapped ions for a really long time, and we can sensitively probe forces of nature that maybe have no underlying accepted theory. What if the direction of a spin of an electron has an absolute dependence on whether it's up or down in space? What if the quantum wave equation has a nonlinear term in it? There's no theory for why that should happen. But you can test those theories to very high precision with these trapped ions. I guess you could argue that has something to do with clocks, because if clocks are sensitive to that then you're going to need to know. So, it was just such a great situation, a great mix of pure science but also very hard-core experimental technique. That’s the beauty of metrology – you will always find science by measuring the next digit. Our quantum computing project was actually closer to the atomic clock mission, because we were engineering entangled quantum states to improve the clocks. We didn’t call them quantum logic gates, but that is exactly what they were.
Chris, being at NIST for eight years, did you gain an appreciation for how the federal government supports science that's been useful to you in subsequent years?
I should say that yes, I appreciated that our group was part of the federal government, and indeed we had to acquire funding from various other federal agencies. But the answer to your question is not really; I wasn't really paying much attention to the fact that this was a federal lab, because of Dave's (and Katharine's) leadership style. We were doing pure science, and it could have easily been a university group. Of course, our work was also funded by other federal agencies, like the NSA and DOD.
You were sheltered; you didn't really get exposed to all of that?
Those few years were one of the most productive periods of my professional career, because we had this apparatus and nobody in the world did. So, we just went to town – we had good funding and amazing postdocs, and while we had to get funding, there was almost no competition. But also, I rarely had to worry about government bureaucracy. It was an academic group. That said, I asked Dave, when he hired me, to be involved with the writing and securing of proposals. This was mainly the NSA and DOD, and in those days, it was not onerous – they sent the cash and just asked for a 2-page email every year summarizing what we did. Wow, have things changed.
What were the circumstances for you joining University of Michigan?
I was a postdoc for two years, and then six years on staff at NIST, which is roughly the equivalent of the time one would spend as a junior faculty member. I went to Michigan after 8 years at NIST, partly because it was clear that this field had taken off. Funny, when Dave first hired me, he told me, "Well, you know, if this quantum computing stuff doesn't work out, we'll go back to making clocks." [laugh] This would have been fine. It was great to work with him and his group. But obviously, the quantum computing stuff did work out. Nevertheless, it was an extremely risky move, leaving NIST. This reminds me of a time in '96 or '97. We were toying with additional quantum manipulations of our cold atomic ions: we put an atom in two places at the same time, in a state akin to “Schrödinger’s Cat,” and the way we did that turned out to be the way that everybody does gates nowadays. It was a refinement of the Cirac and Zoller scheme that was cleaner and less error-prone. I do remember clearly the day that I walked into Dave's office and said, "Dave, if we could only get the laser force to depend on the qubit state in this way, then we make this Cat state." I added, "But I don't know how to do it." And he said, "Oh, but you can do it by controlling the polarization of the laser beam." I mention this to note that we were on the exact same page, and it was amazing! I've never had a scientific interaction like that, and even to this day, it's so memorable. I don't think Dave remembers it, even after I reminded him, but it was just amazing. An incredible eight years! I felt terrible leaving NIST, but the reason I left is that we were the only ion trap group really in the US using these cold ions to do these quantum experiments. There was not a single university group doing this, partly because it was thought to be too expensive. Before Dave hired me, I did interview for a few university posts, and the departments – especially those with a presence in AMO Physics – would pepper me with the question of “how could you afford to compete with Dave Wineland.” That was ridiculous, because there were dozens of cold atom groups, and they should have been asked “how they would compete with Wieman/Cornell, Chu, Phillips, or Pritchard/Ketterle.” So, I had a bit of a chip on my shoulder when trapped ions were ready to stick out, and Michigan was all for it. I went to Michigan with nothing but an empty lab. And the first few months I remember I was buying things, but the lab was empty. I'm thinking, my God, what did I just do? I left this group at the pinnacle of their production, and they kept going on of course and I would have to compete. But I think in retrospect it was a good thing, because now there are lots of university groups doing this.
You came on with tenure?
Yes.
And was part of the package that you would be supported with the resources that you felt you needed to build up your own group?
Yes. That's typical. There was a so-called start-up package.
I mean—let me rephrase. It's typical, of course, but as you describe it, you need something pretty good to go to be convinced to leave what you have.
Yes. There were enough resources to do what I wanted, and indeed this type of research was expensive. But startup packages are in the noise compared with the future opportunities to get funding. (That is a good message for young faculty.) It is important to note that Michigan had an existing community of optical scientists and atomic physicists. Michigan was a center for pulsed lasers, high-energy lasers with very fast pulses. We don't use such ultrafast lasers so much in the field of cold atomic physics. But Michigan had a great history in atomic physics, and also a new building with beautiful new labs. Plus, my wife and I had two young girls at home, and we were moving back to where I was raised and where my parents lived. My daughters became very close to my mother and father; this was a wonderful aspect of the move. Professionally, it was incredibly risky, but I'm glad I did it. I think it opened up the door for more universities and faculty to move in this direction. And nowadays this type of research is not so expensive at all. The lasers are a lot easier, the techniques are out there, it is fertile research ground, so we now have dozens of university groups doing this kind of work.
Chris, the opportunity to build a lab from scratch obviously invites existential questions about what you want to do. So, to what extent was that an opportunity for you to switch things up or did you want to continue essentially what you were doing but do it in a different place?
It was easy to do similar things, because nobody else in the world was doing it. Plus, I was using a different atom, a different suite of lasers, a different approach, or so I thought. My closest colleague at Michigan, Phil Bucksbaum, a luminary in ultrafast lasers and now at Stanford, told me, "Well, in a few years when you're doing something totally different than what Dave Wineland is doing, then...” And I didn't hear anything after “then” because I was thinking, what's he talking about? My group is the first university group doing trapped ion quantum computing, so of course I'm going to do very similar things to what I was doing at NIST—I'm going to compete head-to-head with Dave. I expected that, and that's fine. That's what science is. And if there's no competition, then there’s no progress and no interest. In the end, of course, I moved in several different directions, as Phil predicted. It was risky and fun. I said something earlier about wanting control of things. Now I had 100-percent control over what was going on, although at NIST I pretty much did as well. If I wanted to do something different at NIST, Dave Wineland would have let it be, I think. And I’ll admit that it was clear that I was in Dave’s shadow somewhat at NIST. In any case, I wanted a culture of more students, maybe not just all postdocs. Looking back, it was indeed a tricky time. I remember thinking, what the hell, just go for it, having this feeling of reckless abandon in designing your research.
And you were teaching at CU Boulder the whole time you were at NIST, so slotting into a professor's life was not so much of a transition for you.
Actually, I don't think I taught once at CU. I had a lecturer position, so what that means is that I can take students, graduate students, for research. I did take a couple of graduate students, Brian King and then David Kielpinski from the University of Colorado. But I didn't teach classes there, unfortunately.
So, you did have to get into lecture mode?
Oh yes. I love teaching, even though at the time I had never formally done it in the format of a classroom. Research is the pinnacle of teaching, although we don’t get university teaching credit for that. Teaching is so fun, but classroom lecturing is a lot of work. I love the performance part of it. The preparation, not so much. [laugh]
What kind of classes did you teach initially at Michigan?
At Michigan, they had the big classes with recitation sections; those were all taught by faculty, as many research-intensive universities do. I did several of those. That was a lot of face time, twelve hours a week, but not a whole lot of preparation, because different sections of the class were taught by different people and they were all centrally coordinated – the schedule and the plan for the class. This was good for younger faculty like me. But I also taught a few other courses—my favorite class was the physics of music. Oh, that was great! That class did take a lot of preparation, 30 or 40 hours a week outside of lectures. Not enough hours in the week. But I just loved preparing for it; as a classical music junkie, I can find many examples of almost any auditory effect in some obscure symphony or sonata. The class was not for the physics majors; it was open to the whole campus. So, you had to temper the math down quite a bit. But it involved perception of music, the scale system, how instruments work. And the physics department had a great demonstration lab staff. They had the most amazing demos, and we would spend time assembling some type of instrument and measuring equipment, like wine glasses and oscilloscopes and a time-lapse camera to view the vibrations. Beyond that, I also taught introductory quantum physics for undergraduates, and introductory courses on modern physics and waves.
Now, were you the inaugural director of the focus group, the NSF FOCUS Group?
No, no. That was Phil Bucksbaum. He and Gerard Mourou were the initial movers behind the NSF FOCUS Center. When Phil left for Stanford, I took over for the last couple of years.
Now, was that wrapped up in your decision to switch departments to EE and computer science?
Partly it was because a lot of the laser technology was in the college of engineering at Michigan, but also some of the Michigan computer scientists, like John Hayes, Igor Markov and Yaoyun Shi, were getting involved in quantum computing, and they wanted a little more voice in quantum. But that was a courtesy appointment, I wasn't really active in the engineering department. One thing that did come out of Michigan in my work is the use of pulsed lasers for quantum control of atoms. Nobody was doing that, and if I hadn't been at Michigan, I probably never would have done it either. A pulsed laser is now the main laser we use these days in trapped ion quantum computing. Such lasers are very stable and don’t demand much attention, unlike the conventional continuous-wave lasers used for cold atomic physics. This is because such pulsed lasers are made for the silicon industry for lithography and semiconductor fabrication. At Michigan I got up to speed with what these lasers do and how they can be used, just by hanging out with pulsed laser jocks.
Again, to go back to this foreshadowing question, was it at Michigan where you found yourself increasingly devoted to quantum computer questions in and of themselves, and not the sort of research areas that were peripheral and formative to later interests?
We were still refining the platform of trapped ions. I had a close collaborator there, the theorist Lu-Ming Duan, who has since moved to Beijing. I had a big hand in recruiting him in 2004. Immediately after he started at Michigan, he came to my office and asked me what I was doing. I had known him for a few years. And I said, well, we're doing this, this, and that. And he came back two weeks later and—he didn't say it this way; he's a very polite guy—but he basically said, you should be doing this, this, and that other thing. And he was right. There was one idea in particular he had of linking atoms with photons. Instead of coupling ions based on the pendulum motion I talked about before, if you have two separate groups of atoms you can couple them with single photons. So, this is probably what our lab is best known for at Michigan: demonstrating the first protocol to link an atomic qubit to a photonic qubit and entangling atoms over a distance through the photons. But the entanglement rate was very slow, and the fidelities weren't very high. So, you can see that these things take time between demonstration and product—we were still refining the art, trying to find maybe a better atom or protocol. At the end of my Michigan days, we switched to a different atomic system, ytterbium, and that was the one that could use the pulsed laser (and also support longer wavelength photons that could get through an optical fiber). So, in 2007, high-level quantum computer architectural ideas were starting to come together - on how to scale up this trapped ion system. Back with Dave Wineland in '96 or '97 when we were doing these first gates, we realized that these were just sort of proof-of-principle demonstration experiments. But to scale, to build a quantum computer with millions of qubits, not just two or five, we thought maybe you would need to do that in a solid state, somehow find an effective solid-state system that acts like the ion but can be scaled like silicon integrated circuits. Back then I didn't know much about computers or silicon, but I knew all transistors were made of silicon. And it sort of made sense that this would be the ultimate way to build a quantum computer. In those 20 years or so, 25 years, I've completely flipped my thinking on that, by 180 degrees: I don't see how solid-state systems will ever scale without major physics breakthroughs. Maybe this topological qubit thing is going to work out; that would be amazing. But without major breakthroughs I think the atomic systems are the ones that will scale. It's not going to be easy to scale atomic systems, but I think they are the winners because we don’t need breakthroughs. This is where industry and engineering can really help. So yes, these ideas started becoming serious when I was at Michigan.
Chris, on that point, it's a transition, it's multidisciplinary, it's cutting edge. So, to give a window into where you saw all of this fitting in more broadly, what would be the conferences that you would go to? What are the best journals for you to be publishing in in terms of getting your research out to the audience that you care about most?
The glossy journals Science and Nature remain standards in this field, but at the time there were also new quantum information journals, specialty journals. Even Nature spawned Nature Physics and then more recently Nature Quantum Information. Like others in my field, I tended to publish in the conventional physics journals like the Physical Review family and the glossies, plus maybe a quarter of our papers went into quantum journals, so that was a new thing. Of course, the physics arXiv repository started in the mid '90s or so and everything just went there. The arXiv is fantastic: when you read something that appears there, even though it has not been peer-refereed, it is the real thing – especially if you know the group or organization behind the paper. They may have to correct a few things, a few words, but who cares about that; the science is out there from the beginning. I don't want to diminish the refereed publishing world, but the arXiv has been a great advent in the business of publishing. We started writing articles in the 2000s decade about architecture, very high-level things well beyond simple atomic physics and trapped ions; this is when I began collaborating with Jungsang Kim. One example is the linking of atomic qubits to photonic qubits for communication. In the context of quantum computers, if you really want to scale, you better have some modular approach, like how a data center is wired together today, and that would be photons. This direction was more in line with computer science, information networks, and so forth. That's pretty far astray from atomic physics.
What was the opportunity that compelled you to switch to College Park?
This would have been 2007. I knew that the University of Maryland was starting up a NIST laboratory on campus just like JILA (we sometimes called it JILA-East actually). This quantum physics laboratory became the Joint Quantum Institute or JQI. That started in '06, and I was already in discussions with Maryland when it was forming. They were going to have a new building by 2010.
And you were in discussions as a consultant or because you were considering being a part of this from the beginning?
I was interested in moving there. I often joke that partly it was because I was tired of flying to Washington, so why not live there? Quantum information and quantum computing were gaining traction. All of the usual science agencies, plus those from the defense and intelligence communities, even NASA, were expanding their interest in this field. With Jungsang at Duke, I was leading a charge on behalf of universities on developing trapped ion technology. We had a vision of building scalable and modular quantum computer systems, and that was going to be very expensive. I’m sure this could have happened at Michigan, but I knew it would be easier at Maryland because they were better connected and had a federal laboratory right on campus. So, at this point I was about 40 years old – a good time to put it into the next high gear, so this was the right move. I had great colleagues at Maryland, and was very well connected to the Washington science scene. College Park is the main research university inside the Washington Beltway, and the advantage this provides is almost unfair. When there's a DARPA program review on Thursday and they need a speaker to fill in, it's just a train ride or a short car ride down there. The State Department needs a panelist for some international science workshop, 10 miles away. And, of course, with the NIST component and the new Joint Quantum Institute at U. Maryland, and also the Laboratory for Physical Sciences near campus (basically an NSA laboratory deeply ingrained in quantum computing), and the Army Research Laboratory, this was a great place to pick up the tempo. If not for working in the Washington area, I probably would not have gotten roped into leading what became the 2018 U.S. National Quantum Initiative, to jump ahead for a bit. Working closely with people from the National Photonics Initiative (and the optical societies OSA and SPIE) and Mike Raymer, a friend and optical physicist at the University of Oregon, we were given access to professional lobbyists, and the snowball just kept rolling downhill. I testified for both the House and Senate on quantum technology a few times in 2017-2018. At one point in 2017, the House Science Committee staff asked for a white paper on what the U.S. government could/should do to foster development of quantum technology, so Mike Raymer and I, with great help from the OSA GR office and the lobbyists from The Podesta Group, basically penned the blueprint of the NQI bill that passed at the end of 2018. The whole process was lightning fast, and it wouldn’t have been possible had I not lived near Washington.
And you came to Maryland in 2007 with an endowed chair. Who was or is Bice Zorn?
Gus Zorn and Bice Sechi-Zorn were a couple, both high-energy physics experimentalists at the University of Maryland. Bice passed away in the 1980s, and Gus about 20 years later. Bice Sechi was Italian (her first name rhymes with “peachy,” not “rice”) and met Gus at the University of Padua doing physics in the 1950s. Unfortunately, I never met them. But I've seen many pictures of them, heard their stories; they were wonderful citizens of the university and the department. They left an endowment for two chaired professorships in the physics department: I have the one named after Bice, and Hassan Jawahery has the one named after Gus. By all accounts, Gus and Bice were just great people. I wish I would've been able to meet them.
Any relation to Jens? Did he have any comment about that?
No. I asked him. Jens is an atomic physicist and friend of mine at Michigan, but no relation to Gus. Also, no relation to the left-handed quarterback, Jim Zorn. [laugh]
[laugh] Ok. Once JQI got started and you were part of that from the beginning, what was sort of its mission statement? What were, like, the topline long-term goals that it set out for itself?
JILA was formed in the '60s, and it used to stand for the Joint Institute for Laboratory Astrophysics. They were measuring atomic collisions as it related to the upper atmosphere and space, astrophysics. Unlike JILA, JQI was more focused on quantum physics, on the physics of different quantum systems, on entanglement, many-body physics. I said this earlier that condensed matter is complicated because of entanglement. We often replace the concept with other words and talk about “correlated matter” or “many-body quantum systems.” But now we had all these tools from quantum information to study such complex matter in new ways. JQI was also formed in the shadow of a strong core of researchers studying cold (neutral) atoms, led by Bill Phillips, Steve Rolston, Trey Porto, Paul Lett, Kris Helmerson -- just a powerhouse in cold neutral atoms, and then Luis Orozco, well-known in quantum optics. They were starting to get the tools to get these atoms to talk to each other in a many-body way. My work brought trapped ions into the fray. There was also other work involving photons, linear optics. There was a lot of condensed matter work that involved spins, using spins in silicon to mediate entanglement. Sankar Das Sarma is a top-flight theorist in condensed matter who is deeply involved with that, and several others, Victor Galitski, Victor Yakovenko, and others. The JQI mandate was to study many-body quantum physics. Now, it wasn’t a quantum information center, even though much of the research had applications to quantum information. They were a more scientific, broader physics center where quantum information was a subset or an application or a spinoff of what they were doing. So, they found a lot of resonance at NIST to form such an entity, but partly based on the great success of JILA.
And was this a different arrangement in the sense that at Michigan there wasn't this out-of-the-box institute for you to be a part of, and there was a lab for you to build? How did that play out at Maryland? Did you also build a lab or was this essentially all of the lab that you would've needed on your own?
After a couple of years, they had the new building, and it was one of the highest-quality lab spaces I have ever seen. When I moved from Michigan, I moved the lab, so I had equipment and all. But in the move to Maryland I had to double down. My research group grew to about twice the size it was at Michigan. So yes, I had to build my lab. It was fine because we were changing atomic species, which means you have to change lots of the infrastructure anyway, like the suite of lasers needed. But I had known Bill Phillips and most of the Maryland folks for many years, and it was a much bigger, much richer community right in my field. And there was now a community of students and postdocs doing very similar things, so I think it was a great move right as I wanted to expand. Maryland has a big physics department, and we attracted great graduate students. Michigan is a great university overall, but I think Maryland/JQI, with their physics department and the assets in greater Washington, DC, was a step up for me.
Who were some of the people on the faculty at Maryland that you immediately recognized it would be exciting to have as colleagues?
I mentioned most of them already. All of my JQI colleagues from Bill Phillips to Sankar Das Sarma, and the neutral atom people, there are five or six others. We hired Gretchen Campbell shortly after that and Vlad Manucharyan on superconducting circuits. The hiring was advancing rapidly. Some more condensed matter folks like Maissam Barkeshli, a theorist, and Jimmy Williams, an experimentalist. Many people were coming on board. But I think probably the core were the atomic physicists from NIST, Paul Lett, Trey Porto, Gretchen Campbell, Steve Rolston, and Bill Phillips. Ian Spielman came right when I did as a younger guy, a real high-flying atomic physicist who switched over from condensed matter experiment. Having them down the hall from the laboratory, that was really great. I became quite close with Luis Orozco, who had an especially interesting and pedagogical approach to all things quantum (and shared with me a deep interest in music). Luis didn’t come from NIST, but he was one of the first atomic physicists at Maryland, and we had very strong overlap. So, it was a huge community. And all their students and postdocs, there was just a society there.
Chris, as you envisioned it, it would be a lot easier being local to all of these government agencies. How did that play out in actuality, in terms of the most important federal agencies to work with on a funding basis, but then also to work with on an intellectual partnership basis?
There's one agency in particular that I worked with a little bit at Michigan, and that certainly could have continued. It is IARPA, which is the intelligence community version of DARPA. They're smaller, but they have had the biggest quantum computing program in the world for many years. In 2011 they started a new program, and this is when Jungsang Kim and I started working together more heavily. They wanted an architecture for dealing with many qubits. The great thing about IARPA is that when they funded something, they funded it seriously and to the hilt. And it wasn't just the money, it was the ability to reel in industrial partners who could build things for you—custom components are very expensive. We had many ideas on using commoditized industrial components that needed to be somewhat modified for our purposes so we could integrate them into our systems. With IARPA, we were able to start this path toward systematizing trapped ions. And IARPA was about one mile away from my laboratory and it was very easy for them to visit the lab anytime they wanted, and we went to the same seminars. I would just see them a lot.
I'll frame my subsequent questions based on how you answer this one, but for intelligence work, did you ever need a clearance?
No.
So then, this is an easy one. Can you explain why quantum computing and the related fields would be of interest to intelligence agencies?
The main reason stems from the use of quantum information for communication. And there are two sides to this. The first is what I mentioned before: Shor's quantum factoring algorithm. The inability to factor large numbers is, of course, why many of our cryptosystems are secure. When you buy something on the internet, you actually release a very big number in your software that only you know how to factor because you formed that number by multiplying two numbers together. You keep those two small numbers as your key. And then you let that number go out in the public domain. Now, if you want to send a secret message to somebody else, they've also released their own large number in the public domain. You don't know their factors, but you know your factors. So, in a very interesting scheme called RSA encryption (RSA stands for Rivest-Shamir-Adleman), you can use your keys and the receiver's big number in a way that the receiver can crack it by using your big number and its own keys. It's called public key, or asymmetric, encryption. The only way to break it is to factor that big number. Quantum computers can factor numbers fast, at least if you have enough qubits and enough fidelity. So, the intelligence agencies, NSA in particular, had to pay attention to this field. I think Shor's algorithm shook the NSA to its core because factoring is assumed to be exponentially hard. Now, it still is hard because we don't have a big enough quantum computer, but NSA is in a position where they have to know, what are the impediments to a big quantum computer? What will it take to get there? Even if we don't have one now, when will we get there? It's very important to know because that impacts when and how you encrypt things. If we're going to have one next year, we should probably stop encrypting right now using RSA, but if it's 50 years away maybe it's not such a big deal. So, decryption is one big application. The second side is encryption, as opposed to the decryption above. You can use quantum information to encode data in a way that nobody can eavesdrop, because if they do, they alter it. That's a very cool fundamental aspect of quantum information that is loosely called quantum cryptography. Those two reasons, breaking codes and making codes, are the fundamental reasons that intelligence agencies are interested. But they also are interested in general high-performance computing if there are new opportunities for optimization or new types of algorithms. The intelligence agencies always need to be on top of the latest and greatest forms of computer.
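The factoring-based scheme he describes can be made concrete with a minimal, textbook-style RSA sketch in Python; the tiny primes, exponent, and helper names below are illustrative assumptions only, and real deployments use padded keys with moduli of 2048 bits or more. The security rests entirely on the difficulty of recovering the two secret factors from their published product, which is exactly the step Shor's algorithm would make easy on a large quantum computer.

    def egcd(a, b):
        # Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g.
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def modinv(a, m):
        # Modular inverse of a mod m, assuming gcd(a, m) == 1.
        g, x, _ = egcd(a, m)
        assert g == 1
        return x % m

    # The two small secret numbers you keep as your key (toy-sized here).
    p, q = 61, 53
    n = p * q                   # the big number released to the public
    phi = (p - 1) * (q - 1)     # easy to compute only if you know p and q
    e = 17                      # public exponent
    d = modinv(e, phi)          # private exponent; requires knowing the factors

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
    recovered = pow(ciphertext, d, n)  # only the key holder can decrypt
    assert recovered == message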
I'll ask a Hollywood version of the same question, which is: With the ability to break codes and make codes, how will the intel agencies know what the bad guys are doing before they do it?
[laugh] Oh, boy. That's a good question. I believe you just need to be ahead of this technology. It is going to change and advance, and the most important thing is to be in the lead. You shouldn't be a luddite and put your head in the sand and say, we're not going to do quantum computing because it could be used for bad things. You just have to be ahead of it. And the US government is one entity in the world that can ensure they will be ahead of things.
And I assume all of this applies for cybersecurity, as well?
Absolutely, yes.
Yeah. In what ways was the development—I assume it's QuICS, Center for Quantum Information and Computer Science—where was the overlap and where was the mutually beneficial relationship with JQI?
As I said, JQI was more scientific in nature and not aimed toward quantum computing in particular. But it was felt by people at NIST, such as Carl Williams, an atomic and quantum information theorist who was one of the founders of JQI, that academic computer scientists weren't getting into this field enough. And I agree, it seems a little strange – here we have a new form of computing, yet computer scientists, academic computer scientists, are not taking risks and going into this field. It’s not like they have a product to peddle or quarterly revenue reports, so why aren't they pushing on quantum computer science? But with the formation of QuICS at U. Maryland, housed in the computer science department, we were ahead of the game in bringing quantum to computer science. We have a good college structure at Maryland because the computer science department is in the college of science and mathematics, including physics and chemistry of course. At Maryland, computer science and physics departments are closer, at least organizationally, than even engineers. So, Maryland was a great place for this, and our computer science department was very interested in hosting QuICS. There are three faculty in computer science at Maryland that specialize in quantum – big names too like Andrew Childs and recently Daniel Gottesman. So QuICS was meant to be the needed bridge between science, physics in particular, and computer science. The nearby NSA (through LPS) also plays a small role in the QuICS Center.
The origins of IonQ raises all kinds of questions with regard to intellectual property, how you might define an arrangement where you're doing research at the university, but it has commercial value. So, the first question is: What was the factor that led you to create IonQ in the sense that it was going to allow you to do things that you couldn't do simply as a university professor?
I mentioned earlier my work with Jungsang Kim that was really starting to pick up when we started performing for IARPA in their multi-qubit program, which was superseded by the LogiQ program, making error-corrected qubits. These programs allowed us to think of quantum computers from a systems point of view. Jungsang is more of an entrepreneur than I am and more of a real engineer. He spent many years at Bell Labs in the early 2000s after the internet “dot-com” spike. He worked on photonic networks, switches, wireless, and he knew how to develop a product. Jungsang took a position at Duke in 2005 in the electrical and computer engineering department. He wanted to pick up some revolutionary technology, concentrate on it from a systems-level view, and develop it. He picked trapped ion quantum computing. The reason was that when he was at Bell Labs he had worked with a fellow named Dick Slusher, very well known in quantum optics, and together they laid out the blueprint for how to build an ion trap on a chip where you could print the electrodes and wire them up. And this will sound a little esoteric, but I had been building ion traps by hand since day one with wires and glue and tape (figuratively). The reason Dick and Jungsang liked this surface trap, where all the electrodes are on the surface and the ions float above it, was based on systems engineering. When you scale up, with thousands of electrodes needed, you can't wire all the electrodes on the surface because you would run out of space. These electrodes are pretty small, perhaps just tens of microns across, but the wires and bonding pads are much bigger. How do you get all those wires in there? According to Dick and Jungsang’s chip trap design, you could drill down using a so-called “via” to bring the lead through the chip onto the back side. Standard procedure for computer chip design but not for ion traps. Nobody really uses those vias yet, and you certainly didn't have to 15 years ago. But they were sort of envisioning in the long-term future that you would need to do that simply to wire up the electrodes. So that's a very far-reaching thought, and it's not exactly scientific. It has more to do with the system. Jungsang and I got along so well because we both recognized our complementarity in science and engineering. We started collaborating very heavily in '06-'07, and it worked wonderfully. We had different takes on things, but we were of the same mind in the big picture. Boy, I've been blessed with amazing collaborations throughout my whole career, from Eric and Carl to Dave Wineland, Phil Bucksbaum, and now Jungsang. Now, why did we form IonQ? Well, we got to a level of experimental performance and needs in about 2012 where we recognized that an industrial approach was the only way to scale ion trap quantum computers. Our little secret was that we knew ion trap systems would have the best performance and would not rely on scientific breakthroughs. This is not to say there are no challenges, but they are engineering challenges, and nobody had attempted engineering ion trap systems before IonQ.
Define the distinction, Chris, "an industrial approach." What's the difference? An industrial approach versus an academic approach. What does industrial offer that academic doesn't?
An industrial approach allows you to build things that can be used by a third party. An industrial approach involves building reliability into the system, understanding and beating down risks, and usually this comes with a system design that does not have so much flexibility like we have in research experiments. It’s not research that can follow any direction of interest, but it is highly directed so that the systems work as designed. If you build a cell phone at a university based on graduate student research, it would probably not be usable by the general public. You could try but it would inevitably come with defects and nobody would want to use it because it would be breaking all the time. The beautiful thing about your well-engineered cell phone is that you can use it without knowing or caring what's inside. It will ultimately be possible to build quantum computers that way, but it probably won’t happen at a university because that's just not the mission of a university. At the university you are training students, investigating with very high risk. You don't build a helicopter at a university that somebody would actually use. I mean, nobody is going to want to ride in that helicopter if they know a bunch of grad students built it.
[laugh]
There's also money. It is capital intensive to build reliable equipment, especially of a new type. The first cell phone off the line might cost $10 million. But after building a few billion of them, now they're only a few hundred bucks, because you can scale, you can manufacture and stamp them out. There are specifications and managing fabrication risks, and this is the art of the engineering that I'm still learning. In 2012 or 2013 we started hitting certain architectural issues with quantum computers like the photonic interconnects between collections of trapped ions. We sort of knew how we were going to scale this thing. We had a blueprint for it, at least on paper. Now, to turn that into manufacturing is another thing. A lot of research is needed to bring the risks down. I can’t say we were ready for a company back then, but we knew that at some point our activities would probably evolve into a company. Here we would hire engineers, professional engineers not just academics, lots of software people and control engineers, but certainly not just atomic physicists. They would design, de-risk, and build a system that would be reliable. It would have specifications and tolerances, things that we don't usually do in scientific research. But make no mistake, when we build these things, they will be deployed for an entirely new type of science – just like regular computers were decades ago.
So, what would that end-use product be that would never be buildable or viable commercially if it were created in an academic environment?
It's a programmable quantum computer where the programmer doesn’t need to tune a laser, move a mirror, or change the temperature of the housing, things like that. A programmable device that somebody who knows nothing about quantum mechanics could use. Just like today’s computer programmers don’t need to know anything about the physics of semiconductor transistors.
What was the learning curve in terms of best practices with intellectual property, with your responsibilities, your time requirement as a professor? What were some of the things that you picked up along the way as you retained all of your affiliations at UMD, but you became more deeply involved in this commercial enterprise?
Well, the good news from the get-go is that anything done at a company to industrialize or commercialize a trapped ion quantum computer is going to be of great interest to my research lab at the university, and vice versa. So, there was no conflict in terms of use or in terms of the interest in research and the mission of the university. The conflicts were mostly administrative. There are lots of people maintaining companies while having an academic standing. For IonQ, we had to make sure that the students at the university were totally separate from the employees at the company. They could (and do) talk to each other as needed, but neither group could work at the other place. We would especially not have a student working at the company on a graduate student stipend. Another important conflict has to do with intellectual property, which is usually a really sticky point at universities. On this, I must say that the University of Maryland and Duke together made the genius move. They decided that they would license exclusively to IonQ all IP in our area established and owned by the universities, without royalties. Well, what does the university get out of that? Instead of royalties they acquired a small equity stake in the company. It was a bold move; the universities shared some of the risk of the company going forward. I don't think this is standard -- if we were at MIT or Stanford, the company would probably never have been possible. I should also note that there's an interesting player in the mix here, a very colorful venture capitalist by the name of Harry Weller. Usually when you form a startup, you give many dozens of pitches to many venture capitalists and hopefully one of them strikes some resonance. Well, we didn't visit VCs; they came to us. Harry Weller visited me at Maryland. He read the paper Jungsang and I wrote, the architecture paper about how to scale an ion trap quantum computer that we published in Phys. Rev. A. I don't know what a VC was doing reading Phys. Rev. A, but I learned that Harry had a physics degree from Duke, as it turns out. When Harry walked into my office, he waved this paper in my face, and said, "You know, this reads like a business plan." [laugh] Jungsang and I thought it was much too early to start a company—this was 2014. But I liked the guy and got to know him. Jungsang got to know him. We also got to know that he is a high-flying venture capitalist, nothing typical. He's probably the top VC on the East Coast. And I got this knowledge through another Kim. His name is Jeong Kim. Jeong's name is on the engineering building at Maryland, he's on important boards of UMD, and I would run into him every once in a while. And he told me, he said, "So I heard that you're talking to Harry Weller." And I was surprised that he knew this. But he looked at me closely and said, "You must be really important." So, I looked up this guy, Harry Weller, and found out that indeed he was a hotshot. He really wanted this to happen.
Chris, what might Harry have known about what you were doing that you didn't know yourself?
Gosh, I don't know. We did have this business plan in a sense, but he called it out. In that paper we put the roadmap out for what later became IonQ’s roadmap. It seemed to be based on a lot of research, but every individual component had been demonstrated already. So, I think he took it more seriously than I did from an industrial standpoint, he had a hunch that I just don’t know how to calibrate. Interestingly, it turns out the two Kims know each other. When Jungsang Kim was at Bell Labs around 2000-2005, Jeong Kim was the director of the entire institute, so he was Jungsang’s boss's boss's boss's boss. But they were both Korean Americans, and they sort of knew each other. And so that triangulation really gelled us all. And so Jungsang started to talk to Harry, more influentially than me. Harry became our champion, and Harry's the one that convinced both Duke and Maryland to go after this exchange of IP without royalties. And so, yeah, he convinced us to form the company before we would've naturally formed it.
Mm-hmm.
And this was in the fall of 2016. He seeded the company with $2 million, and that was to hire a CEO – David Moehring, a former student of mine who was the leader of the quantum program at IARPA I mentioned earlier – and to start prosecuting IP based on the university work, which is very expensive. But then… Harry tragically passed away in late 2016; a young, vibrant guy in his mid-40s. This was such a shock to everybody, and for us, we were sort of left holding the bag. His company was NEA, New Enterprise Associates. And we had to become quick studies of pitching and understanding the VC game. So, in the winter of 2017, Jungsang, David, and I spent a lot of time in California working with NEA and other companies. Fortunately, we were able to hook NEA and Google Ventures to invest $20 million in IonQ in 2017, and then we got started operationally. We were able to hire people and start to build systems. But we didn’t have the champion in Harry anymore, so we had to grow up fast, business-wise. As to your question about the timing and why form IonQ, it was very smooth with the IP and the universities. The chief legal counsel at Maryland, Mike Poterala, was totally behind this, and so was his counterpart at Duke. Jungsang and I each have conflict management plans that ensure that students understand that they might invent IP that gets licensed exclusively to IonQ, so these students can't go out and form their own company and use that IP. On the other hand, all of that IP is paid for by IonQ. Many of my students have a lot of patents, and that's not typical in physics graduate school, I guess. Universities generally don't prosecute patents because they don't have the expertise to make the call on what is a worthy investment. But anything in my field gets paid for by IonQ to patent, and all the license rights go over to IonQ, and the students have to know that. So that's one thing I have to tell them: I want you to come up with patent ideas. It's great. You'll be inventors for patents. You'll be the recorded inventor, but you won't be able to license it to a third party.
Chris, grabbing the attention of the largest venture capitalist on the East Coast indicates clearly there's big money here. What is that? What is that path to a big breakthrough?
Well, as you will read, venture capitalists, they make big bets that are very speculative. I'm told that the success rate of a venture investment is, like, 10 percent or something.
Yeah, I've heard that. Yep.
Yeah. And so therefore, the wins better pay better than 10 to 1. Quantum computing is one of those. You might say it's speculative what quantum computers will do in the future, but whatever it is, it's going to be huge because it has the potential to revolutionize the way we compute, and it'll allow us to tackle certain problems that we couldn't otherwise. And so, it's not like we're producing a new type of scooter where we have to not only make a cool device, but also make sure that it's better than all other scooters out there. Quantum computing will have no competition. When you build a quantum computer, people will use it. It will be used by all sectors of the economy. So, bets like these are what—I think venture capitalists live for stuff like that. And it’s true, six, seven years ago there weren’t very many VC bets in quantum. Ours, you might argue, was maybe the most solid. We had a very concrete plan to build our system. It wasn’t based on physics. It wasn’t based on finding that mythical topological qubit, even though some companies do bet on that. It was based on things that had been demonstrated in laboratory environments. So, I think that moved some venture capitalists to think that this won't be ten times, this will be a thousand times the investment.
You were CEO for, what, two years?
No. Less than a year. Our first CEO David Moehring knew our tech well. He knows the government very well. So, he was the founding CEO back in 2017 for a couple of years. He left in the Fall of 2018. And so, I took over for almost a year. I knew it wasn't going to be a permanent thing. I didn't want it to be. But we were building systems and starting to look up, and I could hold the fort while we had a search for the current CEO, Peter Chapman, who came in around spring of 2019.
How'd you do as CEO? How were your business chops?
Oh, brother. [laugh]
[laugh]
Yeah, maybe the nadir of my career. Well, not exactly that. We weren't a revenue company, so I didn't have to oversee the books and so forth. We were spending money, that's for sure.
Yeah.
But we were building machines, hiring good people. And so, I guess I shouldn't be so negative on that year. I'm not sure I have very good business chops. But I certainly held the fort and started pushing the company to be more outward-looking, to show our results to the world.
What other supporters came in subsequent? You mentioned Google, there was Harry initially. Who else is now involved?
Well, when I was CEO, we had to start raising more money. We were running out of that 20 million, with 35 employees on the payroll and lots of capital equipment expenses. We ended up raising another $65 million, I think. And that was really architected by the new CEO, Peter Chapman. I kind of started the things going for that raise, but Peter definitely closed the deal. So, after NEA and Google Ventures, we had a few other big players like Samsung and Mubadala, which is the sovereign fund of the United Arab Emirates. They have a very big venture fund in San Francisco. There were several others, Bosch, Airbus, other strategic investors who not only invest in the company, but their parent companies had lots of interest in quantum computing themselves. Bosch and Samsung, for instance, make certain hardware, optics hardware that we could use. And there are several other smaller groups that came in to add up to that $65M. This funding was going to give us a three-or-four-year runway with 60 or 70 people, so we were growing steadily, and now we're at about 75.
I appreciated the way that you contextualized the 27-year gestation period that we're still in with quantum computing. I wonder if you can transfer that answer to the world of venture capital in terms of expectations, in terms of turning a corner so that this endeavor is one of the 10 percent and not one of the 90 percent?
Well, I think the one thing we didn't talk about, and this happened around the same time we formed IonQ, was that IBM and Google, Microsoft, and then, a little later, Intel and Amazon, these behemoth companies, they invested big time in the field with their own teams. They started building experimental efforts in quantum computing. Intel obviously based on silicon and silicon spins. Google and IBM really led the charge using superconducting circuits. Google’s play was very interesting – they basically invested in a particular laboratory at the University of California at Santa Barbara led by John Martinis, who is one of the top superconducting circuit physicists/engineers. And IBM, for their part, they did it all in house, but they have a record in superconducting circuits. They've been doing this ever since the '80s for regular Josephson junction computers. And so, IBM spun up a big group, and they are owed a whole lot of credit for putting some of their early systems on the cloud so people could use them. And so, it wasn't just the VCs here. I think the VCs appreciated that these big companies were already investing, so in a sense it vetted the bets the VCs were making. If Google and Intel and Microsoft are doing this, well, maybe it's not 10 percent, maybe it's 13 percent or something. Maybe there's a higher probability this will work. It's still speculative, and the VCs funded a superconducting group, Rigetti, and a photonics group called PsiQuantum to a high level, and they funded us and some other groups. So, I think they feel much more comfortable with that type of investment. I shouldn't speak for the VCs, but I think if the big companies hadn't stepped into the field, it might have delayed things for the VCs to get in, if at all.
This all begs the question, Chris, the extent to which the goals of quantum computing are singular, given that there's all of these different industries that now have an interest in this. And maybe the best way to frame that question is to look at, for example, Tesla and now what Ford and BMW, everybody's in this rush to recreate what Tesla has done. But the singularity there is that there is this shared expectation across the industry that an electric car is the way of the future, and who's going to do it in the best and most profitable way. Would that also apply in the world of quantum computing in the sense that, are there different industries that might define what a quantum computer is differently, and so that further blows up this notion of a horse race, that there's some singular notion of what a quantum computer is and then it's the question of who's going to get there first. Is Amazon's concept of a quantum computer very different from Pfizer's quantum computer and so on and so forth?
That's a really good point. That's a very good way to say it, because I think that to me is what makes the field so broad. It's not going to be a singularity. It's almost certainly true that quantum computers will be used in wildly different ways, just like regular computers are now. We have them in our watches and cellphones, we use them in our cars and dishwashers. An electric car, I guess, can be used in many different ways, but the device is really the same. So, I totally agree that, yeah, Amazon and Pfizer, they look at quantum computing in totally different ways. And it will probably involve, in the long run, different types of hardware. There's going to be room for a huge ecosystem when this stuff hits paydirt.
So, to what extent is this entire endeavor a zero-sum game, as it so often is in the world of venture capital? In other words, whoever is smart enough and lucky enough to invest in this thing first, when it's viable, they're going to get the payoff and their competitors won't. So, let's say best case scenario, there's some breakthrough, IonQ is at the center of it, and all of the industrial supports and partners you have get all of the benefits of that breakthrough. What does that mean for all of the people that were not part of this from the beginning? How much of this technology is proprietary and how much of it is—well, what did Stalin see when he saw the atomic bomb in Japan? He saw, oh, this is viable technology. We can do this, too. How do you think about these things as this plays out?
Maybe it's related to my business chops but— [laugh]
[laugh]
—I tend to be an anti-zero-sum game person.
Yeah.
I tend to think that if any one segment of this field dramatically advances, it's just going to be good for absolutely everybody. Even the topological qubit: if Microsoft's investment in the topological qubit pays off and they have a breakthrough next year, I'm not sure exactly how, but I know that's good for IonQ. [laugh] Maybe we make a type of an ion trap quantum simulator that allows us to figure out the generic interactions needed to make a topological material. I don't know. So, I don't think these things are zero sum. I just don't—especially something this speculative, we don't really know where it's going to hit. So, how's that for an answer? It's maybe simple-minded, but...
One name we haven't mentioned yet that might make it more zero sum, of course, is China. Where is China in all of this, in terms of the broader story, given that it's full-steam ahead in basic science and full-steam ahead in engineering and computers? Where are China's contributions in quantum, and where might you see those advances, as China itself very much might see quantum computing as a zero-sum game in the way that you might not?
Yeah. The Chinese scientific enterprise is very interesting, and it's very hard to appreciate, certainly as an American. I tend to think of the Chinese government, Chinese industry, the Chinese laboratory system, the Chinese universities, I think of all of them as different parts of the same thing. They're intertwined in ways that you would never see in the US. Europe too, but to a lesser extent. And that can be good for certain things. They can all move together with lightning speed and efficiency. And money is the least of their concerns. When I was in graduate school, I think Chinese research was definitely inferior to that of the US, pretty much across the spectrum. In the last 20 or 30 years it's just truly remarkable what's happening, all the advances and discoveries over there. They've really come up—as a country, their scientific talent is through the roof. However, I don't see a whole lot of leadership there. Maybe that's an American thing. In the US, to do science, you get funding from any one of a dozen organizations or agencies, and they all have their own mission, their own style. NSF is blue skies, very democratic. They take all comers with a good proposal, in any field. And then you have IARPA, very directed—or DARPA, very directed in certain ways. And by having that diversity of portfolios, I think that's amazing for the country no matter what the size of the money pot is. I think this is a field you can't simply throw money at, and China has thrown money at things like beaming up single photons to a satellite. It makes a buzz, it gets noticed. I'm not sure how much the satellite demonstration really advances the science of quantum information. But China as a country sees quantum computing as one of their mechanisms for passing the Western world in high tech. And so, this makes everybody take notice. But I'm not sure where the zero sum is between the US and China in this field. Again, it's a horse race in a sense, and we just have to stay ahead of it. But look, the scientist in me applauds this. I mentioned my close colleague, Lu-Ming Duan, before, who's now at Tsinghua University in Beijing. He's a wonderful scientist. And I go over there whenever I can and spend a little time—I usually go over there every year, but I didn't go over in 2020 for obvious reasons. And it's good for science that people like Lu-Ming Duan are doing well. And he's doing ion trap quantum computing (as well as neutral atoms, superconductors, and photonics). He was a theorist and now he's over there doing all kinds of experiments, and I think that's great.
Chris, just to bring the narrative up to the present, we talked about your move to Duke, but just in the recent months, what are some of the things you're working on both on the professorial side and on the business side?
Hmm. Well, it's funny you ask that. The last few months have been incredibly busy on the business side because IonQ is slated to become a public company in summer 2021. We're not sure exactly when, but it's been announced. And this will give us access to about $600 or $700 million in the bank. The great thing about that is that we have this concrete roadmap and now we're going to execute on it. The harrowing thing is that now we have to execute on it.
[laugh]
This is it, right? [laugh] The moment has come. It's all in our control, and I love that. This is what we wanted. We get to produce or else. Being a public company means you have a public stock price, and you have stockholders, and you have quarterly reports and so forth. And while I won’t be taking the lead on those things obviously, I'm going to continue to be involved with the technology at IonQ. So, the last many months I've learned a lot more about the SEC, I've learned a lot more about roadmaps and business planning, accounting, so many things that I kinda wish I didn't know. [laugh]
[laugh]
And it had to happen. It's all for a good cause. I think we're going to now not have to worry about capital when we're building. We can do things in parallel. We can do research as a company. So, that's been marvelous. And Jungsang Kim, my colleague at Duke, we talk to each other all the time about both the university side of things and the company. And you're wondering, well, what's left to do at the university? The company is going to do all this great stuff; why bother with the university? Well, that's a good question… Building quantum computers of the type that people can use without thinking, that's great. However, quantum computers will continue to be used for science. Again, I go back to materials research, high-level entanglement studies, many-body quantum physics. But with machines that you can't tune, that you can't tweak, you might not be able to do certain scientific applications. So, we're going to build research systems at Duke for scientific applications. In the Duke Quantum Center, we already have a couple of quantum computer systems. We're going to build more of them, but they're going to be based on scientific applications. It'll be a user facility where people come to campus for weeks or months at a time and they get under the hood. They actually get under the hood and open things up, and things are more flexible. And this is actually important, even for the future of IonQ and other quantum computing businesses. This is because it may be a number of years before commercial quantum computer applications are able to pay the bills, but scientific quantum computers are already hitting their stride. The community is performing wonderful research not just in new devices but in running quantum computer programs on IBM's cloud systems, on our cloud systems. And it may be that these scientific applications will allow us to continue the momentum until we get to commercial value. I think the scientific agencies really understand their role here. The US National Quantum Initiative that became law a few years ago was largely based on exactly this, that the government can help transform university research into commercial activity. This field in particular has challenges because, as I said before, at universities we don't build things for general use. But the companies that do so have very few seasoned engineers who are comfortable with or even knowledgeable about fundamental quantum physics. Why would they be? So, there's a workforce issue in getting people to work at these companies. And that was really, I think, what bent Congress's ear to fund the NQI. They really understood that, and of course they're thinking about China and they're thinking about our competitive advantage economically. And then there's the national security angle. Quantum computing has all the right ingredients for Congress to act deliberately, decisively, and in a bipartisan fashion. DOE had long avoided this field until about five years ago. Now they're really going strong, establishing several large centers anchored at their labs. NSF has always been in the field on the research side, but they are now flexing a technology angle that will expand their efforts in quantum and other areas. NIST has also always been in the field, as we've discussed. And the intelligence and defense agencies are going at it, as well. I forgot where we were on that question. Oh, yeah. We were talking about IonQ and the university involvement. Well, it's somewhat of a blur, and the federal science and tech agencies are interested in all of it.
It's a great time in the field right now.
It's the best of both worlds, it sounds like.
Yes, might be.
You get to have all of the flexibility in the world of business but retain that vigor in an intellectual academic environment.
Yep.
Chris, for the last part of our talk, I want to ask one broadly retrospective question about your career, and then invariably we're going to look to the future, and I have one specific question on that. I picked up on 1994 for you as being such a formative year in terms of lightbulbs going off; this is when things really came together. Obviously, you can only make that analysis of your intellectual trajectory retrospectively. It all sort of happens in real time; you can only think about it in retrospect. But I wonder, as you think back to 1994, what are some of the things that stick out for you that should always remain relevant, no matter what it is you're working on, no matter who it is you're working with, so that you remain alive to, like, the new frontiers in all of the things that you're involved in?
Hmm. Well, I've always thought one thing attractive about research is related to a human vice, like playing the lottery. You know, you have a scratch lottery ticket. It's just fun having the blank ticket, not yet scratched. We must be wired to love the sense of risk or something.
[laugh]
Research is like that. You mostly lose, or don’t realize the grandiose goals of your research. [laugh] But, boy, you never know where it's going to come from. You never know. In my case, before '94 my head was kind of down, I was a little down and out, had just had a bad year, wasn't sure what I was going to do next. And it just comes out of nowhere, the things we were working on happened to be useful for something completely different. In my field I also saw the combination of disparate fields which was super fun – I recall having to pick up some texts on number theory to understand Shor’s factoring algorithm. People like Peter Shor and David Deutsch, another luminary in quantum information theory, knew some quantum and some number theory or cryptography, and it was the magic combination. Having great knowledge in different fields always seems to help. But you never know where it's going to come from.
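[Editorial aside, not part of the interview: the number-theory connection Monroe mentions is that Shor's algorithm reduces factoring to period finding, and the period-finding step is the only part that needs a quantum computer. A minimal, purely classical sketch under that assumption, with the period brute-forced for tiny numbers, might look like this:]

```python
# Illustrative sketch only (not IonQ or NIST code): the classical number-theory
# wrapper around Shor's algorithm. A quantum computer would replace find_period();
# here it is brute-forced, which only works for tiny N.
from math import gcd

def find_period(a, N):
    """Return the order r of a modulo N, i.e. the smallest r with a**r = 1 (mod N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a=2):
    """Try to split N using the period of a mod N; return a nontrivial factor or None."""
    g = gcd(a, N)
    if g != 1:
        return g                  # lucky guess: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1:
        return None               # odd period: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root of 1: retry with a different a
    return gcd(y - 1, N)

print(shor_factor(15))  # prints 3
print(shor_factor(21))  # prints 7
```

[The quantum speedup lives entirely in find_period; everything else is the kind of classical arithmetic found in the number-theory texts Monroe describes picking up.]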
Yeah.
It's a lottery ticket.
Yep.
And we were very lucky. Dave Wineland and I, for ten years, from 1995 to 2005 or so, we were ten years ahead of the entire world.
Yeah.
Everything we did was original and interesting. We had all those postdocs, which was a tricky thing to manage. You've got five postdocs on a single project, all hungry, they all need to get famous in two years, get on papers. I always have this picture of a coffee percolator. The postdoc comes up, writes a paper, goes back down, the next one, the next one. And those few years, it was just amazing.
Chris, last question, looking to the future. Let's bring it back to physics. So, one of the big through-lines that I've learned this past year in the pandemic is the increasing reliance on computer simulations to do experimental research in physics across the board. So, the question for you, as we look to the potential applications of quantum computing in physics, and it's possibly a heretical proposition because it sort of upends the whole basis of deductive logic in physics. And yet, and yet, some of the most enduring mysteries in physics for which no good answers are either currently available or seem anywhere close to being on the horizon: things like, what's beyond the standard model? How do we integrate the forces? What might exist at higher energies beyond the Higgs, absent an SSC? After all of the excitement in inflation, what's really going on at T=0, right? Without even a notion of, or the bureaucratic ability to build, experiments that might yield answers to these questions, in what way might quantum computers allow for simulations of experiments that might get us something close to an experimental verification of theory?
Oh, that's a very deep question. But, like I said just now, quantum computing—it's not just the next generation of high-performance computer that will be installed at Oak Ridge, it's so radically different at the foundational level that we're still coming to grips with what it can and cannot do. You mentioned high-energy physics. For computing the structure of the proton, in nuclear physics and quantum chromodynamics, we use something called lattice gauge theory, which has an obscene scaling problem. It gets double-exponentially hard. But there are approaches to using quantum computers to do that type of problem. This direction is really still in its infant stage, and we need much better machines to do something useful; but it's terribly exciting. What's cool about quantum computer-based QCD is that it's a totally different approach, and I think everybody should want to play the lottery here. We should try as many models as possible, even at the small scale on these early quantum computers, and I find it hard to believe that not more than a few of them will eventually score. I think the systems we're building have to get bigger and less error-prone, but it just seems odd to me that there would be nothing there, because the quantum computer is so different at its core. We tend to ignore problems we can't solve, because our models of computation don't match the problem very well. Some people say, well, if it's a quantum problem, then forcing your digital classical computer to execute that problem is like having to learn a foreign language. This is sort of how Richard Feynman considered the problem of quantum simulation. He advocated that we use a quantum system itself. It speaks the same language, in a sense. So, I do think that, apart from the commercial activities in quantum, there will be scientific breakthroughs that allow us to use quantum computers to solve some of these really hard problems. I don't know when. I thought you were going to ask me to predict when.
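[Editorial aside, not part of the interview: one common way to make Feynman's argument concrete is to note that storing the full state of n qubits on a classical machine requires 2^n complex amplitudes, so the memory doubles with every qubit added. A rough back-of-the-envelope sketch, assuming 16 bytes per double-precision complex amplitude:]

```python
# Rough illustration (assumption: one complex amplitude stored as 16 bytes,
# i.e. double precision): classical memory needed for a full n-qubit state vector.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold all 2**n complex amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 50, 80):
    print(f"{n} qubits: ~{state_vector_bytes(n) / 1e9:.3g} GB")
# ~17.2 GB at 30 qubits, ~1.8e+07 GB at 50, ~1.93e+16 GB at 80 -- the doubling per
# qubit is one way to see the wall Feynman pointed to when he proposed simulating
# quantum systems with quantum hardware instead.
```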
No. I knew enough not to ask you to predict that.
[laugh]
It's more of an intellectual question about the ability—because, as you emphasized, quantum computing really is going to change the nature of inquiry in a way that, as powerful as classical computers get, they're still going to be incremental in what they're able to do. And as you're saying, quantum computers are really going to just change the game itself. So on that basis, given that it looks like we're never going to build an SSC, and given that the LHC is going to continue to find good stuff but is limited by its physical constraints, if a quantum computing simulation finds supersymmetry at some high enough energy, will quantum computing change the game sufficiently that theorists will accept what the simulation has to say without ever worrying about whether that needs to be verified whenever we get around to building the SSC? That's my question.
Ah. That's a very deep question. To me, it gets to the definition of science. And as such, I think if the quantum computer is a simulator, it's still not doing the experiment.
Yeah. Yeah.
You might argue that, well...
Chris, that's what I wanted to pin you down on. That's the answer I wanted to—not that I wanted to hear, but I wanted to hear how you felt about that.
So, I guess I would answer that one in the negative, that a scientist still needs to observe it somehow in a real system. Now, the quantum computer might get closer to the real system behavior. You know what I mean? Instead of having discrete transistors, now we have quantum systems, and we're tuning every little piece of the Hamiltonian as best we can, but the simulation is only as good as the model – it's still the simulator, it's not the real system.
Yeah.
So, I think there's still going to be that divide. So, yeah, unfortunately, I don't think it's going to change science.
That's, of course, assuming that it's still humans that are making these decisions and not AI.
[laugh]
They might not care so much.
Yeah, that's right. That's right. [laugh] Oh, boy.
Chris, it's been so fun spending this time with you. I'm so glad we were able to do this. Thank you so much.
Oh, thank you. Thanks for your patience.