This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Jay Gambetta by Will Thomas on September 11, 2024,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/48493
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Jay Gambetta, Vice President for Quantum Computing at IBM. Gambetta recalls his childhood in Queensland, Australia, and his early inclinations toward science and math. He discusses his doctoral work at Griffith University under Howard Wiseman, where his thesis dealt with the possibility of distinguishing different interpretations of quantum mechanics. Gambetta describes his introduction to quantum computing and his move to Yale. He reflects on his time at the Institute for Quantum Computing at the University of Waterloo in Canada, as well as the formation of the quantum team at IBM, which he joined in 2011. Gambetta provides an overview of the history of IBM’s interest in quantum computing, the continuing growth of their efforts, and the impact of the leadership of IBM Quantum under Dario Gil, Senior Vice President and Director of Research at IBM. Other topics include Gambetta and the IBM team's work to put a quantum computer on the cloud, the development of Qiskit, IBM’s quantum software platform, and his thoughts on benchmarking quantum computing capabilities. The interview concludes with Gambetta’s thoughts on the impact of the National Quantum Initiative and the changing needs of education in quantum computing.
This is Will Thomas with the American Institute of Physics. It’s Wednesday, September 11th, 2024, and I’m at the Quantum World Congress in Tysons Corner, Virginia, and I’m with Jay Gambetta from IBM. Thanks very much for talking with us today. Our goal today is to have about an hour-long talk about your career and also, through your career, to understand a little bit about the evolution in the field of quantum computing and quantum science and technology more generally. First of all, for the record, can you say what your current title is at IBM, and what that entails?
Sure. My title is the VP for Quantum Computing, and I’m responsible for IBM Quantum, so everything that we do in quantum computing comes into me.
Terrific. Let’s go back to much earlier in your life. I’d be interested to know how you got interested in science and technology, your experience with that before ultimately you went off to college and grad school.
[laughs] That’s going back.
It is a traditional question.
I grew up in Australia, in Queensland. I was into tinkering. I didn’t know what a scientist was, so I had no objective to become a scientist. But I was into building things. I even spent my holidays working as a mechanic, fixing cars and things like this. I grew up surfing. I always found an interest in the weather and weather patterns and things like this. Why do waves break? How do they break? I found myself watching the swell on the horizon and predicting where the best waves would be. At school, I was always naturally good at math. So, when I put all these together, I ended up at university. I went to university to really do more of an engineering degree, well, it was science and engineering. It was laser science because, back then, lasers sounded cool. [laughs]
When about are we?
That would’ve been 1995. At university I discovered that the courses I found the most difficult were quantum mechanics, and these were the ones I liked best. I got interested in them because they were very hard to do. I had a few lecturers who took the time to explain it, but I still did not fully understand why. So, I said, why don’t I do my PhD on interpretations of quantum mechanics? So, not quantum computing; it was more like understanding the different interpretations of quantum mechanics—the Copenhagen, the modal, the hidden-variable—and which one explains nature.
That’s a very interesting choice of a dissertation to go with. Say more about it, if you could.
It was trying to understand if there was a better interpretation of quantum mechanics when you considered a quantum system interacting with its environment—an open quantum system—and whether this could lead you to predicting certain things, like which interpretation of quantum mechanics was correct or more realistic. Technically, the math was something we called stochastic Schrödinger equations: a way in which we updated the Schrödinger equation to be non-linear, representing the back action from the measurement. These equations, when you average over them, give the average dynamics—they average to the standard master equations. The question was: does the unravelling have physical meaning? Is this the correct interpretation of quantum mechanics, or is one equation more likely than the others? In the case where the system-bath interaction was Markovian, it was already answered. The family of stochastic Schrödinger equations were all physical and could be explained by the standard interpretation of quantum mechanics; they just represented different ways the bath was measured. For my PhD the question was: if you gave the bath some memory—what we call in physics non-Markovian—was there a single non-Markovian stochastic Schrödinger equation with just one interpretation, or could we generalize non-Markovian stochastic Schrödinger equations? The output of my PhD was inconclusive. You could use all interpretations to describe this equation, and it could be extended to a family of non-Markovian stochastic Schrödinger equations; it was just the same math. That’s when I decided that, rather than keep doing foundations, I would move to quantum computing and see if we could build a quantum computer to test if quantum mechanics worked—
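As background to the math Gambetta describes, a standard textbook form (not quoted from the interview): a stochastic Schrödinger equation "unravels" a master equation in the sense that averaging the pure-state trajectories over the noise recovers the density-matrix evolution. In the Markovian case the average dynamics is the Lindblad master equation:

```latex
% Averaging trajectories recovers the density matrix:
%   \rho(t) = \mathbb{E}\left[\, |\psi(t)\rangle\langle\psi(t)| \,\right]
% and the average (Markovian) dynamics is the Lindblad master equation:
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right)
```

Here the $L_k$ are the system-bath coupling operators; different unravellings (diffusive, jump) correspond to different ways the bath is measured, which is the point made above about the Markovian case.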
Were you working with somebody who was also interested in foundations?
Yeah, I was working with Howard Wiseman in Australia. Then I decided that after the PhD, if you want to understand which interpretation is correct… [laughs] and quantum computing, I was reading on the side… I convinced myself the best way to understand what interpretation was the correct one is to answer the question: could we build a quantum computer? Then I found myself going down the line of: “All right, I want to build it. If you want to understand nature, let’s see if we can build a quantum computer.” I had experience with quantum optics but I did not believe this to be the best path and as I started to think about this I wanted to move to solid state and that’s when I came to Yale, where I met Steve Girvin.
Did you have a sense that quantum computation… that you were really getting in just as the field was beginning to expand and develop pretty rapidly?
Actually, I think ’95 or ’94, with Shor’s algorithm—I forget exactly—was when quantum computing was established. I finished my PhD in 2004, so quantum computing was already established, but it was early demonstrations. At the end of my PhD, I said, “All right, I’m going to link these.” I really wanted to understand what it meant to understand nature. What did these equations say? That’s when I decided that I really wanted to get into “can we build it?” Because before I started my PhD on interpretations of quantum mechanics, I was doing my degree more in engineering topics. I was programming lasers to shoot at atoms and measure cross sections. I had a pretty good experimental background, but then I went completely theory in my PhD. Then I found myself thinking, “Well, if we’re going to build it, I want to learn about a different type of computing.” I remember Hideo Mabuchi—I think he was at Caltech then—had worked with my supervisor on quantum optics feedback experiments, but, since I did not want to do quantum optics, he introduced me to Steve Girvin. Steve had not done much on quantum optics, but he was realizing that a lot of the math developed in quantum optics could be used to explain their system. I’d done nothing in solid state physics, so why not try and see if I could learn about these systems? Superconducting qubits—I’d read a couple of papers, but I knew nothing about them when I started. Then I went to Yale. I worked with Steve Girvin, Rob Schoelkopf, and Michel Devoret, and various other postdocs and PhD students like Andreas Wallraff, Andrew Houck, David Schuster, Alex Blais, Jerry Chow who’s now at IBM, Lev Bishop who’s at IBM, and Blake Johnson who’s at IBM.
That’s when I learned a lot about the superconducting system. Because Australia had a lot of knowledge on quantum optics, and I’d used a lot of quantum optics equations to develop the ideas in my PhD, it was really easy for me to apply all my knowledge to these systems. That’s when we started doing things like understanding circuit QED— how a superconducting qubit interacts with a microwave cavity. It’s also how we started to understand how we can build new qubits, so we started designing the qubit that everyone is using now, the transmon qubit. It was designed to be exponentially insensitive to charge noise but still have strong coupling and strong interactions. It was the perfect testbed for quantum optics with strong interactions. This allowed us to study single qubits, two-qubit gates, microwave photons, and more. One important paper showed the quantum nature of the microwave field where you could actually see number splitting. We also showed that we could create single photons by coupling the transmon to a cavity. We showed that we could couple two qubits together using the cavity as a quantum bus and showed early quantum algorithms. In the period from 2004 to 2007, a lot of things came out that showed these were a great testbed for quantum computing.
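The circuit QED physics described here is conventionally captured by the Jaynes–Cummings Hamiltonian (standard textbook form, included for orientation rather than taken from the interview):

```latex
% Qubit of frequency \omega_q coupled to a cavity mode \omega_c with strength g:
H = \hbar\omega_c\, a^\dagger a + \tfrac{1}{2}\hbar\omega_q\,\sigma_z
    + \hbar g \left( a^\dagger \sigma^- + a\,\sigma^+ \right)
% In the dispersive regime, |\Delta| = |\omega_q - \omega_c| \gg g,
% this reduces to
H_{\text{disp}} \approx \hbar\left(\omega_c + \chi\,\sigma_z\right) a^\dagger a
    + \tfrac{1}{2}\hbar\tilde{\omega}_q\,\sigma_z, \qquad \chi = g^2/\Delta
```

The qubit transition frequency shifts by $2\chi$ per cavity photon, which is the origin of the photon-number splitting mentioned above.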
Can I ask, what alternatives would there have been to a superconducting qubit at that time?
The superconducting qubit was actually the newest technology. Ions were already established, with Wineland and Blatt showing some great multiqubit experiments; linear optics had shown CNOT gates and small quantum computing demonstrations; spins had shown spin couplings and things like that for a while. Superconducting qubits were the new kid on the block. So, no one thought they had a chance. Nakamura, and Michel Devoret when he was in Paris, had shown the existence of two levels, and Yale had demonstrated some initial Cooper-pair box coupling to a cavity, but no one had really shown this working as a quantum computing system. That’s when… in that period when I was at Yale, it was a fantastic environment—everything started to work; what you predicted would happen could be measured experimentally; we were starting to be able to do it. I think making the transmon with the team there was one of the biggest inventions for the field. It created a qubit architecture that worked, and we could scale it. After that, because I wanted to build a quantum computer, I wanted to get more into how we benchmark it. How do we prove something’s quantum? I found myself then going to Waterloo in Canada, because they’d put the Institute for Quantum Computing together. That institute had brought in a lot of people that were starting to think beyond just single-qubit and two-qubit gates: what are some early quantum circuits that could be demonstrated? There I focused on benchmarking, and on how to come up with ways to reliably prove that the thing you were building was using quantum circuits. I continued to work with the Yale team because they were doing experiments, but then I also worked with the theoretical quantum information scientists at Waterloo. So, it gave me a funnel into quantum information theory whilst continuing the relationship I had with the circuit QED team at Yale.
How many people had you been working with at Waterloo? I’m trying to get a sense of the growth of the field in the various centers.
Waterloo then—there were quite a few people. They’d put an institute together. There were at least 10 or so full-time professors and many students and postdocs. It was another great environment. They’d created the Institute for Quantum Computing with funding from the CEO of RIM and the federal government. They created both the Institute for Quantum Computing and the Perimeter Institute. They had a lot of people come to that area that were talking about quantum information science. So, anywhere you went, you could talk to someone. Ray Laflamme was the director, and I used to talk to him a lot. He really was a thought leader with a vision; he really wanted that institute focused on quantum computing before it was even popular. Compared to where we are today, it was still in the domain of research, but it created an environment where many people that were interested in research could come together to build a quantum computer.
Did it lead pretty naturally from one place to another—Yale and then on to Waterloo—or did you consider other options? I mean, they had quite a bit going on out in California.
I really wanted to work with people that understood quantum error correction and benchmarking, and Waterloo was the best option. So, I didn’t actually consider anywhere else. I went straight there. I loved my time at Waterloo. A bit cold [laughs], but it was great. After Waterloo, we’d done enough research and demonstrations with Yale; we’d shown robust benchmarks of quantum circuits using randomized benchmarking. We’d come up with optimal gates, optimal shapes, so it was really starting to work. I had a relationship with Jerry Chow, who had just gone to IBM to join Matthias Steffen’s group. Matthias was from John Martinis’s group, which was in Santa Barbara. Jerry convinced me that to continue the goal of building a quantum computer I should join IBM. IBM was not known for experiments at that time, but the team Matthias had put together was formed out of the combination of the Santa Barbara team, the Yale team, and a small team under Roger Koch that worked on flux qubits. I started there in 2011. It was all new, but IBM had a strong theory team—IBM was always doing theory; it had greats like Charlie Bennett, John Smolin, and David DiVincenzo. I remember reading Charlie’s papers way back when I was in graduate school. They had a strong theory team, but the experimental team was just getting formed.
I talked to Charlie Bennett and Gilles Brassard yesterday, actually. It was a good conversation. But I do get the sense that what IBM did later is very, very different from what came before.
Yeah. There were two teams at IBM at the start. There was the theory team, and there was a team that was working on flux qubits for detectors. Charlie Bennett was part of David DiVincenzo’s team, and Roger Koch was using the flux detectors, doing flux qubits. Matthias Steffen had the vision, “we’re going to do quantum computing,” and set up the current team. As I said, he came from Santa Barbara, but he was already at IBM. He convinced Jerry, myself, Antonio Córcoles, and even Chad Rigetti—probably another name you’ve seen; he went on to found Rigetti Computing—to join his new team. It was a small team with some very good engineers like Mary Beth Rothwell and George Keefe and Jim Rozen. We had a few scientists and a few engineers, and we said, all right, let’s build some qubits. So, we started out building qubits.
Can I ask you, do you know a little bit about the backstory, about how IBM decided to develop this area? It’s become a little bit of an interest of mine because—completely by coincidence—I’ve done oral histories on a much earlier period with Ralph Gomory and, just a few weeks ago, with Jim McGroddy from the ’90s. McGroddy in particular was in that crisis era of the early ’90s, and was very focused on how can IBM Research help the immediate business and that sort of thing. Now, you’re about 10, 15 years removed from that, and so it must be very different, but I don’t know much about it.
IBM has always kept, at its heart, fundamental research on computing, and this has never disappeared. The reason Charlie Bennett and Rolf Landauer started investigating… well, one of the reasons Charlie got into quantum—and you can talk to Charlie to get a better sense—is they were actually studying reversible classical computing. Then quantum emerged as the combination of reversible computing and quantum information science. What they were doing was asking fundamental questions about computing. Then, like always, the technology started to work, and Matthias had shown that he could make a good qubit. I would say 2012 was an important year for IBM. If you went to the March Meeting that year, everyone expected new results from Yale and new results from Santa Barbara, and then IBM emerged with new results: the longest coherence times and the lowest two-qubit gate error rates that superconducting qubits had at that time. It was a fun time because we emerged out of nowhere as, “actually, we can make really, really good qubits.” That was 2012. Since then, we have just been growing, and growing, and growing. From those results in 2012 up to 2016, it was mainly research on gates and small multiqubit demonstrations. Around 2014, I remember—I don’t know who the theorists were, but a few of them would come to me and say, “Can you test our latest paper on your devices?” This happened a few times: a theorist had some novel way of doing a new gate, or a novel way of doing a measurement, and asked if we could actually test it. Then myself, Jerry Chow, and Antonio Córcoles came together and asked, why can’t we actually put a quantum computer on the cloud? [laughs]
I had never done anything like that. I remember telling upper management that we could put a quantum computer on the cloud. In hindsight, I didn’t really know what we were going to do. But I said, all right, everyone wants us to test things. Why can’t we put our quantum computers on the cloud? Then for about a year, we worked on making the calibrations more reliable. That was actually probably the hardest part, because, before we put the quantum computer on the cloud, you had an idea, you calibrated the system, you did your idea and measured the data, and then sometimes repeated it if it did not work. How did you actually get to a system that is always calibrated, always stable, and ready to try new ideas? That required quite a lot of work. [laughs] I also had to learn a lot about cloud security, and this is when I met Ismael Faro, who had joined IBM for something else. He was an expert cloud developer and security person. We quickly said, all right, let’s put this quantum computer on the cloud. I still remember the date because it was Star Wars Day, May the 4th, 2016, when we went live. We released the Quantum Experience, which was five qubits, on the cloud. It went live at midnight. We had the terminal there, and we were watching the jobs come in. We’re like, is a job going to come? Is anyone going to use it? Is it going to break? We were all on a conference call, watching the terminal. The first one came, and ran, and we’re like, “Phew, okay, that worked.” [laughs] Then it went from a few jobs to a few thousand. The number of users just exponentially increased. The first interface was a drag-and-drop quantum circuit: users were dragging and dropping gates to make small quantum circuits.
Do you have the sense that these were people who wanted to understand how to be users of quantum computing, or did they have actual problems that they wanted to work out?
As I said, the real inspiration that got me thinking was that, because of my history before this, I was able to exist in theory land and experimental land at the same time. I took a unique path that gave me the ability to understand experiments deeply, and, because I jumped into interpretations, I could understand the theory quite deeply. So, I was always able to speak both languages. Many theorists would come to me and say, “Is this possible to run on your experiment?” Or, “I’ve got a new pulse shape. Can you run this?” Or, “I’ve got this set of sequences. I want to look at these correlators, or I want to look at this type of inequality.” Before the Quantum Experience, the only way a lot of scientists could test their paper would be to go find an experimental team, convince them how to do it—well, sorry, convince them it was a good idea. Then they would work together on how to do it. Then the experimentalists would take the data, and then they would analyze the data together and publish the paper. This just felt too slow. So, I kind of knew people were going to use our cloud quantum computer—I thought it would be a small community of people coming up with new ideas to test quantum information science on these devices. But because the community grew so fast, and they had to struggle with drag-and-drop circuit design, we knew we had to create a programming environment; otherwise you can’t use this beyond small demonstrations. I knew how to program in C++, but I never knew how to program in Python. If you go back and look at the old code, you’ll see why. [laughs] We said, all right, we’ve got to create Qiskit, and so we’re going to create an open-source project to program quantum computers. In 2017, we released the first version. I wouldn’t send a computer scientist to learn from those early versions [laughs], but they worked.
We chose Python because Python was becoming the scientific language—you could just see it, looking at where AI was going with TensorFlow, and where SciPy and NumPy were going—it was becoming the scientific library. I think it was just becoming the easiest way to prototype and do science. So, we chose Python for quantum computing. Qiskit has changed a lot since the early days, as we have also learnt a lot. My talk today was about Qiskit 1.0 and the fact that it is now stable with good benchmarks. It took a long time to go from Qiskit 0.x to Qiskit 1.x—2017 until this year, 2024—so Qiskit kind of had, what’s that, seven years of growing up. [laughs]
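To give a flavor of the small circuits early users composed (a Hadamard followed by a CNOT, producing a Bell state), here is a toy statevector sketch in plain Python. It is illustrative only—it does not use or mimic Qiskit's actual API, and the function names are ours:

```python
import math

def apply_h(state, qubit):
    """Apply a Hadamard gate to `qubit` (little-endian) of a statevector."""
    s = 1 / math.sqrt(2)
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << qubit)          # partner index with `qubit` flipped
        if (i >> qubit) & 1:          # |1> component: (|0> - |1>)/sqrt(2)
            out[i] -= s * amp
        else:                         # |0> component: (|0> + |1>)/sqrt(2)
            out[i] += s * amp
        out[j] += s * amp
    return out

def apply_cx(state, control, target):
    """Apply a CNOT: flip `target` wherever `control` is 1."""
    out = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            out[i] = state[i ^ (1 << target)]
    return out

# Build the Bell state (|00> + |11>)/sqrt(2): H on qubit 0, then CNOT(0, 1).
state = [1 + 0j, 0j, 0j, 0j]          # start in |00>
state = apply_h(state, 0)
state = apply_cx(state, 0, 1)
probs = [abs(a) ** 2 for a in state]  # -> [0.5, 0.0, 0.0, 0.5]
```

Real simulators and Qiskit itself use far more efficient representations; this only shows the kind of gate-by-gate circuit the drag-and-drop interface let users build.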
Were you bringing in people who were more on the computer science side?
Well, in the first version of Qiskit, it was a very small team. It was still myself, Ismael, Andrew Cross, and a couple of other people who wrote the code. But once Qiskit launched, that’s when we started to see that actually these quantum computers can be used for real research work. They can be used, and scientists can do things. That’s when we started growing the team. Now, I am proud of the fact that most of the code in Qiskit is not my code. Honestly, I think we’ve hired some brilliant people. The current people doing Qiskit, like Matthew Treinish and Jake Lishman and the large set of developers around them, they’ve really made it good. It’s fast. It’s performant. It’s really simple to use. This is why today, I think, over three trillion circuits have been run on our hardware.
Another dimension is that the system R&D has become in large part engineering. Before 2019, the quantum computers were like laboratories. In early photos of the Quantum Experience, you can see it’s just in one of our research labs. We said, “Well, if we’re going to take the quantum computer and make it a system that we can install, have it be reliable, have good uptime—how do we make the IBM Quantum System One?” That was 2019. We said, all right, it’s time to go from a lab to a system. Maybe the physicist in me didn’t appreciate it early on, but the engineering side of me thinks this was another major milestone when we look back. We had to make sure it fits within a space; it’s reliable; it’s backed up; it doesn’t have to be touched; it just works. That was a big deal to us. We succeeded with the System One.
Now we’ve installed a few around the world, and we have many in our data center in Poughkeepsie. With both a system platform and a programming platform, we also decided it was time to articulate a roadmap of how we would proceed in increasing the QPUs and the capabilities of Qiskit. We said, “Look, if we’re going to build a large-scale quantum computer”—and we’ve always had the vision to have an error-corrected quantum computer—“we need to solve different problems.” First, in the hardware, we need to solve the packaging problem. IBM’s got a history of solving packaging in classical computing. How do we solve packaging in quantum computing? So, with the first few chips that we made, we focused on getting rid of things like all these wire bonds. If you look at old photos of chips, it’s all in 2D. There are wire bonds everywhere, and it just does not look like a scalable technology. Basically, the chips were silicon with superconducting metal on top, placed inside a printed circuit board, with wire bonds everywhere to connect the circuit board to the chip and remove all the grounding loops. You look at it, and you’re like, that’s a nightmare. We asked: how do we build the qubits on one chip, and the readouts on another, and connect them together? How do you get bump-bonds working? How do we get multi-level wiring? How do we get through-substrate vias? How do we solve all the packaging problems?
That’s what we laid out in our roadmap. From 2019 to 2023 we focused on solving packaging, solving scaling. We were able to execute on all of this. Now, as we go forward, we are focusing on coherence, gates, and scaling the number of operations in the quantum circuits that can be executed on our hardware, with error correction coming in 2029. At the same time as the hardware, we believe the future of computing is not just quantum but quantum and classical, and we are also working on software to make this vision, which I call quantum-centric supercomputing, become real. All this time, the IBM team has just grown in size because we’ve continued to hit everything we’ve said we would.
In terms of the growth and the increasing prominence within the company of the quantum efforts, can you tell me a little bit about where Darío Gil fits into this?
Darío is my boss. He’s the director of research. He is a big reason why we have the quantum team and why we were able to put the quantum computer on the cloud. I remember the first time I met him, he was quick to realize—oh, you’re trying to put a quantum computer on the cloud; are you sure you can do that? He was the one that actually brought the developers like Ismael to the team. He was the one that said, “I’m going to bring you the best set of skills to add to the quantum experts.” We would not have got it on the cloud without Darío. After we achieved that, he and Arvind, our CEO, have always been and continue to be the best supporters of us building this technology. Darío continues to be a champion. I don’t think you would’ve had IBM Quantum without Darío. I’m sure you’ve seen him talk at some point. You can talk to him. He’s passionate that the future of computing—and I agree with him—is qubits plus bits plus neurons, or essentially QPUs, CPUs, and AI accelerators.
If I can go back to the period before you were working to put it on the cloud, and just the development of the hardware, the computer itself, were you following any sort of milestone-based approach? I was talking to Sergio Boixo yesterday, and so he was telling me a little bit about how Google did that in that period, and how they were benchmarking and so forth.
Anytime you do science, you have milestones. I actually have to give credit for a lot of the initial development—and still today—to the US government, through its various programs. One of the programs that we were working with at that time, which was ahead of its time, was MQCO (Multi-Qubit Coherent Operations), a program from IARPA. It had us, it had the ion performers like Chris Monroe and his team, it had Rainer Blatt’s team, and I believe it might’ve even had a neutral atoms team. They were pushing directions, and saying, “Show that you can create qubits that couple and do multi-qubit operations.”
That framework, the vision of those programs, really set us good milestones and the behavior we continue to use today with OKRs. They pushed us to the limit of how to do large-scale multi-qubit demonstrations at a fast pace, with milestones that are very hard to hit. The culture of setting hard objectives at a fast pace is key to our success. We have a very agile culture that focuses on accelerating our cycles of learning. I believe that if you succeed in more than 70% of your objectives, then they were not good objectives.
Of course, one of the things I was talking about with Sergio yesterday was that Google had set quantum supremacy as one of their key milestones. I know that IBM has been critical of the concept, and was critical of the particular claims that were made in 2019. My understanding is that it’s about more than semantics; that there are principles in how you go about benchmarking. I’m wondering if you can just explain the thinking behind that.
We’re much more focused on engineering, solving engineering problems, and cycles of learning. If you’re going to build large-scale quantum computing, the rate at which you can make progress is much more important. The problem I have with claiming quantum advantage or quantum supremacy is more of a fundamental one: I don’t think of it as a flagpole or milestone that a builder should claim. It needs to be claimed by a domain expert. As someone that is building the platform or building the hardware, I want to show that I can build bigger, more reliable devices, and I want to put those devices in the hands of experts that know algorithms, and see if they can test those algorithms. To me, supremacy should be claimed by the quantum computational chemist or the optimization expert, because supremacy or advantage—I prefer the word “advantage”—is the idea that you can do a computational problem cheaper, more accurately, or faster using quantum than the way you do it today. The person that should be deciding that is the expert in all the other classical methods, and so we have not achieved that. To achieve that, we need to get more domain experts into quantum computing that do research with quantum, not research for quantum. I do think we’ll get quantum advantage over the next couple of years, but it won’t be IBM that claims quantum advantage; it will be an expert in their domain using quantum computers who says, this is now cheaper, faster, or more accurate. When you look at benchmarking quantum computers, benchmarking is a hard question. When you benchmark something, you need something to compare it to. This is the hard question—and this goes back to when I was at Waterloo—how do you know it’s quantum mechanical? If you are going to treat advantage as a benchmark, you’re going to say, am I beating every possible classical method? That’s almost impossible. You’ve got to test every possible classical method; it never ends.
This is why I don’t like advantage as a benchmark. A better benchmark is to ask, how do I compare against an exact circuit simulation? This is what led us to define the concept we call quantum utility. It’s a weaker concept than advantage. We’re not saying that it’s cheaper, better, or faster to use a quantum computer; we’re saying that we want to get to a point where we can run an accurate quantum circuit on a quantum computer, where that circuit is larger than you would have been able to simulate exactly on a classical computer. It’s a much weaker claim. This turns out to be possible to answer, and last year we showed that we could run an accurate quantum circuit on a quantum computer that is beyond the ability of classical computers to simulate exactly.
To me, what I hope this achieved was a signal to the domain experts to go use this as a scientific tool to demonstrate quantum advantage. Now this resource exists, and I hope that the chemists can use it to claim quantum advantage. This is why we’ve always had a partner focus. I don’t like marketing or hype. I like having engineering things that I can measure, and I like getting things in the hands of experts. I know how fast my knowledge of a field decays when I get outside of the areas I know. When we spend all our time doing something, we get very good at it. I go a little bit across and I’m not so good… I’m not a quantum chemist. I can read a textbook and understand it. But to get to the expertise that the quantum chemists have, that’s a lot of time, a lot of education, a lot of investment. So, I’ve always preferred to ask, how do we actually put our quantum computers in the hands of the experts? From our perspective, we’re at the point of utility. Now it’s the goal of the scientists who know their domains to demonstrate advantage. We will need to work with them, because we are the best at understanding our hardware and quantum circuit complexity. I do think there’ll be many examples of quantum advantage. Some of them will stick. Some of them will fail. But, ultimately, it will be decided by the users of this tool.
That was a very thoughtful answer. Thank you. Going back to the federal aspect of this, of course, we have the National Quantum Initiative that came along in 2018. I’m wondering if I can get your perspective on what changes have come as a result of that. Of course, it’s mainly in what federal agencies can do, and you’re on the industry side. How does that affect the field? How does it affect you?
The National Quantum Initiative is great. I think everyone can probably find flaws with different things. But if you look at it collectively, the DOE getting involved in quantum computing, and quantum becoming part of their mission, is only good for the field. Coming back to my utility statement: some of the best high-energy physics scientists, some of the best materials scientists, the people who use the supercomputers… We have built the biggest supercomputers in the world, and the best people who know how to use them are part of the DOE. If you really are going to advance science with quantum computing, and we all hope it is a tool to advance science, the creation of the National Quantum Initiative and the centers, the centers driving the science and the centers driving the workforce, is the only way to do it. I forget exactly how much usage our systems get, but the DOE uses them a lot. They’re exploring quantum algorithms. That’s what they should do. We’re working with C2QA at Brookhaven and QSC at Oak Ridge, and we just joined SQMS at Fermilab. At Fermilab they are doing some really nice stuff with pushing coherence and cryogenics and thinking about larger systems, things we will need to execute our roadmap. They’re doing some fundamental science that, if it works, is going to impact how we build our larger systems.
Even the Q-NEXT center and communication. Everyone says, at a high level, there are three pillars of quantum: sensing, communication, and computing. I actually think that’s wrong. I think there’s quantum information science, and if we’re going to build a large quantum computer, I’m going to need to connect those quantum computers together. So, I’m going to need to build a quantum intranet. At some point, it’s probably going to be more efficient to optically connect them. Then we must do a thing we call transduction: we have to convert the microwave photons to optical photons and get them out of the fridge. What I see Q-NEXT doing is fundamental science to make this vision happen. You can envision in the future a quantum network of quantum computers. Even from the sensing side, I can see a future where we can inject quantum signals directly into the quantum computer. If you can do things like this, blind quantum computing becomes possible, and from a business side I can now offer cloud computing where even the owner of the quantum computer doesn’t know what the user is computing. I could also imagine that if you can inject a quantum signal directly into the quantum computer, we could use quantum information science to actually get a better SNR, so all of these will come together.
Collectively, we need to do research, fundamental research. I would like to see more algorithm research in the centers, as I want to see quantum computers in the hands of more experts. If I draw an analogy to classical computing, numerical methods were fundamental for us to make progress in heuristic algorithms for optimization. The field of AI, which we’re seeing explode right now, is built on numerical methods. We need rigorous numerical methods and algorithm discovery happening on quantum computers. Our quantum computers today are big enough for this to start, and they are only going to get better. If we can get more researchers coming up with algorithms, some of them will claim quantum advantage. Let that debate happen. It’s actually a healthy scientific debate, and the National Quantum Initiative is one of the vehicles that allows us to fund that type of research.
From your perspective, looking at the workforce, it seems to me that when you were starting out, a lot of people were learning about this as researchers, as graduate students. To what degree do you think people are now being essentially trained in it such that they can go out into industry, perhaps as researchers?
This is a great question. If I look at my team—I have a PhD, but many of my team don’t. I don’t think we would have put the quantum computer on the cloud without expert developers like Ismael. Yes, we need the science training. I’m not saying it should go to zero, but we need developers. We need technicians. Some of our engineers who have come up with ways of improving the cycles of learning are trained in electrical engineering. We need all these skills. I would like to see the intuition of quantum computing shift from details of the qubits and gates to quantum circuits and algorithms. Today many quantum courses teach, this is a qubit, this is a Bloch sphere, and then expect the student to jump to algorithms. I don’t know many students learning classical algorithms who go back and learn AND and NAND gates. So how do we develop the intuition of using quantum computing as a tool for algorithm discovery? That content, I think, doesn’t exist. We have to rethink how we teach quantum computing: how do we develop both ways of teaching, teaching to the people who want to go deep into the device physics, and teaching to the people who want to look at this in the equation space and say, how do I map it? I don’t think we have enough material on the second, and this is holding us back as a field.
Well, it looks like we’ve got a hard stop coming up here. So, unless you have any final remarks, I think we’ll leave it at that.
No. I think the only remark I’d say is we shouldn’t think it’s over. [laughs] The best is still to come.
[END]