This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of William Gelbart by David Zierler on June 16, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with William Gelbart, Distinguished Professor of Chemistry and Biochemistry at UCLA. The interview begins with Gelbart discussing his research pertaining to COVID-19 and creating a vaccine. Gelbart then recounts his childhood in New York and describes growing up with a mathematician father. He takes us through his undergraduate years at Harvard where he majored in chemistry and physics. Gelbart speaks about his grad school experience at the University of Chicago and the trends in chemical physics at the time. He describes working under the mentorship of Stuart Rice, Karl Freed, and Joshua Jortner. Gelbart then details the factors that led him to a postdoctoral fellowship in Paris, followed by a postdoctoral position at UC Berkeley, at which time he transitioned into physical chemistry. Gelbart also discusses his subsequent move to UCLA and his collaborations with Avi Ben-Shaul. He explains his shift into biology and virus research, and his recent work on RNA gene expression and cancer vaccine research.
This is David Zierler, oral historian for the American Institute of Physics. It is June 16, 2021. I am delighted to be here with Professor William M. Gelbart. Bill, it’s great to see you. Thank you so much for joining me today.
Thank you, David.
To start, will you please tell me your title and institutional affiliation?
Distinguished Professor of Chemistry and Biochemistry at UCLA.
Now, is that a dual appointment between two departments?
No. I’m glad you asked that question. We were one of the first chemistry departments to rename ourselves to include biological science. This goes back to the 1980s. We started having biochemists with the birth of modern molecular biology and biochemistry several decades before that, in the 1940s, and by the mid-1960s UCLA established one of the first Molecular Biology Institutes in the country, if not the world, with our department’s Paul Boyer at its helm. By the early 1980s we realized that our canonical divisions of organic, inorganic, and physical chemistry needed to be joined by a biochemistry division and that we should rename ourselves the Department of Chemistry and Biochemistry. Forty years later we still have these four divisions, and still remain a single department. We have two separate graduate programs, one in biochemistry and one in chemistry, and the one in chemistry, until recently, had been organized according to specializations in organic, inorganic, and physical. Now, we have several more Ph.D. specializations, and several more undergraduate majors. But we have been a single department of chemistry and biochemistry since the 1980s, and the biochemistry division has for years been the biggest division, almost half of the department. Plus, we have many people in the chemistry divisions pursuing research in the life sciences, like myself. At the same time, we are exceptional in having several faculty who come from formal training in physics.
This is all to say that UCLA has been very forward-thinking in its recognition that there needs to be breaking down of barriers between the scientific disciplines.
That’s right. But we have the same issues to deal with. People are tribal, and they like to be surrounded by people who were trained the same way, who think the same way, who have, so to speak, the same credentials. We found that it wasn’t an easy challenge to get rid of the canonical divisions. So, what we did — the little bit of progress we’ve made towards getting rid of divisions — is to create more divisions [laughs]. I’m giving away our secret — it’s a reductio ad absurdum argument. We now have a large number of Ph.D. specializations, which include, still, organic chemistry, inorganic chemistry, and physical chemistry, but now enough additional specializations — materials chemistry, instrumentation, theoretical and computational chemistry, biophysics, chemical biology — so that by the time we’ve added a few more, people will realize this is ridiculous. Why don’t we just consider ourselves a department of molecular sciences, or whatever?
Bill, do you still retain an affiliation with the California NanoSystems Institute?
Yes, I do.
In what ways is that relevant for your broader research?
It’s very relevant for my broader research because, if nothing else, the California NanoSystems Institute is a set of all-important core facilities for people working in all the sciences, including engineering. But it’s also, similarly, a unifying intellectual resource by being an institute that spans all the disciplines on what we call “south campus.” UCLA doesn’t have a wall between north and south campus, but it happens that the southern end of campus is devoted to the sciences and the school of engineering and the school of medicine and the hospitals. So, the CNSI, the California NanoSystems Institute, provides crucial core facilities, and I would say intellectual resources, as a clearing-house for workshops and to promote exchanges of ideas in all these areas. On a more practical level, we all depend on CNSI to do our electron microscopy, advanced light microscopy, high-throughput screening, and to have clean rooms and BSL3 facilities, and so on.
Bill, for most people when I ask this question, “How has your science fared during the pandemic?” the premise of the question is, “How has it been not being around your colleagues?” But for you, I have the special opportunity to ask specifically, “When the pandemic hit, how much of a pivot did you need to make in order to make your research at that time relevant for COVID, or was that a very easy transition because of what you were doing in the world of mRNA to begin with?”
It’s a great question, and I think the simple answer is that we didn’t have to make many changes. I’ll tell you the story of how we transitioned, but another response to your question is to remark that it has always been a great time to be working on viruses – after all, ever since their discovery just over a hundred years ago they’ve been appreciated as a huge part of molecular biology and medicine. But there’s no better time to be working on viruses than during a pandemic, in terms of interest in them, support for work on them, and cooperation amongst all kinds of scientists who are working on one aspect or another of them. So, we had actually done some work on viral vaccines for a few years before the coronavirus pandemic, and we had also been working on cancer vaccines. And we were able to work on both because we have a general platform for delivering mRNA, independent of what it’s coding for, to targeted cells. Vaccines always involve trying to get viral antigens or cancer antigens to antigen-presenting cells. So, if you have a general platform for doing that, it doesn’t really matter what virus or what cancer you’re targeting.
And that’s why in March of 2020, when it was clear that essentially all labs would be shutting down except for the hospital complex on campus, it was easy to respond to one of the first calls for proposals to work on COVID-19 related research. It came from the UCLA AIDS Institute, and they were offering SEED grant funding for COVID-19 related work. And all they asked was that the proposers relate their project to HIV and AIDS research (in addition to COVID-19). And that’s an easy thing to do, because we’re dealing with RNA viruses, whether it’s HIV or coronavirus. One, HIV, is a retrovirus so it’s very complicated for that reason. The other, COVID-19, could in principle be a simple virus, because it’s a positive-sense, RNA genome virus. We can talk more about that – the simplicity of positive-sense of RNA viruses – if you like, and what’s so important about it. But coronaviruses happen to be among the most complicated of all positive-sense RNA viruses. One of the things that attracted me so powerfully to viruses is that they are the simplest, by far, of all evolving organisms. In that sense, they are the simplest disease agents by many, many, many orders of magnitude. And I was looking for a simple biological system to devote myself to as I turned to biology. And of viruses, the simplest ones by far are the ones whose genomes are positive-sense messenger RNA molecules. Along with all the other such viruses, coronaviruses have ready-to-translate RNA genomes, but they have by far the biggest genomes with the most complicated life cycles. So, that’s why I wasn’t working on them, but it was nevertheless an easy switch to take our general platform for viral and cancer vaccines and start working immediately on a SARS-2 vaccine, because all we had to do was choose a different gene to add to our mRNA that we were delivering as a general vaccine.
What information did you need that was not available at your fingertips as soon as you made the commitment to jump in on COVID-19 vaccine research?
That’s another great question. You’ve done your homework. What’s your background, by the way? Am I allowed to ask questions?
I’m a historian of science.
Yes? As is my wife.
She’s a historian of 18th-century France, history of medicine, and history of women in early modern Europe.
And she just finished a book on six eighteenth-century women scientists who managed to do science against all odds…
Oh, my. What’s the name of the book?
Minerva's French Sisters [Women of Science in Enlightenment France].
[laughs] I love it.
It’s Yale University Press. In any case, she has educated me importantly from the beginning. We got married very early, over 50 years ago, and she has taught me to have a great appreciation for what scientists can learn from looking at the history of science.
Oh, that’s great. I love it.
So, we resonate that way, for sure. You were asking: what did we need to know right away to make the switch?
That was not available to you.
Yeah. Well, it wasn’t available to anyone, and that was the question: What are the best antigens to take from SARS-2 to make an effective vaccine? You know, vaccines are all about getting the right immune response to a noninfectious form of the virus. It can be an attenuated virus, it can be a “killed” (inactivated) virus, or it can be a protein subunit of the virus, or it can be DNA or mRNA coding for some viral protein, like the spike protein of SARS-2. But you need to know: What are the viral antigens that will trigger an immune response which will be most effective at clearing an infection that comes in the future? And that wasn’t available to anybody, because this was a new virus. Okay?
It’s similar to the challenge that flu vaccine developers have every year, when they ask: How do we have to change the vaccine because of the evolution of the virus since last year? What are the new antigens we should be considering as eliciting the most effective immune response? So, the best people could do was to look at what is now called SARS-1, namely, the SARS coronavirus from 15 years earlier, which was an epidemic, not a pandemic — and to look at MERS, a closely related coronavirus, also a recent epidemic, not a pandemic. And then, to do some quick bioinformatics kind of thinking, to look at sequences in the new virus that had a lot of overlap — homology — with SARS-1. And in the end, the most famous vaccines, the ones that have made such a big difference from early on, the Moderna and the Pfizer vaccines — which happen to both be mRNA vaccines — essentially take mRNA coding for the spike protein. So, the spike protein is the viral antigen that is the consensus protein to concentrate on. But there are several other structural proteins, like the nucleocapsid protein, and membrane and envelope proteins, which are important antigens also. So, to answer your question, we had to quickly make a decision about what viral antigens we wanted to include: we took big chunks of the spike protein and big chunks of the nucleocapsid protein of the freshly sequenced SARS-2 virus.
Bill, this is definitely a scientific, and not a political, question: when you first started thinking about this, was the question of whether this was zoonotic, or lab-originated — was that scientifically interesting and relevant to the important questions you were asking at that moment?
No, I wasn’t thinking about that at all. Of course, it’s an interesting scientific and political question.
But it doesn’t matter, in terms of attacking the problem at hand.
It doesn’t matter at all. I was also, of course, intellectually interested in every aspect of the new virus and its epidemiology, and so on. But I knew I had limited time, never enough time, to work on our idea. So, I did put on blinders. Many friends, both personal and scientific colleagues, started asking me all kinds of questions about SARS-2, its epidemiology, new strains, treatments, and so on. And I told them, “Sorry, I don’t know anything more than you do about it, because I’m not allowing myself to take time to learn about it, because it isn’t absolutely essential for the work we’re doing. We’ve made a decision about what viral antigens we’re including in our mRNA vaccine, and we want to concentrate on the new aspects of our vaccine, which differs qualitatively from the other mRNA vaccines.” Again, we can talk about that if you like. And it took all of my time. I wish I had time to follow my intellectual curiosity about all these other things. If I were retired, I would do that.
Who were the major collaborators in your efforts, both on an individual and institutional level, circa March 2020?
The first main collaborator from 2020, and still the main collaborator, is an infectious disease expert, immunologist, HIV doctor, in the school of medicine here, Otto Yang. In fact, very wisely, the UCLA AIDS Institute, when they announced their SEED grant funding, explained: you have to have a good idea, of course, but you also have to agree to work on it with one of the following immunologists in the school of medicine…. They were encouraging people from the physical sciences, people from outside the school of medicine, people from outside the field of vaccinology and immunology, to come up with new ideas. And that’s great. They were smart about that, but they were smart also to realize that if you don’t connect people like me, who are relatively new to the field – and hence outsiders – with insiders, you’re not going to get your money’s worth, and more importantly, you’re not going to have as good science being done.
So, I quickly learned to be impatient with labels. Well, in fact, when I switched from physical science to life science, from theory to experiment, you can imagine I was impatient with labels. I had worn a label for 25 years: physical science theorist. Physical chemist theorist. Statistical physicist. Soft matter theorist, and so on. That was fine. But when I started working on viruses, it was natural to ask: So, what am I? I’m not a virologist: I wasn’t trained as a virologist, and I haven’t done any virology research. But I’m working on viruses. Okay? For many years, it was natural, as we started a new field, physical virology, to wear the hat or label “physical virologist.” Fine. That’s a transition to that field, and it’s a field that’s alive and well. It’s great. It’s very exciting. Now part of me is leaving the field because I’m excited about translational medicine and cell biology – problems related to viruses “doing their thing” in vivo. So, naturally, I’m impatient too with labels like “vaccinology” and “immunology” to characterize people working in the field. I don’t see how you can work on vaccines and not worry about immunology, or work on immunology without asking: What is a vaccine? How does it work? And so on. Vaccines, by definition, implicate immunology in an absolutely fundamental way.
So, good for the UCLA AIDS Institute, providing some SEED COVID-19-related grant funds, and – more importantly – good for them for insisting that I connect with “one of the following list of immunologists”. I recognized the name of someone on their list whom I had met and talked with constructively over the years in this huge intellectual community/research establishment at UCLA. And I had remembered he was a kindred spirit. He was a medical doctor eager to talk to physical scientists. So we started working together, meeting every week to develop our ideas for this vaccine. Now we’re working on ideas that have nothing to do with viral vaccines or even cancer vaccines, but rather cancer immunotherapy. And it came from our each being determined — not just eager, but determined — to pick the other guy’s brain. And new ideas came out of it, so I would say he has been a major collaborator post-pandemic.
And then to the NSF’s credit, when they announced rapid funding opportunities related to COVID-19, we submitted an idea, and they got back to me right away asking: Would I be willing to talk to someone else who had submitted a related, complementary, idea? In particular, they — the relevant program officer at the NSF — thought that it would be interesting for the two of us to connect and work together. And it was someone working on 3-D cell culture, specifically specializing in lung 3-D cell culture (the sexy word for it is lung “organoids”). And at first I thought: No, I’m spread so thin already, I don’t need to get spread still thinner. But partly because I’m perverse, and partly because I think my perversity stems from the conviction that the best science when you’re working in biology is interdisciplinary science, I realized that maybe I’d better check out what this person is suggesting [laughs]. And we’ve started a great collaboration, and I’m learning about cell and tissue biology from him, John Mellnik, and his fledgling biotech company, and he’s learning about physical virology and simple viruses from us. So, those are the important new collaborators that came about because of the pandemic, and not coincidentally, they are about as far afield from what I was doing in physical virology as one could imagine. So, they’re both extremely welcome new directions for me.
Bill, now that we can look at the time span between March, when you first started thinking about these things, and early 2021, where the vaccines started to become approved for general use, what surprised you and what did not surprise you about that time scale?
You heard the question being asked of everybody in the early days of the pandemic. “When will we have a vaccine?” You have ignorant and uninformed people asking for a vaccine tomorrow (“as a personal political favor, please”). No names mentioned. And then you have serious questions asked of Anthony Fauci and other infectious disease experts. And the only serious answer then was: We have no idea how long it’ll take. There are many viruses for which we still have no effective vaccine, and it’s not because a lot of effort didn’t go into working on them. Think about HIV, from the 1980s. We’ve had very serious effort devoted to a vaccine for AIDS, for HIV. Instead, we’ve got some good treatments that save lots of lives. But we don’t have a vaccine. In the case of Hepatitis C, there has been enormous progress in understanding the molecular cell biology of Hepatitis C. In fact, this last year’s Nobel Prize in Physiology or Medicine went to Charles Rice and to Alter and Houghton for the discovery and characterization of this virus. I mention Charlie Rice in particular because he’s someone I’ve exchanged ideas with for many years and with whom I’ve been in close touch throughout the pandemic, as he too switched to coronavirus research. Here again you have a virus (Hepatitis C) for which there’s a very effective treatment, but no vaccine, and not for want of trying.
So, it raises a very challenging question. Why are there no vaccines for some viruses? Why is it so hard to do what a vaccine needs to do with the immune system? It’s an absolutely fundamental question. I don’t say vaccinology or immunology question, because it’s vaccinology and immunology. It’s also molecular and cell biology of the virus and of the immune system. Again, if I had another lifetime, it’s the big question I would start with. But it involves everything we don’t know about the immune system. The immune system is like the brain in that we make enormous progress on understanding it all the time, but we still know essentially nothing about how the brain works, except to describe it in more and more detail. You know, we have very sophisticated NMR and X-ray imaging. We determine that when somebody is anxious, certain brain cells are active. Okay, but how does that help us understand how to treat anxiety on a basic rather than empirical level? And so on. Same thing with immunology. We have 21st century immunotherapies and so on. But we don’t really understand what’s going on. So another way of asking the question is: Was I surprised that the vaccine worked so well?
That it went to clinical trials so quickly is because of the pandemic pressure, and so on. That it worked so well, regardless of what anybody says, is another matter. The vaccine wasn’t really designed on a fundamental mechanistic immunological level. Obviously, Moderna and Pfizer weren’t working on a COVID-19 vaccine before the pandemic. They didn’t know about COVID-19. But importantly they were working on mRNA therapeutics and on other vaccines. Certainly, Moderna was all about that. I had started to consult for them just before the pandemic because of the connection between what we do and their general vision of mRNA therapeutics. And in the case of Pfizer, the collaboration with BioNTech was spurred by BioNTech’s work on mRNA therapeutics. So, we were — we, the world — and of course Moderna and Pfizer, but still more so the world — very lucky that a vaccine was developed that worked as early and as well as it did.
Oh, my. Lucky indeed.
Along those lines I hope to find time soon to write an overview on RNA vaccines, and on vaccines in general. There are prophylactic vaccines. There are therapeutic vaccines. There are viral vaccines, and even … cancer vaccines. And I want to comment on just this question in that article, which will be written for scientists of all kinds who work on anything related to these things, but who often don’t think outside of their particular silo, because they work on viruses, or on vaccines, or on T-cells, or on antibodies, or whatever. We need to be clearer about the question, this fundamental challenge to all of us, working on anything related to vaccines: How do vaccines really work, when they work? [laughs] And why don’t they work when they don’t work? How do we understand why we sometimes need to wait 10 years, 20 years, or longer, for a vaccine, or why other times we need to live with treatments instead of vaccines, the way we’ve done with HIV and Hepatitis C?
Bill, on that note, given the breakneck speed of applied research to achieve a COVID-19 vaccine, what gains have been made simply in basic science along the way?
Tremendous gains. I think — I don’t know if it’s an appropriate analogy, but I think everyone appreciates the basic science gains that were made during the Manhattan Project in World War II. The goal was an atomic bomb. Instead of a pandemic, we had a world war that threatened to go on many more years, and the threat of Nazi Germany developing a weapon of mass destruction before we did. So, that science had to be done. It changed how science is done in terms of national labs and government investing in science. But it also involved a long list of basic physics advances.
I think you have the same thing happening now with how the government is funding work on coronavirus vaccines and therapeutics and how they will be funding science differently even after the pandemic because of all the good things they learned about encouraging collaborative research, and so on. Recall the two personal examples I gave you of where on a small scale the funding was dependent upon people connecting from different disciplines, not to have a token outsider involved in your work but because there is a compelling scientific basis for merging disciplines. So, I think we are learning a lot about RNA replication by working on coronavirus vaccines and therapeutics. And RNA replication is an absolutely fundamental biological phenomenon. I don’t think I need to explain why. Just as in the early days of molecular biology, it was DNA replication that was recognized to be a fundamental phenomenon that had to be understood. What’s the enzyme that replicates DNA? How does DNA get replicated? And so on. Given that most viruses have RNA genomes rather than DNA, and given that most of those RNA viruses have positive-sense RNA molecules as their genomes — namely, those molecules get translated directly to make an enzyme that replicates them — it’s clear that RNA replication enzymes, and the phenomenon of RNA replication itself, are fundamentally important in molecular and cell biology, with the enormous practical consequence of letting us understand and treat viral infection better. For example, with Hepatitis C, the reason we have an effective treatment is because Charlie Rice and others figured out how the RNA genome gets replicated in that viral life cycle, which is very similar to how it gets replicated in other viruses. Therefore, you have a target for your treatment of the disease. You have to interfere with RNA replication, and you know how to do it, because you’ve done basic molecular and cell biological studies of the phenomenon.
Bill, in terms of your current work and looking forward, to what extent is your contemporary research interest directly informed by COVID-19, and to what extent are you sort of picking up where you left off in March, 2020?
You’re asking great questions. Thank you. You’re a professional. You’ve done your homework! It’s an important question to be asking, as we begin to come out of the pandemic. It’s a practical thing. Ever since we could start bringing people back to the lab without needing to argue that they will be working on COVID-19 related projects, it’s clear we want to ask: What do we want to get back to? We can’t get back to everything. So, which of the projects from before are most important to us, and how is our answer to the question — what work is most important to get back to — affected by the COVID-19 work we started and are continuing to do? Another way to ask the question is: When the pandemic is really over and we’re “back to normal,” what will we do differently?
First we have to realize that – just as after World War II – after this pandemic the world won’t be back to the same normal. Ideally, we will be taking many things we’ve learned and continuing to apply them to how we live, how we fund science, how we do science, and so on. So, the natural question is: Do I want to keep working on COVID-19 related things? We’ve already gone back to working on cancer vaccines, and because I’ve been learning about cancer and working on cancer vaccines, and because I’m working more and more with immunologists, I’m working on cancer immunology. It’s not clear I will continue to work on COVID-19 related things. We’ve got a better vaccine delivery system because of our last year’s work on COVID-19 vaccine, but we may go back to our cancer antigens being used in that platform rather than continuing to work with SARS-2 antigens. Do you see what I’m saying?
We’ve picked up work again with Boehringer-Ingelheim, who had approached us as early as 2014, asking how we could collaborate on our general vaccine platform being used for cancer vaccines. That’s what the company wanted to do, and that’s what we did with them right up to the pandemic. We’ve gone back to working with them now, and we’re going to be using things we learned about our platform as we worked on COVID-19. We hadn’t started that work to develop a better viral vaccine in the short term, especially as more and more big companies —like Moderna, Pfizer, Johnson & Johnson, AstraZeneca, a good long list — continue to come out with more vaccines. We were never competing. We were trying to get back to the lab, do things that might be helpful for COVID-19, but that surely would be helpful for the next pandemic down the road, if it comes long enough from now so that we’ve had a few intervening years of basic science.
Bill, we’ve been talking about 2020. Let’s engage in some history, now. Let’s go back to your childhood and even before. Let’s start first with your parents. Tell me about them.
First, I was super lucky. You don’t get to choose who your parents are.
You don’t get to choose who your children are, either.
I was lucky there, too. So much luck involved in the family you’re born into. My parents were children of Jewish immigrants from Eastern Europe. My grandparents on both sides came from Poland, not in the 1930s, in flight from Hitler, but rather just before World War I. It was never a good time for Jews in Eastern Europe. There was little hint at that time of Nazi Germany, but there were centuries of pogroms and antisemitism and Jews living in ghettoes and so on. And so my grandparents on both sides were able to get to the United States just before World War I. My father was born the first year after the family arrived here. My mother was actually born in Poland, but came as a 1-year-old infant; she remembers nothing about the old country. So they were both born, effectively, in Paterson, New Jersey, which was a city that many Jews came to if they didn’t come to New York City. My grandparents on both sides were garment workers, and Paterson, like New York, had a big textile industry, where they worked in clothes factories as weavers. They were also active in the early trade union movement.
And in both families, the children in the new generation born in America were encouraged to go to school as long as they could. So, my mother became a high school math teacher, and my father got a Ph.D. in mathematics and became a research mathematician. My parents had four children, so with a young family — this was the 1940s — my mother became a full-time mother, and it was my father who continued professionally, and I was lucky that they were both absolutely wonderful people, great parents. I grew up in a household where my father talked about how much he loved his work, how beautiful math and science were. Never complained about having to go off to work!
That left an impression on you.
Yes. Subconsciously. If you had asked me as a kid, “What do you want to be when you grow up?” I would have said, “I don’t know.” “Oh, you haven’t thought about it at all?” “Not really, but maybe I’ll be a scientist.” “Why?” “Well, my father’s a scientist, and he loves it.” I don’t think I ever thought about it then that explicitly, but it’s clear that it played a huge role. I got excited about science only halfway through college, which was much later than my father had gotten excited about it. He actually never finished high school, because he went to work at age 16 to help support his family. They were very poor. And his father didn’t work as a weaver, but rather as a poet publishing in Yiddish. You don’t earn a living that way [laughs]. It’s hard enough earning a living publishing poetry in English, let alone in Yiddish. So, my father did unskilled work to help pay the family bills. But at age 10, he was already a mathematician, because he was teaching himself…
…and went to the local library, got math books, and was fascinated by geometry and knew he was going to be a mathematician. Didn’t have the slightest idea of how he’d ever get there. But he did go back to school in his mid-20s. He wasn’t able to go to an American university. He hadn’t finished high school. But friends of his who had grown up with him in Paterson and who had finished high school and gone to college, City College of New York (CCNY), that sort of thing, had gone to medical school in Canada because there were still quotas on Jews in medical schools in the U.S. And one of these friends who had gone to Halifax, Nova Scotia, to Dalhousie University told my father he had talked to someone who’d taught him a math class. He talked to that professor about his friend who had never finished high school, who was in his 20s, who was publishing papers in math [laughs] under a pseudonym. I asked my father why he did that (publish under a made-up name). He said: I don’t know. I was a stupid kid [laughs]. But with the encouragement of that professor he visited Dalhousie, and when the math professor saw what my father had done by himself, he arranged for him to be accepted at Dalhousie, and my father finished up very quickly. And the professor at Dalhousie had been a Ph.D. student of Norbert Wiener’s at MIT, so he arranged for my father to go to MIT as a Ph.D. student of Wiener’s. That was in the late ’30s, when my father was already in his late 20s, but he wrote his Ph.D. thesis in two years and started then as a professional mathematician.
Unlike this story, I wasn’t at an early age fired up about math or science. Neither was my identical twin brother. I mentioned there were four children, but as a kid the only family I was really tuned into was my identical twin brother. We didn’t interact much with my sisters or parents because we were complete in our little world – best friends all day long, all night long. And while my twin brother became a pure mathematician like our father, he was concerned that it was only halfway through college that he realized he was excited about math. He wondered: Could he be a mathematician, given that he was already halfway through college and hadn’t been serious about math early on? He had heard so many stories about math prodigies. I don’t know that my father was a prodigy, but he sure was a mathematician at age 10 in the sense that he wanted to think seriously about math, could think seriously about it by himself, and did think seriously about it, even though nobody was encouraging him to do so. But my brother did just fine as a pure mathematician, even though he got a “late start” – he earned his PhD at Princeton and went on to tenured and Chair professorships at Cornell University and at the Weizmann Institute.
Bill, where did you grow up? Where did you spend most of your childhood?
My father was a professor at Syracuse University. He had been at Princeton before that – if you look behind me, you’ll see some pictures on the wall, in particular one with two of my heroes talking to one another.
You recognize only one of them, though. Yes?
Yeah. [It’s Einstein.]
Okay, good. The other one is my father.
In the mid-1940s, in fact, around the time I was born, he was at the Institute for Advanced Study in Princeton, where Einstein had come to stay more than 10 years earlier. My father’s field was partial differential equations and complex function theory. Einstein was very interested in talking to mathematicians about both partial differential equations and complex function theory, because they were important for his work. So, I have the thrilling distinction of having met Einstein, but I surely didn’t have an intelligent conversation with him [laughs] because I was all of 1 or 2 years old. My father went to Syracuse after that and was there for 10 years as a professor, and those were the years I was growing up. And then in the late 1950s, he left Syracuse to go to New York City to start an institute of mathematics at Yeshiva University, which you may or may not have heard of.
Of course. Sure.
Yes? But until the ’50s, Yeshiva University wasn’t known by many people who weren’t Orthodox Jews, because for example, Albert Einstein Medical School — which is the medical school of Yeshiva University — was only founded in the mid-’50s. And there was no graduate school of science. There was a school of law. And my father was asked to start first an institute of mathematics and then, post-Sputnik, with the opportunity of government funding for new science institutions, he was able to grow it into a graduate school of science. And so I went to junior and senior high school in the New York City area, and that’s my growing up. I wasn’t serious about science. I was a good student, but I was equally interested in playing ball. I always loved sports and spent a lot of time playing ball and wrestling and so on. It was only halfway through college that I “saw the light”.
Was your inseparable relationship with your twin brother — did that advance, mutually, your academic interests? Was there a friendly competition there?
We — well, if we competed, it was always friendly. I mean, competing at sprinting to the corner, or wrestling, whatever. We were so lucky. We really always were, and still are, best friends. It was a challenge for us to make friends when we were on our own. When we went to university, we were separated for the first time in our lives. We had spent every minute of the first 17 years of our lives together, day and night — at school, out of school, and of course at home. And we never really competed. Fortunately, we each got seriously interested in math and science at the same time. Otherwise, there would have been some disconnect between us, staying close, because one of us would be undergoing a serious new experience that the other wasn’t. We were also very lucky to meet our life partners at the same time, even though we were hundreds of miles apart, and so on.
Did your father include you in his work? Did you know what he did on a daily basis?
He talked often about the beauty of mathematics. And when we were old enough, he started — according to my older sister, who remembers more about our childhood than we do, because she was older during my childhood [laughs] — my father started reading bedtime stories to us that took the form of reading from books in the history of mathematics or telling us about transfinite numbers. It’s certainly a distinct and important memory for my brother and me — a transformative one, when we learned about how a set of numbers, or a set of anything, can be infinite. It was the first real glimpse of pure math for both of us, and it had a truly profound effect on my brother. He says it’s what triggered things for him going into pure mathematics.
And even though it didn’t have as dramatic an effect on me, it’s still something that’s really important to me. When I talk, for example, with any interested layperson who’s really eager to learn what pure mathematics is, I will ask them: Well, have you ever thought of what infinity is, why you can count some sets of things with counting numbers – 1, 2, 3, 4, … – and never run out of counting numbers? They say: Yeah, that’s infinity. And I ask them: Do you think you understand it? They say: Yeah, it’s when things go on forever. And I ask them: Is there more than one way in which things can go on forever? Is it possible that one infinite set could be bigger than another? And that stops them. And I ask them: If you took half of an infinite set away — for example, if you look at all the counting numbers, the integers — and you take half of them away, would you have a smaller set of numbers? They ask: What do you mean? I say: Take away all the even numbers, aren’t you left with a smaller set? Aren’t you taking away half the numbers, because every other number is even? Yes. Aren’t you taking away an infinite number of numbers? Don’t the even numbers go on forever? 2, 4, 6, 8…? Don’t they never stop? Yes. Aren’t you left with an infinite set of numbers? Yes. You think there’s something you can still learn about infinity, now that you know that much? And so on. That’s the kind of thing our father talked to us about, and he started teaching us about how you can have a bigger infinity than the set of counting numbers, and how, in fact, there is an infinite hierarchy of orders of infinity. So, in my brother’s case, it blew his mind, and he became a mathematician. In my case, I just thought: wow, that’s one of the most beautiful things I ever learned. But halfway through college I was (finally) beginning to be excited about physics.
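[Editor’s note: the even-numbers argument recounted here is the standard Cantor pairing idea. Written out in textbook notation (not the interviewee’s own), it is a one-line bijection:]

```latex
% Pair every counting number n with the even number 2n.
% Since the pairing misses nothing on either side, the even numbers
% are exactly as numerous as all the counting numbers:
f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n
% Two sets have the same size (cardinality) when such a pairing exists:
|\mathbb{N}| = |2\mathbb{N}| = \aleph_0
% Cantor's theorem then gives the "infinite hierarchy of orders of
% infinity" mentioned above: every power set is strictly bigger.
|\mathbb{N}| < |\mathcal{P}(\mathbb{N})| < |\mathcal{P}(\mathcal{P}(\mathbb{N}))| < \cdots
```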
Were you thinking about science specifically for undergraduate?
Yeah, but not seriously. In fact, I started as a pre-med student, but that made no sense because I had no thought of becoming a doctor and had no interest in biology. I had managed to avoid taking a high school biology class that was required. I still, to this day, don’t know how I did it. But I sensed already that I had a problem with biology because it was too complicated and there were too many facts, and for sure we were asked to memorize an outrageous, totally unacceptable, number of facts and names. And even though I wasn’t yet serious about math or physics, I was comfortable with them because you didn’t have to memorize anything.
So, without thinking about it, I was powerfully drawn to learning science where you don’t have to memorize anything. Of course, you’re asked to memorize things in school because it’s the easiest way to teach large classes, especially for a teacher who doesn’t really understand science. You’re told: Here are the things you have to “know”. You’re going to be tested on them. This, instead of teaching basic concepts, which requires a much more fundamental and deeper understanding of the subject. Same thing for teaching history. You can teach lots of facts and insist on students regurgitating them, or you can teach students about how history is studied and what primary sources are, what secondary sources are. Who writes history? When we read historical accounts of the French Revolution, of the American Revolution — when we read accounts of Nazi Germany — when we read accounts of the January 6 insurrection — whose history (account) are we reading?
These are hard ideas to teach to an oversized class, especially one that doesn’t want to be there in the first place. And how do you examine/test students on these things? So of all the things that are taught badly in high school, I think history is taught, in many ways, worst of all — along with biology – because it’s most tempting to teach lots of facts there, whereas with physics and math — geometry, algebra — there are “only” basic, general, ideas and concepts. So, that’s one reason I was attracted to science and to physical science, and then the other was the subconscious appreciation of how much profound satisfaction it gave to our father.
Bill, was there a particular professor or class at Harvard that clarified your thinking?
Not really, even though I had some truly outstanding and inspiring professors like Norman Ramsey, George Kistiakowsky, E. Bright Wilson, Dudley Herschbach, and William Klemperer. No, I think in the end, it was the example of our father.
And was the program chemical physics — was that a way for you to hedge your interests between chemistry and physics?
Yes. It was a way to minimize the amount of organic and inorganic chemistry I had to take, because again, there was memorization involved there, and that was a problem. And I liked physical chemistry because there was math involved, and there was physics involved. So, it’s not clear why I ended up majoring in chemistry and physics — that’s what the major was called — rather than physics. But maybe it’s because I was evolving out of my misbegotten, or whatever, start as a pre-med student, and I quickly realized: I don’t want to go to medical school. I don’t want to study biology. I want to study physical science. So, the joint chemistry and physics major was a little less of a jump from what I had started in.
And between theory and experimentation, did you have a good idea of where you wanted to head as a result of your undergraduate experience?
Oh, as a result of my undergraduate experience, absolutely, because I started working on an experiment in Bill Klemperer’s physical chemistry lab and quickly realized that I was interested in the theory behind the experiment. It happened to be molecular spectroscopy. Okay? I found I was interested in quantum mechanics and understanding what quantum mechanics had to do with absorption and emission of light and how you go from atoms to molecules with quantum mechanics. So, I had a great experience working in the research group of an experimentalist, but there were a couple of people in the group doing purely theoretical work in molecular spectroscopy and quantum mechanics. And by the time I finished a year or two of undergraduate research, it was clear to me: I wanted to do theoretical chemistry and physics.
When you graduated, was the draft something you needed to contend with?
It was, especially when my draft board made it clear that they would not be giving student deferments for graduate school in the sciences (or in anything else). So, I thought either I would need to be a conscientious objector or go to Canada or start teaching high school, which they would have given a deferment for, or get a job doing research related to Department of Defense work. So, I was super lucky, and my twin brother was super lucky too. In my case, my Ph.D. supervisor, Stuart Rice — I had just finished one year of graduate school at the University of Chicago, and it was 1968 — was able to hire me as a full-time staff scientist paid by an Air Force Office of Scientific Research grant that he had. It was a grant to work on fundamental aspects of molecular photophysics. So, I was able to keep doing the research I would have done as a Ph.D. student and yet be deferred. I was very lucky. Two years later, I drew a good lottery number when the lottery went into effect. And so, I was able to re-enroll as a student for one quarter and have the work I had done in the last two years be accepted as a Ph.D. thesis. My brother was able to do a similar thing at Princeton: he stopped being a registered student for those same two years and taught math to part-time students at one of the New Jersey state college campuses.
Did you go to Chicago mostly by reputation? Was the program in chemical physics particularly strong?
Absolutely. It was possibly the strongest place at that time in chemical physics and chemical physics theory. I had been accepted also to the graduate program at Harvard, where I had done my undergraduate work and where my wife-to-be would be staying for another year, but I was eager to expand my horizons and start as a PhD student at the University of Chicago.
Who were some of the major professors at that time?
Well, the ones I worked with were Stuart Rice and Karl Freed and there was a regular visitor from Israel, Joshua Jortner, who worked very closely with those two. And so, I worked with all three as a Ph.D. student. I was only there for three years but did so much collaborative work with those three guys that when I had a chance to leave and do postdoctoral work I was determined to work in a field that none of them had worked in, which was hard because they happened to be intellectual vacuum cleaners. They worked in all fields as theoretical physical scientists — statistical mechanics, quantum mechanics, molecular spectroscopy, polymer physics, solid-state physics, and so on. I wanted to work in a field they weren’t working in. I had great relationships with them, but I wanted to see if I could work by myself.
And I had a chance to do that when I had an NSF/NATO postdoctoral fellowship and could spend a year in Paris accompanying my wife, who was finishing up her Chicago Ph.D. thesis research working in French archives in Paris. So, I switched fields from molecular spectroscopy and photochemistry and started working on critical phenomena and, in particular, how you can probe critical phenomena using what was then a rather new experimental technique: laser light scattering. This was 1970, not long after the laser had been developed, when people were just beginning to do dynamic light scattering, inelastic light scattering with lasers, to learn about critical phenomena, e.g., about divergence of correlation lengths and of compressibilities and susceptibilities, and so on. The renormalization group was transforming our understanding of these phenomena, and scaling arguments and critical exponent relations were being incisively tested with radiation and particle scattering techniques. So, I started asking about laser light scattering near the critical points of fluids and how you learn about dynamics and structure – specifically higher-order/many-body dynamics and structure – from analysis of multiple scattering contributions. And that led me almost immediately into light scattering from not-so-simple fluids, like liquid crystals and colloidal suspensions of anisotropic and self-assembling particles – which ultimately led me, decades later, to biology. You’ve heard these stories many times.
Before we leave Chicago too quickly, being there in 1968 must have been pretty interesting. Were you politically active at all? Did you get involved in any of the protest movements?
No. I am embarrassed to say that I have never been politically active until the last few years. I always thought: I just want to do my work. It’s the most important contribution I can make to society. Plus, it is so overwhelmingly satisfying for me. But I also believe that basic science research is good for the world. We’re not going to have a vaccine without it. We’re not going to have the internet without it. We’re not going to have medicine, technology, without it, and so on. Plus, on a deeper level, it’s something, again, that I learned from my father, but I feel it even more strongly, possibly, than he did. I think it is positively good for people, for civilization, to understand things better.
Life is hard. It’s better if you understand some things about it. And as scientists, we get to understand more every day, every year, for centuries. We only make progress — sometimes lots of progress, sometimes little progress. And we never go backward in science. We have tough challenges. We have revolutions, whether it’s Darwin’s thinking about evolution or Einstein’s thinking about relativity or the founders of quantum mechanics, or the founders of molecular biology and so on. We have revolutions in thinking, and so on. But we also have incremental progress all the time. It’s the nature of the scientific discipline. It’s not shared by other intellectual disciplines. Have you ever studied Karl Popper’s philosophy of science…
Of course. Of course.
Yes, you have, as a historian of science, of course. Have you read his essay on how he came to develop, as a young man, his understanding of what makes science different from other intellectual disciplines?
He was deeply fascinated by the contemporary theories of Marx, of Freud, and of Einstein. These were the magnets of his generation of young intellectuals who were excited about big ideas and who were meeting in cafes, fired up, learning about these things. And it was natural for him to start asking: What differences are there in the fundamental natures of these theories?
And ultimately, he developed his theory of demarcation – what makes a scientific theory different from other theories? And it has to do with being able to prove ideas are wrong. After all, in a real sense – and this is an extremely subtle, deep fact that people have real trouble grasping – you can’t prove that any particular idea or set of ideas is right.
But uniquely, in science, you can prove that they are wrong. By the way, that latter point is something that many of our professional scientist colleagues don’t appreciate. They so often say: We did this experiment, and it proved such-and-such theory. No. You can only do an experiment that will prove there’s a problem with the theory, an inconsistency between what you’re measuring and the unavoidable implications – predictions – of a theory. If you get observations consistent with the theory, it means that you haven’t yet found a problem with this theory. And we know – sorry, as a professional historian of science, you could be telling me this – that the history of science is the history of progress related to establishing what’s limited or not fully correct about the best theories, what we call the laws. For example, we had for three centuries the classical physics theory of Newton that we now know, but only in the last hundred years, must be generalized so that it appears as a limit of quantum mechanics. We have the laws of thermodynamics for the past two centuries and don’t yet know how we will have to generalize them, but the history of science suggests we will have to. Newton wasn’t wrong: classical mechanics simply wasn’t a complete theory, applying to all size and mass scales. And the same with nonrelativistic quantum mechanics, and so on. Science is so deeply satisfying because you always understand better the object of your study – the natural world that exists independently of people. On the other hand we will always have the pendulum swinging between Marxist and non-Marxist thinking for understanding economics and the organization of society, and we will always have Freudian and non-Freudian theories vying with each other to explain personal behavior. And that’s because you can’t prove any theory wrong in these domains.
Did you have these notions, even as a graduate student, or you started thinking about these things later on?
No. Not at all. This kind of thinking came to me only later in my career.
But again, I didn’t really understand science, I think. I’m understanding it better all the time, but it was only when I started doing biology seriously, by doing experimental work, that I really began to. I’m not an experimentalist. I’ve never done an experiment in my life with my own hands. But I’m not a theorist now, either, because I’m not actively doing theory, where you formulate, write down, and solve equations, analytically or computationally. Okay? I’m designing experiments. And now I understand that a really good experimentalist is somebody who designs a really good experiment, one that will teach us something new.
Gregor Mendel was a great experimentalist, because he chose the right system for testing his theory of genetics. He didn’t arbitrarily pick peas to study. He asked: What system do I need to test my ideas about inherited characteristics? What system satisfies the criteria I would need to test my theory, i.e., that has the potential to prove it wrong? And when Sydney Brenner started the modern field of developmental biology, he did it by picking the right system, namely the roundworm C. elegans. He asked: What system is just complicated enough that we can understand the development of a multicellular organism from a single cell, and yet simple enough to understand?
That is the fundamental problem in developmental biology. You could argue it’s the fundamental problem in biology. We start as a single cell. The cell differentiates into many different kinds, each with the same DNA but each doing different things – performing different functions. That’s developmental biology. Brenner picked a system – animal – that had about 1,000 cells, that had a nervous system. You can’t study developmental biology seriously without dealing with a nervous system. You can look at gene regulation in a bacterium, or a one-celled organism, and at its growth and ultimate division to produce daughter cells, but you’re not going to be looking at cell differentiation, and so on.
Max Delbrück – a physicist! – started molecular biology. He’s credited with starting molecular biology as much as anyone is, because he was determined to choose the right system for investigating life processes on a molecular, quantitative, level. He realized that we have to use viruses to understand genome replication and transcription, the fundamental set of molecular biology processes. And why did he pick viruses? Because they are overwhelmingly the simplest evolving organisms. Okay? And they get their genomes replicated, and then get those genes expressed. So if we can understand how it works in viruses, we’ll understand it in us. Okay? Why did he pick bacterial viruses, viruses that infect bacteria? Because he knew that bacteriology was the only modern biology — was the only quantitative biology. Delbrück was known — he was notorious, in fact — for insisting on biology experiments being quantitative. And you could be uniquely quantitative when you studied bacteria. Why is that? Because decades earlier, a quantitative assay had been developed that allowed you to count individual bacterial cells. If you wanted to understand quantitatively the growth of bacteria that are infected by virus, you could count bacterial cells, as well as counting the virus progeny from each round of infection. And so, he knew to choose a bacterial virus as his model system. This was in the 1930s and 40s, just decades after viruses of any kind had been discovered. Remember that during the great influenza pandemic of 1918, nobody knew it was caused by a virus; indeed, medical experts spent most of their time trying (mistakenly) to identify the bacterium that was the disease agent!
Can you imagine…
…scientists trying, and doctors trying, to treat this killer disease, this pandemic, without knowing whether it was bacterial or not?
It was only in the 1930s that it became clear that it (the causative agent of influenza) was a virus. So, Delbrück had just barely the benefit of all these things, and then he picked the right system for the breakthrough in molecular biology. So, … please remind me why we’re talking about this [laughs].
Well, we were actually back at Chicago when you weren’t thinking about these things.
Oh, yes. Yes.
Just more broadly: what were —
But please, before we go back to my Chicago days, there was some way we got to the business of picking the right system.
Well, I think part of that might have been choosing a thesis topic. And so, my question there was: in chemical physics, you’re at one of the best places to be. What were the big questions in chemical physics at that time?
Chemical physics, at that time, was based on being able to do high-resolution spectroscopy, laser spectroscopy, and molecular beam chemistry and physics — with “molecular beam chemistry” meaning that you’re looking at one molecule colliding and reacting with another, without the involvement of any other molecules. You have isolated molecules in a molecular beam, in an evacuated chamber, the way physicists in the 1920s had beams of atoms so they could study nuclear and electron spin and so on. You were able to cross a beam of molecules – which weren’t interacting with each other or with any other molecules, because they’re going through an effective vacuum – with a laser beam that had high enough intensity so that you could excite single molecules. And you could prepare the laser excitation and the conditions of the molecular beams so you knew the initial quantum mechanical state — the electronic state, vibrational state, and rotational state — of the small molecules. You knew the intensity, polarization, and frequency of your coherent laser light, and you measured cross-sections for exciting this molecule to some particular quantum state, without collisions playing a role. Or you looked at one beam of molecules colliding with another beam of molecules, instead of with a laser beam. That was a revolution in chemical physics.
Everybody was working in the low-pressure gas phase or with molecular beams and lasers of increasing intensity. Okay? And so, one of the hot problems was: can you begin to understand how a laser-excited molecule evolves in time, without undergoing collisions with other molecules, i.e., without transferring its energy to another molecule through collision? In particular, is it possible for this molecule that you’ve electronically excited to return to its ground electronic state without emitting radiation? Is it possible to have non-radiative decay of the excited electronic state of the molecule? And it was appreciated that the only way that can happen is through the coupling between electronic and nuclear degrees of freedom, a coupling that’s neglected when you make the fundamental approximation of molecular spectroscopy due to Born and Oppenheimer in 1927. You assume that the electrons are moving so much faster than the nuclei that the electrons essentially “do their thing” for each of the different fixed nuclear configurations. You separate electronic and nuclear motions. On a practical, technical level, what it means is, since these motions (these degrees of freedom) are decoupled, the wave function for the molecule can be factored into a wave function for electronic degrees of freedom and one for vibrational degrees of freedom and rotational degrees of freedom. That means you have a molecular wave function whose square is a product of probability distributions, one for the electrons and one for the nuclei: you can talk about probability distributions for electrons without needing to worry about what the nuclei are doing. You just take advantage of the fact that the nuclei are moving much more slowly than the electrons.
In that approximation, there is no such thing as a nonradiative electronic relaxation process from an excited molecule to a ground-state with the same spin multiplicity, because there’s no term in the molecular Hamiltonian that’s coupling those two states.
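[Editor’s note: the factorization and the neglected coupling described above can be sketched in the generic textbook form of the Born-Oppenheimer approximation; the notation is standard, not taken from the interview:]

```latex
% Electronic coordinates r, nuclear coordinates R.
% Born-Oppenheimer: the molecular wave function factors into electronic,
% vibrational, and rotational parts (electrons adjust instantly to R):
\Psi(r, R) \;\approx\; \psi_{\mathrm{el}}(r; R)\,\chi_{\mathrm{vib}}(R)\,\phi_{\mathrm{rot}}(\Omega)
% so the probability density is a product, as described above:
|\Psi|^2 \;\approx\; |\psi_{\mathrm{el}}|^2\,|\chi_{\mathrm{vib}}|^2\,|\phi_{\mathrm{rot}}|^2
% Radiationless (non-radiative) transitions between electronic states m and n
% come from the neglected nonadiabatic coupling: matrix elements of the
% nuclear kinetic-energy operator T_N between electronic states,
\Lambda_{mn} \;=\; \langle \psi_m \,|\, \hat{T}_N \,|\, \psi_n \rangle \;\neq\; 0 ,
% which vanish identically if the factorized form is taken as exact.
```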
How did you see your research at the time, to the extent that you thought about such things, as responsive to those bigger questions?
Oh, there I was all fired up about it. But I was working with blinders on, insisting on doing chemical physics with single isolated molecules, not trying to understand real chemistry where these molecules are photo-excited in aqueous solution and interact with all their surrounding molecules. Life – at the molecular biological level – goes on in aqueous solution. I put all of that aside, and proclaimed: we’re doing fundamental chemical physics, trying to understand collision-free, isolated-molecule photophysics in terms of the breakdown of the Born-Oppenheimer approximation. In fact, my first paper, in 1969, had a grandiose title like “Radiationless transitions due to the breakdown of the Born-Oppenheimer approximation: theory of photoisomerism in isolated molecules.” So, I was all excited about how fundamental quantum dynamics theory was going to account for all of photochemistry.
How did the opportunity at Paris come about, and what were your motivations for going?
It was so that I could be with my wife rather than spend a year apart [laughs]. I had to be in Paris because she needed to be there. So I applied for and was offered an NSF/NATO postdoctoral fellowship that allowed me to work anywhere in Western Europe and do something independent by myself, where I’d have my own funding and didn’t have to work with a particular person. I really wanted to try to do something by myself. To that end I had also applied to the Harvard Society of Fellows, where you get support for several years so that you don’t have to work with a PI. You can work by yourself, especially if you’re a theorist. And there was a version of it at UC Berkeley called the Miller Institute Fellowship, where you got similar funding to do a few years of independent postdoctoral work. I was a runner-up, whatever, an alternate for the Harvard Society of Fellows, and was offered a Miller Fellowship at Berkeley, but that didn’t get me to France to be with my wife.
So, I accepted the NSF/NATO postdoctoral fellowship and I sat in Paris working by myself for a year, learning about light scattering, learning about critical phenomena, and learning statistical mechanics. And by the end of the year — more important than having a paper or two to publish by myself – I realized: yes, I can work by myself. That was hugely important because, while I felt good finishing at Chicago and realizing that I could work with and keep up with theorists as powerful as those I mentioned, I was concerned: Can I really work by myself, develop ideas by myself, and choose the requisite theoretical approaches and techniques, and so on.
You self-consciously took this as an opportunity to learn beyond your thesis research.
Oh, yes. You know, there are many, many kinds of good scientists, but if you group them into two categories you could say there are ones who do their best work by working on one thing for their whole career — that doesn’t have to be a bad thing, to stick with one problem. It’s a very good thing if the problem is important enough, challenging enough, and if you’re doing basic and original enough work in it. I would say Ben Widom is a wonderful example of somebody who never left the field of simple statistical mechanical models for understanding phase transitions and critical phenomena. And he made truly enormous contributions. In his 90s now, retired for years, he’s still doing very elegant theory in that field. In his case, it was the right thing for him to think deeply, largely by himself, about an absolutely fundamental physical problem.
I had a different example in Stuart Rice. I referred to him, Joshua Jortner and Karl Freed as intellectual vacuum cleaners. I use the term admiringly. They were highly discriminating and powerful intellectual vacuum cleaners. They worked on just about every problem in theoretical physical science, with great physical insight and prowess. So, I had the example of people who did their best work by working on many things at the same time, and moving from one field to another. What I learned quickly about myself, though, is that I can only work on one thing at a time, namely what I’m most excited about, because it didn’t/doesn’t make sense to me to work on something that I’m even a little less excited about. You only live once. Science is so deeply satisfying. It’s all the more satisfying if you’re really turned on by it.
Okay. So, I work on one thing at a time, but over time I also realized that every 10, 15 years, I personally needed to ramp up the slope of my learning curve and start working on a new set of problems. The biggest change was when I started working on viruses and began to embrace rather than reject biology as something I wanted to think seriously about. And, ultimately, the biggest aftershock from that big change was essentially giving up physical thinking about viruses and instead doing whatever I had to do to understand how viruses work in cells and in whole organisms, namely, how they engage the immune system.
Now, when you get to Berkeley and your field becomes physical chemistry more broadly, in what ways was it a continuation of your work in chemical physics, and in what ways was it a shift?
It was both. In the 1970s and certainly in the early ’70s when I was at Berkeley, physical chemists were still stuck on the collision-free gas phase and doing spectroscopy and dynamics at as high a resolution as they could, e.g., state-to-state quantum chemistry. But because I had begun to work in Paris on optical properties of simple fluids, I was beginning to think about liquids. And a phenomenon that arose in my looking at light scattering from simple fluids near their critical point is that multiple light scattering becomes important. When a fluid gets near its critical point, the scattering cross-section is so large that the scattered light doesn’t have a chance to get out of your sample before it’s scattered again and scattered again. And yet all of the light scattering, X-ray scattering, and neutron scattering theory that we learn about in school explicitly assumes that the radiation is scattered only once, so that the angle- and frequency-dependent autocorrelated intensity is the space-and-time Fourier transform of a two-body distribution function involving pairs of positions at different times. As soon as the radiation is scattered more than once, you don’t know how many times it’s scattered, and you no longer have any simple relation between what you’re measuring and the structural and temporal correlation functions of the fluid. So this is the problem I tackled with my very first Ph.D. student, who — I have to be careful, since I’ve had many other really good Ph.D. students – was my best [laughs].
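The single-scattering relation referred to here is the standard van Hove result (notation supplied for clarity):

```latex
% In the single-scattering (first Born) approximation the measured spectrum is
% the space-time Fourier transform of the two-body density-density correlation
% function G(r,t); this simple relation is lost once multiple scattering sets in.
I(\mathbf{q},\omega) \propto S(\mathbf{q},\omega)
  = \frac{1}{2\pi} \int \mathrm{d}t\, e^{i\omega t}
    \int \mathrm{d}\mathbf{r}\, e^{-i\mathbf{q}\cdot\mathbf{r}}\,
    G(\mathbf{r},t).
```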
You had many of the best.
Yes! And I was lucky that my first Ph.D. student was an absolutely outstanding young scientist, David Oxtoby, who went on to become professor at the University of Chicago and dean of physical science there, then president of Pomona College because he got interested in undergraduate education after writing a bestselling first-year chemistry textbook, and now the president of the American Academy of Arts and Sciences, doing fantastic things. He’s a very impressive guy. But first, he started out as a brilliant physical theorist, and I learned statistical mechanics with him. We looked at multiple light scattering near critical points, and we set out to understand how an atomic fluid like argon near its gas-liquid critical point — even though it’s made up of atoms, and atoms are incontestably spherical objects – somehow depolarizes light as if there’s anisotropy in the system. How do they do it? Because of higher-order light scattering. We showed in particular how multiple light scattering by a system of atoms could lead to depolarization of light, and we started thinking more generally about light scattering as a probe for investigating exotic sources of anisotropy in liquids.
So, I began to think about colloidal suspensions of long rod-like particles and neat liquids of anisotropic molecules, and about how they spontaneously align when you crowd them or when you cool them. That was an exotic phase transition, and I started working on the theory of phase transitions in liquids of molecules that had an intrinsic anisotropy. That was not something that physical chemists were looking at in the 1970s. As we were just discussing, most theoretical physical chemists were instead doing state-of-the-art quantum chemistry computations of potential energy surfaces, trying to understand what was going on in crossed molecular beam-laser experiments. And here I was, talking about liquid crystals, which physical chemists thought of as a 19th century phenomenon, because they were first studied in the 19th century, and which nobody in modern physical chemistry had looked at since. Essentially only one person had turned the attention of modern physical scientists to them by the 1970s, and that was a very special French theorist, Pierre-Gilles de Gennes.
So in the ’60s and ’70s, physical chemists — they’re no more or less ignorant of other fields than other scientists – could dismiss his work on liquid crystals as “French science”, and therefore [laughs] backward science. But, it turned out that there were several people at UCLA in the chemistry department, physical chemists like Howard Reiss, Dan Kivelson, Bob Scott, and Chuck Knobler, who were already looking at how you can learn about novel critical phenomena in simple liquids and polymer mixtures using light scattering and basic statistical physics ideas. There were also people like John McTague using neutrons and light scattering to learn about phase transitions in two dimensions, in particular phase transitions of adsorbed particles on clean, well-characterized surfaces, to test renormalization-group theories of 2D melting and freezing. Surface science was beginning its modern reincarnation — ultra-high-vacuum surface science. But at that time, the mid ’70s, surface science and liquid state science were not yet the focus elsewhere of physical chemists, or chemists. Materials science certainly was not.
So, I found that I was being invited down to UCLA by the people here who saw me beginning to use light scattering theory to learn about exotic properties of liquids, and it was actually just after two years on the faculty at Berkeley – I had accepted a position there in 1972 as Assistant Professor after just a few months of the Miller Institute fellowship – that I was made an offer to join UCLA as an Associate Professor of Chemistry. Within 10 years, I’d say by the 1980s, far fewer people were working on isolated molecules in the gas phase. They weren’t talking about chemical physics any longer. They were talking about surface science, materials science, and beginning to talk about biophysics…
What was behind your decision to move over to UCLA?
I was really impressed with, and liked, the exceptional group of UCLA physical chemists who were ahead of their time – already working on liquids and solutions or surfaces, well before the rest of physical chemistry moved in this direction from the high-resolution collision-free gas phase. They had also established strong and productive collaborations with a stimulating group of “soft-matter”/condensed-matter theorists and experimentalists in the UCLA Physics Department, like Fyl Pincus, Ray Orbach and Paul Chaikin, fostering a highly interdisciplinary and mutually reinforcing community of researchers. So, it was the fact that there was a critical cluster of people working in the new field I was moving into. Remember, any new field I’m working on is the thing that excites me most.
So, I thought: wouldn’t it be exciting to go to a place where my immediate colleagues are reinforcing this excitement?
When did you first meet Avinoam Ben-Shaul?
I met him in the late ’70s at a spectroscopy meeting in Munich, ironically when each of us was still going to meetings in our former field, namely in molecular photo-physics. That’s how he started, too. In his case, it was chemical lasers and laser physics and so on. But in the late ’70s, we were both thinking about leaving that field. There was something in each of us that wasn’t satisfying us. Namely, we wanted to learn totally new things. I met him again when I visited Israel for the first time shortly after the Munich meeting in 1978. At that time my twin brother was spending the first of a couple sabbaticals in Israel that led to his moving to Israel in the 1980s, leaving Cornell, where he was a professor. So, the second half of his life and most of his professional life has been in Israel. I’ve visited there now probably 40 times over the past 40 years, because I’ve essentially gone every year to visit with my brother and his family, and to visit and work with the person who quickly became my closest friend – Avinoam (“Avi”) Ben-Shaul.
In my first visit to Israel, in spring of ’78, I gave a talk in Avi’s department at the Hebrew University of Jerusalem on “local modes” in highly-vibrationally-excited polyatomic molecules. I was still doing some molecular spectroscopy work, and because most physical chemists were still working on that, that’s what I talked about when I gave talks. A year later, when Avi was thinking about where he’d go on a sabbatical year that would start in 1980, he wrote to me and to a bunch of other people, hoping to move into either the protein folding field or lipid bilayers and phase transitions in membranes and so on. He wanted to move into biophysics. In any case, he wanted to move, at the very least, into complex fluids. I wasn’t thinking about biophysics yet, but I was working on liquid crystals. Because we had liked each other personally and hit it off professionally from our meetings in Munich and Jerusalem, I was delighted when he decided to accept my offer of a sabbatical visit at UCLA. We started working on a liquid crystal theory problem, and he stayed a second year. By the end of that year we saw we simply liked exchanging ideas of all kinds and doing serious collaborative work, and so, we have evolved together in our complex fluids and biophysics interests over the years in order to keep working together.
Bill, obviously you were working then in a theoretical framework, but I’m curious: what advances, either in instrumentation, materials science, or computation, might have been most relevant for you during this collaboration with Avinoam?
Well, again, really good question. You would think that we might have complemented one another if he was a good computational theorist, the way I wasn’t and am not — or if he were a really good mathematical physicist who could prove theorems the way I might have been able to do if I had gone the way my brother did, but I didn’t. But neither of us was a computational or a mathematical physicist. What kind of theorist am I? I’m what I call a “phenomenological theorist.” I sit with an interesting physical or biological phenomenon for as long as I have to, in order to come up with a simple enough model to do phenomenological theory. Not computational, not proving theorems, but trying to understand a phenomenon. So, it’s paper-and-pencil theory. You’re not predicting things to many decimal places. You’re essentially predicting whether something will happen or not, and giving a reason for why it happens or not. It turns out that Avi, as I call him — Israelis know him as Avinoam, but many people call him Avi too, and that’s what I call him — he was just that kind of theorist too. That’s why we hit it off so well! In our very first collaboration we developed a nice simple theory for understanding the elastic constants of simple liquid crystals — twist, bend, and splay elastic moduli. And we realized: okay, now we just have to evaluate these many-dimensional integrals that come up, and so on. We weren’t doing computational theory, but even with “paper-and-pencil theory” you always get to a point where you have to numerically diagonalize some matrix that arises, or evaluate some integral numerically, and we realized [laughs] that neither of us could do it. So, we were going to have to start having students who are interested in and able to do phenomenological theory but who can also do computation the way we can’t!
Again, we didn’t complement each other with different skills, as you would have thought might be the basis for a good collaboration. In personal relationships, you can appreciate that it’s good for partners to have complementary temperaments (say, one more emotional, one less so), so that when showing emotion is the natural way to make progress on a problem, one person in the couple can do it, and when not reacting too quickly is the right way to go, the other one can contribute. And similarly with computational and mathematical prowess complementing one another. But in our case we brought many of the same strengths and weaknesses to our collaboration, and – because we were having such a seriously good time working together – it became a high priority for each of us that we co-evolve in our research interests and become better and closer friends, as indeed we have done these past 40 years.
Bill, was your interest in viruses — did it happen dramatically, or did you see a natural progression of the things that you were working on, specifically because you saw these connections between soft-matter physics, stat mech and biophysics?
Another really great question. I think you understand how in the 1980s and ’90s, it was natural for me to go from liquid crystals to micellar solutions. In fact, in our work on liquid crystals, we had asked: what if [walks over to a big wood spherocylinder model at the other end of his office] — [laughs] by the way, this for me is a liquid crystal forming particle, either as a colloidal rod or a rod-shaped molecule, like two fused or conjugated benzene rings that are sort of planar. I didn’t look at any atomic detail, because I wanted a phenomenological theory. I wanted a model to simply characterize the anisotropy of the particle, independent of whether it was a 300-nanometer long colloidal rod made up of thousands of proteins like a tobacco mosaic virus particle (Okay?), or a single biphenyl molecule. One way or the other it’s just a rod-like/pencil-like particle or molecule.
I didn’t want to be concerned with any atomic detail, but only with getting the symmetry right. In fact, liquid-crystal-forming particles don’t generally have rod-like symmetry. They look more like a book. Because liquid-crystal-forming molecules often involve two conjugated benzene rings, they’re planar things. They’re not sticklike things but rather look more like this (picks up a book). This thing has a long axis: but it has a medium axis and a short axis. It is not a uniaxial particle, but rather a biaxial particle. Most liquid-crystal-forming molecules and particles have lower than uniaxial symmetry, but they form orientationally ordered phases where only the long axis is ordered. They form macroscopic phases with uniaxial symmetry. We began to ask: why don’t you have biaxial liquid crystalline phases? Okay? There was no example of it in any neat liquid crystal or colloidal liquid crystal. We predicted that as you cool the uniaxial liquid crystal phase of biaxial particles, or crowd it further, you should get a biaxial phase and then after that get the full crystalline phase. But it seemed that these systems froze before they ordered the other axis, so that they never exhibit biaxial ordering as a liquid. But I remembered that Al Saupe, another pioneer (like de Gennes) in the modern era of liquid crystals, had pointed out to me at a recent Liquid Crystals Gordon Conference that solutions of micelles — self-assembling aggregates of many surfactant molecules — where the micelles had biaxial symmetry, actually did form biaxial phases.
Now, this torocylinder I’ve just picked up is a uniaxial particle – like the spherocylinder I was holding a moment ago, but it’s oblate. Most things are in between and are biaxially symmetric. So, Avi and I found it natural to ask: what’s different about a self-assembling colloidal system? Why should it show a biaxial liquid crystal phase? We realized: ah, there’s a new degree of freedom, involving self-assembly, and we started working as early as his first sabbatical year at UCLA (1980) on how the size of self-assembled micelles would change when they spontaneously ordered. So, you have a new phenomenon, a coupling between an association/aggregation/microphase-separation phenomenon and a long-range-orientational ordering phenomenon. That got us started thinking about micelles and we began working on statistical mechanical theories of how they form, and about self-assembling phospholipid bilayers, liposomes that form from lipids and so on. And that had us thinking about cell membranes, which are phospholipid bilayers. We started thinking about how polymers interact with micelles, with colloidal particles, and so on.
So by the 1990s, we were looking at how polymers and colloidal particles interact. We didn’t know any biology, but knew enough to know that life, a cell, is just a concentrated suspension of polymers and colloidal particles. Without exception. Proteins, nucleic acids – whether you think about them as polymers or not (they are, of course) – along with macromolecular assemblies and intracellular compartments, are colloidal particles. But proteins also self-organize into very special folded states that account for their unique enzymatic and structural functions. So, in the late 1990s, friends whom I’d become acquainted with in the statistical mechanics theory community were trying to pull me into protein folding. They said: this is the big, hot problem in biophysics and biology. But two things turned me off: one, that it was the big, hot problem, and I didn’t want to start working on something that was a big, hot problem, where you don’t have enough time to really think about what you’re doing. And also, it was clearly a computational problem. I didn’t see interesting phenomenological theory to do; what was possible there had already been done.
So, I saw a sabbatical coming up in 1999, and I thought: this is when I want to start thinking about what might be an interesting way to move into biology. I was already working on biomolecules as polyelectrolytes, stiff, charged polymers, and thinking in particular about unique, exotic, fundamental, statistical mechanical phenomena associated with molecules like DNA, which even though they’re highly charged, appear to attract each other and undergo a condensation transition if you add polyvalent cations. And we realized right away — so did many others in the field — that this is due to a breakdown of mean field theory for treating electrolyte solutions. The Poisson–Boltzmann theory and the Debye-Hückel theory are the classical theories, serving largely as the only theories of electrolyte and polyelectrolyte solutions. They are mean field approximations. And it turns out that with them you’re neglecting correlations between counterions that can give rise to exotic phenomena like highly charged colloidal particles attracting each other. So, I thought: maybe this is a way to get into biology, along with some friends working on similar problems, like Phil Pincus (who, after leaving UCLA to work in the 1980s at the Exxon Clinton lab, was back in Southern California at UC Santa Barbara). Have you interviewed him for this archive series? The only interview I’ve read of yours so far is the one with Dave Pine, because of your sending it to me to familiarize myself with the interviews, and because I was happy to learn more about Dave. We, of course, knew each other from the ’80s and our working in these areas. I’m sure that Phil Pincus was one of the reasons he went to Santa Barbara. And of course, Phil Pincus was a major player making Exxon “the place” in complex fluids in the 1980s.
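The mean-field theories named here can be summarized in one line (standard notation, supplied for clarity): the Poisson-Boltzmann equation and its Debye-Hückel linearization, which yields a screened Coulomb potential.

```latex
% Poisson-Boltzmann (mean-field) equation for the potential phi, and its
% Debye-Hueckel linearization, valid when z_i e phi << k_B T:
\nabla^{2}\phi
  = -\frac{e}{\varepsilon}\sum_{i} z_{i}\, n_{i}^{0}\, e^{-z_{i} e \phi / k_{B} T}
\;\longrightarrow\;
\nabla^{2}\phi = \kappa^{2}\phi,
\qquad
\phi(r) = \frac{Ze}{4\pi\varepsilon}\,\frac{e^{-\kappa r}}{r},
\qquad
\kappa^{2} = \frac{e^{2}}{\varepsilon k_{B} T}\sum_{i} n_{i}^{0} z_{i}^{2}.
```

Both treatments average over counterion configurations; the neglected counterion correlations are what permit the like-charge attraction described above.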
So it was with Phil Pincus and a biophysicist at the NIH, Adrian Parsegian, who was also thinking about electrostatic effects in biological systems, that we organized a many-month Santa Barbara Institute for Theoretical Physics (ITP) – not yet the Kavli ITP at that time – workshop on electrostatics in biology, and with whom (along with Robin Bruinsma) we wrote a Physics Today feature on electrostatics in biology. We thought we’d hang out with people interested in learning more about electrostatics in biology, and that’s how we would move into biology.
To zoom out in terms of your broader motivations, how much of it was just, “This is the exciting science to pursue?” and how much of it was, “There’s obvious human social benefit to this research”?
I think that if your motivation for working on basic science, or for that matter, translational science, is that you find it deeply satisfying, it’s going to be good for the world. It’s okay. There’s going to be at least one more happy person (you). That’s always good for the world. And most reasonable people would agree that, in the case of translational medicine and applied science, the contributions of research have improved people’s lives enormously. But I am convinced that the same is true for basic science as well, including pure mathematics, i.e., that people are better off because of our slow-but-sure, never-ending progress in fundamental understanding. It’s the natural world – not people (as in the case of all other intellectual pursuits) – that is the object of our study, the natural world that exists independently of us humans. This is true even in biology, where we’re studying life. Life existed before humans, and may well go on existing after the human species is gone. But humans didn’t have to be part of life. Life on Earth could have ended with bacteria. It could have ended with whatever. So, when we study molecular and cell biology, we’re not studying human behavior or humans. We’re studying life on a molecular level, which has translational medicine consequences.
Okay. So, to answer your question: remember, I told you that I had to leave photophysics and jump onto a new and steep learning curve, so I started thinking about statistical mechanics. Then, I had to leave simple fluids to think about how you develop basic phenomenological theories of complex fluids. How do you look at phenomena in general that, until now, have only been treated descriptively? I’m very uncomfortable with the decades-long NIH stance that you need hypothesis-driven research, and that phenomenological work should be discouraged. People too often insist you need a hypothesis that you’re testing. If you’re trying to understand fundamental phenomena with simple theories and the right experiments, you’re not testing a specific hypothesis – rather, you’re testing a simple idea. But things have gone really out of whack when what the culture of modern molecular biology all-too-often considers to be a good proposal — a good research project – is one that is necessarily hypothesis-driven. For example: “We are going to test the hypothesis that a particular point mutation in a particular protein that’s involved in a particular signaling pathway has the following particular effect on that signaling pathway.” I don’t think that’s the only or best way to do research. I’m sorry. I prefer asking: are there general phenomenological features of signaling pathways that we can understand by comparing very different, or apparently superficially different, signaling pathways? And so on.
So, that’s how I wanted to start working on more complex systems, ones that were inspired by biological processes. I wasn’t thinking about working in biology, but working on biologically-inspired physics. Today, as the zealous convert to, so to speak, “real” biology problems, I have limited patience with this approach, because it often isn’t addressing a biological problem or question. It’s addressing a physical problem that a physicist wouldn’t have been thinking about if she or he hadn’t bothered to learn about some biological systems and processes. In the 1990s though, I was thinking about biologically-inspired physics. And a friend – with whom I had worked on some self-assembly problems, but who had been trained as a biologist and was, in fact, an expert on liposomes — when he saw what I was doing with DNA condensation, he made a connection to viruses that really caught my attention. Let’s see — do I have it here? [Laughs, and goes to another part of his office to pick up another model.] This is double-stranded DNA, strongly attracting itself. You can see that it is rubber tubing wound into a toroidal structure but, locally, it is hexagonally close packed. This is how a stiff linear polymer gets as close to itself as it can. It can’t do this to get close to itself [takes his flexible belt and crumples it into a tiny, densely packed ball in his hands], because a polymer has to be flexible to do that. That’s how a flexible polymer undergoes the coil-globule transition, or undergoes a first-order collapse, a condensation transition. But if you take a stiff linear polymer like DNA, you’re not going to find it bending into a radius of curvature much smaller than the persistence length of that stiff polymer.
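The stiffness argument here can be made quantitative with the worm-like-chain bending energy, E = (k_BT · ℓ_p / 2) · L / R². The numbers below are illustrative assumptions for a phage-scale genome, not figures from the interview:

```python
# Worm-like-chain bending energy of a stiff polymer bent at radius R:
#   E_bend = (kT * l_p / 2) * L / R**2   (lengths in nm, energy in units of kT)
# Illustrative assumed values: a ~16 kbp genome (~0.34 nm per base pair, so
# ~5400 nm of double-stranded DNA) wound at an average radius of 20 nm.

def bending_energy_kT(contour_length_nm, persistence_length_nm, radius_nm):
    """Bending energy, in units of kT, of a worm-like chain of the given
    contour length bent uniformly at the given radius of curvature."""
    return 0.5 * persistence_length_nm * contour_length_nm / radius_nm**2

l_p = 50.0   # DNA persistence length, ~50 nm
L = 5400.0   # assumed contour length of a ~16 kbp genome
R = 20.0     # assumed radius of curvature inside a small capsid

E = bending_energy_kT(L, l_p, R)
print(f"bending energy ~ {E:.0f} kT")  # hundreds of kT, before even counting
                                       # electrostatic self-repulsion: hence
                                       # the need for a strong motor protein
```

This is only the elastic part of the cost; the entropy loss and self-repulsion he mentions add to it.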
My students and friends, as a 70th birthday gift, generously commissioned an artist to sculpt my hands, holding my belt, demonstrating something about polymer physics, because of the uncountable number of times I did that over the years. [Shows the sculpture, in the center of his conference table.] Okay. So, my friend, Gary Fujii, with whom I’d collaborated on self-assembly problems and liposome formation and so on, said: you know, your condensed DNA is pretty much what DNA looks like in a viral capsid. In order to take up as little space as possible, it has to organize itself in a way that has it as close to itself as possible. But I knew that the only way to get this compact toroidal DNA structure spontaneously is by having a high enough concentration of polyvalent cations or a bad enough solvent. I knew you don’t have that in the cell, so how on earth is the DNA going to give up all its entropy, and in fact, get a much higher energy because it’s crowded on itself? It’s repelling itself in the absence of polyvalent cations. It’s being bent more strongly than it likes. I asked him: who, or what, does all the work of taking self-repelling, hard-to-bend DNA and getting it into such a small volume, so that it’s organized this way? He said: well, there are proteins which are gene products of the virus, that are really strong motor proteins, and they stuff the DNA into a preformed capsid. To which I responded: but that would require the exertion of really large forces (by the motor protein) and the building up of really high pressures (in the capsid). Well, there you go, Bill, he said: Why aren’t you thinking about these things? It’s an important problem in biology.
So, when I organized the 1998-1999 ITP workshop, I was really thinking: during my sabbatical, I want to learn about viruses. How do they do this (package their DNA genome into such a small volume)? That’s a purely physical phenomenon, and yet a virus is a purely biological object. And then I thought: well, there’s no such thing as a purely biological object. Living systems obey the laws of physics and chemistry. It’s also a physical object. So, how do you begin to understand a virus as a physical object? That’s when I started working with Avi Ben-Shaul and our students Shelly Tzlil and James Kindt on calculation of the forces and pressures involved in the packaging of DNA viral genomes, and with Robijn Bruinsma, Joe Rudnick, and Roya Zandi on the self-assembly of viral capsids and the origin of their icosahedral symmetry.
Bill, how important was the genome mapping project that got underway in the mid-1990s? Was that a — nothing? Was it not relevant?
No. I don’t think it’s relevant to a lot of work people are doing.
Ultimately, of course, it will help us understand all kinds of things. But you hear people proclaiming, “Now that we have sequenced the human genome, we can do it easily for individual patients, we can have individualized medicine, we can treat cancer” — no. We don’t understand cancer. We don’t understand gene regulation. So, of course it’s helpful to know what the genes are [laughs]. Okay? But if you don’t know why some genes are turned on some times, in some cells, and not other times, or in other cells, then the sequencing of the human genome helps only to a limited extent in medicine, and doesn’t necessarily help us understand fundamental phenomena in biology. It’s a perfect example of how a technological advance will be very important for understanding things better in the future. Just the way, when Fourier analysis was worked out two centuries ago, nobody could have realized that it would be the basis for electrical engineering theory and the transmission of electromagnetic signals, and sound, and so on. But clearly that pure mathematics was instrumental. And then you have all kinds of pure mathematics being important for computer science, and so on.
So, the answer to your question is that all the details from the human genome project that are in principle relevant for understanding and treating viruses were not important to me when I started research on viruses, and are not even important now, because I’m still asking phenomenological questions. The fact that we could sequence the coronavirus genome so quickly, on the other hand, is obviously important. We needed to know the sequence to design a viral antigen for vaccines. But there’s a reason why you can sequence a viral genome more easily than a human genome. It’s tens to hundreds of thousands of times shorter! And of course, that’s the real reason that I started working on viruses rather than any other evolving organism. I learned that a virus can have as few as this many genes (raises two fingers)— two genes. A bacterium has thousands of genes. We have tens of thousands of genes. Earthworms and tomatoes have as many genes as we have. Okay? Tens of thousands of genes lead to hundreds of thousands of protein gene products. You have hundreds of thousands of molecules interacting with each other. It’s hard to understand life in any simple terms if you insist on confronting all of that complexity. A virus, on the other hand, can have just two genes! We work on a virus that has (almost, just) two genes. I cannot help but work on that virus, because it answers the question that I posed to myself and started emphasizing from the beginning of my research on viruses: I want to know, what is “the hydrogen atom” of viruses, i.e., how simple can a virus be and still be an independent virus? Can I work on a virus that’s simple enough so that I can answer questions about it that I can’t answer directly about other viruses, but that will help me understand other viral life cycles? Consider the fact that the hydrogen atom is “the hydrogen atom” of atomic and even molecular structure. 
In other words, if we understand the quantum mechanics of the electron in the hydrogen atom – i.e., one-electron orbitals – we understand almost everything about the quantum mechanics of electrons in many-electron atoms and even molecules. Think about the fact that we describe the structure of many-electron systems, to a very good approximation, by the “build-up principle” which involves assigning successive electrons, two at a time, to successively higher-energy one-electron orbitals. I mean, this is why we spend so much time teaching the quantum mechanics of the hydrogen atom.
And for molecules — okay, you go to H2+. It’s a one-electron system that leads to the molecular orbitals, still described by a one-electron approximation, that allow us to simplify – and hence understand – the electronic structure of many-atom/many-electron molecules. But let’s get back to your question. So, subconsciously, I was beginning to realize that viruses, because they have so few genes, can have an equally short “parts list”, as small as two: a capsid protein and a genome inside. The next step up, for evolving things, is a one-celled organism. But there you’ve already gone to something with thousands of genes and with tens of thousands of parts that have to be labeled, as we did dutifully in school. Actually, I didn’t do it. I refused to do it. Homework for tomorrow: label every part of this cell and include at least 30 labels. 30 out of hundreds of thousands?! Okay? In contrast, you have a virus particle, which consists of nothing other than a genome molecule and the protein making up its protective shell (capsid). In most cases you have exactly 180 copies of the capsid protein — not 181, not 179 — 180 copies of one protein making up the shell. The shell has perfect, icosahedral, symmetry. You have one molecule inside. I can deal with that very nicely. So, that was my ticket to biology, and it’s still what I hold onto. This takes us back to all the vaccine platform work we’re doing. We basically use as our vaccine particle, or our gene delivery particle, a reconstituted-from-scratch (from purified components) virus-like particle. It’s not infectious. It doesn’t have a viral genome inside. It has a therapeutic RNA inside.
And when does Charles Knobler enter the picture?
Yup. About 20 years ago, when Avi Ben-Shaul and I made our first prediction of the forces that have to be exerted, and of the pressures that are built up by the motor protein of the virus, when it packages its DNA into a preformed capsid, we were doing purely phenomenological statistical physics calculations, but we came up with numbers, like 50 atmospheres pressure in the viral capsid. That’s more than 10 times the pressure you have in a correctly inflated car tire — and would spectacularly blow out a car tire. And the car tire is thick rubber reinforced with steel belts and so on. So, that’s another physics problem. How can a nanoshell, like a virus particle, withstand that pressure when there’s no steel reinforcement, no thick rubber? There aren’t even chemical bonds connecting the proteins that make up the single-molecule-thick shell. Well, you can answer that question with physics too, but that’s a separate thing.
Where does Chuck Knobler come in? I started giving talks around the world about this prediction of capsid pressure that I had made with Avi and our students, and designed an experiment for measuring the pressure. It was the first time that I had to design an experiment to test my theory, because previously I was surrounded by spectroscopists, polymer chemists, soft-matter physicists, liquid crystal scientists — whatever — physical scientists who could perfectly well design a physics experiment to test a physics prediction. But here I was making a prediction about pressure and force involving viruses, and virologists don’t think about viruses in terms of physical properties such as pressure and force.
But biophysicists do.
Yes! So, I went to the biophysicists. But 20 years ago, while they were very adept at and proud of using laser tweezers to pull on DNA and proteins, for example, they weren’t working on viruses. They just ordered lambda (bacteriophage) DNA to study the physical properties of double-stranded DNA, or ordered some well-characterized protein to study its protein folding, or how it denatures when you pull it. I couldn’t get a biophysicist to work on a virus, and I couldn’t get virologists to work on a physical measurement. So, I realized that I would have to find someone willing to try this experiment in the lab. In a talk I gave in Sweden, a postdoc there who knew my work from complex fluids came to my talk in spite of the fact that I was discussing the pressure in a virus. He, Alex Evilevich, was excited about it, and told me: I’d like to come to your lab and try this experiment you described — especially because I had mentioned that nobody was measuring this. And I said: great, but I don’t have a lab. Maybe we can set you up in the lab of a biologist in my department who works with bacteria — because we were going to do the experiment with a bacterial virus. And so that person would know how to deal with this virus. Or, maybe I can find someone working directly on this bacterial virus. But I also thought: maybe I can go to a physics or a physical chemistry friend who is retiring, who is willing to try this experiment, if we have a student like this postdoc coming, and we’ll learn together how to work with proteins, DNA, viruses, and so on.
So, Chuck Knobler was thinking of retiring. He was then in his mid-60s. He had been working for 10-15 years on two-dimensional (2D) systems, adsorbed monolayers, looking at phase transitions in them. For 10-15 years before that he had experimented with critical phenomena in polymer solutions, and for 10-15 years before that he had measured virial coefficients of simple fluids. So around 2001 he was talking, as a friend and colleague in the department, about how it might be too late for him to start a new 15-year project. But he wasn’t quite ready to quit. What should he do? So, I had the answer for him. You’ll start a new 15-year project, but you won’t start it alone. And what do you think of this idea – measuring the pressure in a virus? He was excited about it. Without a doubt I would have had a hard time getting lab space in the department, because every department is short of lab space. People had encouraged me, from the time I first joined the UCLA department, to do whatever I want to do, including the crazy idea I had in 2002 of trying a biology experiment. But they weren’t about to give me lab space. It would have meant taking it away from colleagues who were already doing experimental work and who were doing a beautiful job of it. But Chuck had a lab. So, we gutted it. He gave his lab equipment to the last postdocs he had who were going on to faculty positions and would be looking at 2D phase transitions and monolayers, and so on. We had an empty lab, and so over the next several years, one experiment at a time, starting completely from scratch – neither of us had ever handled a biological system – we built up a molecular biology lab. Each new student who joined us was either trained in molecular biology or was ready to learn in the lab of a molecular biologist who was willing to help train them.
We were extraordinarily lucky in finding extremely generous biologist colleagues cross-town at Caltech (Jim and Ellen Strauss), here at UCLA (Dick Dickerson, Sabeeha Merchant, Arnie Berk, and Jim Gober), and at Scripps (Jack Johnson) a couple hours down the road in La Jolla. And we had wonderful cooperation from them. They basically said: you have an exciting idea, you don’t have a lab or the expertise, so send us a student and we’ll set them up doing the experiment in our lab. When they’re ready to come back and reproduce it in your lab, they will. And then they did just that. They came back and taught other students how to do it. It took 10 years before we didn’t need help from other labs to learn how to express proteins, how to handle RNA, and basically to start doing molecular and cell biology with the simple viruses we were working on. And ever since then, we’ve attracted more and more molecular biology students who know that we can’t teach them anything in the lab. We can just give them a problem they’re excited to work on and provide the resources they’ll need, and encouragement.
Bill, what price have you paid, or what catch-up have you had to do, since you previously and so studiously avoided organic chemistry?
I avoided organic chemistry and all of biology. So, I’m learning whatever I need to know. Organic chemistry isn’t so important for us (for the moment, at least – if and when it is, I’ll struggle to learn it!), but instead molecular biology, biochemistry, cell biology, immunology, and all the techniques that go along with them. They’re of course crucial, and I’m learning them with a vengeance.
I’m the zealous convert. It’s all I want to learn about. Going back in (my personal) history, look at the book I’ve just picked up.
The title is Functions of a Complex Variable. Inside, two people have written in it: one is my father, who wrote “A. Gelbart, November 29, 1943.” But the other signature, so to speak, is this [shows an illegible scribble] and the broken binding. Obviously, as an infant, I was playing with this book.
I saw my father had written in it, so I signed it as well [laughs].
That’s awesome [laughs].
And you know, it’s one of the few books I have left that isn’t a hardcore biology book. I used to love to be surrounded by physics books, math books. I’ve given them away to people who come in the office and exclaim: Oh, you have this classic. Then I say: take it. In fact, take any — all. The only math and physics books I’ve kept are the ones with very personal significance.
That’s great. Bill, a technical question: what were some of the challenges in attempting to visualize long RNA molecules?
Yeah. Okay. It’s interesting that at journal club, in a couple of hours, we will be talking about the 2012 paper that we published on visualizing long RNA molecules in solution. It contains the first, and still among the few, electron micrographs of long RNA molecules. I’ll tell you what I mean by “long” in a moment. Long RNA molecules, in their native state in solution. Normally, when you do electron microscopy, you put what you want to visualize on a grid – for example, you put virus particles on a grid. You put double-stranded DNA on a grid. The double-stranded structure of DNA is so robust that when the DNA molecule is interacting with a grid, it doesn’t disrupt the structure of the molecule. And a viral capsid is similarly such a robust structure. You put it on a grid, so that you have a nice focal plane, so to speak, for your electrons, and you stop the particles from moving. But they maintain their structure. It’s much harder, but you can in principle also put a small RNA molecule on a grid — 100 nucleotides or less in length, say, that folds like a protein into a particular secondary, tertiary structure and is robust. But the situation is very different for a long RNA molecule, say one that’s long enough to have at least two genes of information in it. Long enough to be a viral genome. It has to have at least two genes, so it’s already many thousands of nucleotides long.
We had shown in earlier work with Avi Ben-Shaul, before our 2012 visualization paper, that there is no unique folded (secondary/tertiary) structure of such a long RNA molecule. It’s a statistical object that has to be represented by lots of structures. And the challenge was to characterize that ensemble without disturbing it. Just like a flexible polymer doesn’t look like this, or that (illustrates different conformations of his flexible belt). It can only be described by an ensemble of all possible structures, and you give statistical weights to different ones. Same thing with the secondary and tertiary structures of a long RNA.
So, you have to avoid putting the molecule on a grid that will disrupt its structure. You have to do cryo EM. You have to quickly vitrify your aqueous solution so that water hasn’t had a chance to freeze. You just have to stop all motion. If the water froze it would blow apart the delicate secondary-tertiary structure of the RNA molecules, just as when water freezes it can break a pipe. So, you just have to stop all motion, and you have to do it away from surfaces so that the RNA molecule doesn’t have its structure affected by a substrate or interface. It took us many years to visualize long RNA, namely viral RNA genomes, in their native state in solution. Expert cryoelectron microscopists could have done it before us. We weren’t experts. We were learning cryo EM in order to visualize RNA. Why didn’t other people do it first? Because, I would say, they weren’t thinking physically about long RNA. They weren’t thinking that the ensemble of 3D structures of viral RNA is important for understanding the life cycle of an RNA virus. They’re focused on knowing the genome sequence of the virus. On knowing what the biochemistry is, what the evolution is, and so on. They weren’t thinking you had to know physical properties of it. If you ask an RNA virologist “How big is the genome of the virus you work on?” – Dengue, HIV, coronavirus, or whatever – they’re likely to tell you “It’s so-and-so many nucleotides long.” “No, how big is it as a physical object, the RNA genome?” “What are you talking about? Who cares about it as a physical object?” “I care about it.” “Why?” “Because the genome as a physical object affects how it gets packaged into the viral particle, how infectious it is, and so on.” That’s something that physical virologists are finally beginning to make people cognizant of. Back to visualizing RNA.
We thought it was important for people to know what RNA looks like before it’s inside a capsid, and how its size and shape affect the robustness and the spontaneity of the packaging of that RNA molecule into a perfect capsid by the capsid protein you’ve added. Coincidentally, we’re revisiting that paper in the group after, what is it, almost 10 years, because we’re just about to begin a serious new cryo EM stage, asking a new set of physical questions about viruses.
Bill, describe to me the intellectual leap between RNA gene expression and the idea that you can successfully deliver cancer vaccines.
Okay. That’s a really good question, and a really big one. It’s a fundamental part of our big vision for gene delivery. You’ll get the point right away when I remind you that viruses have evolved to do one thing and one thing only. If you asked, “What’s the purpose of our lives?” Well, people have many different answers. In any case, you get into the philosophical and hugely important question of: look, we’re here, we have to give meaning to our lives. Or you can choose not to ask why we’re here: We’re simply a natural product of evolution, let’s deal with it. Okay. Back to the virus. A virus doesn’t have a life. It’s not alive. It has no metabolism. It doesn’t need to find food. A virus, when it’s not in its host cell, is an inanimate object, no less so than this pencil. That’s why when you put it into water, it’ll diffuse around just like a dust particle or polystyrene sphere, say, but it’s smaller. Okay? It’s a physical object. Similarly, in the body, until it’s in the host cell, in a particular host cell, it’s just a physical object diffusing around. It doesn’t do anything. But once it’s in its host cell, all hell breaks loose: it gets replicated by the host cell. Within hours, you have thousands of particles from one. You have thousands of copies of this inanimate physical object, each of which doesn’t do anything until it (again) gets replicated in a host cell.
So, what’s the challenge of a virus? It has to protect its genes and get them into the right host cell. It wouldn’t be here — viruses would not be afflicting us and wouldn’t be the huge part of life that they are — if they hadn’t found, impressively successfully, how to protect and deliver their genes to the right cells. So, you don’t need to wait for a lightbulb to go off. If you’re interested in gene delivery — and RNA and DNA vaccines are a part of gene therapy, making someone healthier by getting certain genes to certain (antigen-presenting) cells, in the form of vaccine antigen genes – you have to look to viruses for guidance.
Gene therapy is also used to get a therapeutic gene to a cell that’s in bad shape because it’s not expressing enough of a protein that is important for the health of the cell, or to get a cell-killing gene to a cell you want to kill. These are all examples of getting a certain gene to a certain cell. That’s what viruses do. So, how can you think about gene delivery of any kind without learning from viruses? That’s what we do, and because mRNA viruses are the simplest — for basic molecular biological reasons I could talk to you for hours (days…) about — why not learn from the simplest viruses about how to get genes to particular cells? Why from the simplest? Because the simplest ones can be reconstituted from purified components in a test tube. Or noninfectious forms of them can be reconstituted from scratch. More explicitly, we get rid of the viral genome. We replace it with an mRNA that codes for a viral antigen, like the spike protein of SARS-2, and we get it spontaneously packaged into this protective protein shell. We’ve learned from the virus. The virus has one of its few gene products, the capsid protein, that has evolved to do that one thing, to package RNA of a certain length into this perfect shell and to make that RNA accessible to a certain cell that you get that particle into. The virus has to get itself into the right cell. We can take the virus-like particle that we reconstitute and put an antibody on the outside of it that has a high affinity for a membrane protein in the target cell. It’ll get the particle to the right cell, so we can get it to an antigen-presenting cell or to a cancer cell, or to a cell we want to get a therapeutic protein to. That’s the big picture, and that’s why we use mRNA viruses in particular.
It’s an alternative to the adenovirus approach where you’re using the viral DNA genome to deliver genes and where the vaccine particles are synthesized in and purified from mammalian cells instead of being reconstituted from purified components in a test tube.
In your work on dissecting the viruses to understand virus infectivity, is that a literal use of the word “dissection”? Are you actually breaking the virus down into its component parts?
Yeah. We literally take it apart. This is the virus particle (picks up a lego-like model). There are 180 copies of this little tile here. They’re all the same. They’re colored differently here, even though they’re all the same. The groupings of five proteins are colored yellow. The groupings of six are colored green. You have 12 of the pentamers. That’s 60 proteins. You have 20 of the hexamers. That’s 120. So, you have 180 protein copies, and one molecule of RNA inside. All we have to do — we’re not really dissecting it – is disassemble/“dissociate” it. We just put it into a high salt solution. It turns out that the proteins no longer interact laterally with each other to form a capsid, nor do they interact with the RNA, under high-salt conditions, for physical (electrostatic screening) reasons. And now we have protein by itself, and RNA by itself. There are standard ways to separate the two. So, you’ve got purified capsid protein and purified RNA, and then you can ask: under what solution conditions will they spontaneously reform the virus particle? We and others before us have studied that physical process for many years. And in that way we’re learning about how a virus particle forms in the host cell. In principle, you learn how to interfere with that process. So many things are attributed to Feynman, or to Einstein, or George Bernard Shaw, or Winston Churchill. They were deep and clear-thinking people who were also good with words. Right? And you can get people’s attention by saying, “As Feynman, or Einstein, said”…. But I believe it was Feynman in this case, who said: “You want to understand a radio, or anything? Take it apart, and try to put it back together again.” Okay?
It applies to the virus particles, because we can do it. I (or, rather, my research students) can put a virus particle together more simply than I (they) can put a radio together.
What has been the value in working to classify long RNA molecules as branched polymers? Is that just because we need to understand them, according to their differences?
It turns out, as we showed in our 2012 RNA visualization paper where we compared a gene in RNA form to the same gene in DNA form, that the RNA is a small globular compact thing whereas the DNA is a big stiff linear thing taking up a lot of space. We wanted to know: why is this linear polymer (single-stranded RNA) so much more compact than another linear polymer (double-stranded DNA)? And the answer lies in the complementary base pairing that gives rise to RNA secondary structure — let’s go back to my belt here.
You have this linear polymer, RNA, taking up a lot of space. But what if you have a little string of nucleotides here, complementary to a string of nucleotides distant along the chain somewhere else? Then you’ll have a little double-stranded RNA there, and you might have the same thing happening here, and here. Notice how you’re gathering in the polymer because of this — it’s a weak interaction, hydrogen bonding that’s involved, base pairing between complementary nucleotides. But we wanted to show explicitly that the reason a gene in RNA form can take up much less space is because you’ve taken a linear object and essentially organized it as an effectively branched object. It was a way to understand how single-stranded RNA is so much more compact than double-stranded RNA, or DNA, which encodes exactly the same genetic information. It’s not something that a pedigree biologist will naturally think about. But I’m a “mutt”/mixed breed – part physical chemist, part theorist, and now part molecular biologist. I’m a scientist trying to understand viruses. I’m pursuing an interdisciplinary understanding of the components of a virus and how their physical properties affect their infectivity, life cycle, so on and so forth.
Bill, especially in the past 10 years, when has it been obvious that what you’re finding has therapeutic benefits, and who are your partners in moving this toward that end?
Okay. Well, we’ve sort of mentioned that. I would say it’s just in the last few years.
It started when the large German pharmaceutical company, Boehringer Ingelheim, approached us. We had published a paper in 2013 for which I hadn’t even written an invention report or filed a patent. They approached us, explaining: We think your work will help us deliver cancer vaccines, can we talk about it? And we had a great collaboration for a few years, up to the pandemic, and we’re starting again now. They are specifically interested in the power of our in vitro reconstituted mRNA-containing virus-like particles as an alternative to the less stable, less well-characterized, and less targetable liposomes and lipid nanoparticle delivery systems used in the Moderna and Pfizer vaccines and most other mRNA gene therapeutics. So, we are working with them on cancer therapeutics and cancer vaccines using what we understand, what we’ve been talking about. And, as mentioned earlier, with an immunologist (Dr. Otto Yang) in the school of medicine here we’re working on a Covid-19 vaccine, but we’re also beginning to pursue with him an idea in cancer immunotherapy that has nothing to do with vaccinating, but rather with transforming killer T-cells in a patient so that their cancer will be cleared by the killer T-cells that they’re missing. They are immunocompromised, so we’re trying to compensate for their immune system not being able to deal with their cancer. And there’s a longer list of translational medicine projects we’re working on, but I have to stop myself. Perhaps, though, as a final example of targeting therapeutic RNA with in vitro reconstituted and functionalized virus-like particles I’ll mention our work with a team at Cedars-Sinai Medical Center, also using our self-replicating mRNA. The self-replication aspect of mRNA is something we haven’t even talked about, and it’s what makes our mRNA different from the Moderna and Pfizer mRNAs. It’s self-amplifying, so you’re effectively delivering more and more antigen to antigen-presenting cells. 
We’re working with the Cedars-Sinai researchers on pancreatic cancer treatments involving self-amplifying microRNA. It’d take a little time to explain. But again it’s based on getting the right RNA to the right cells.
So, I think that for the duration of my life in lab research science, I’m going to be focusing more and more on these translational medicine applications, because on top of all the excitement of finally seeing the connection between all of science and a hard problem I’m working on, the fact that the applications of basic science — which always come, even if it’s hundreds of years later — can literally be next year, that’s exciting. It’s another profound level of excitement on top of the intellectual satisfaction of, as I said, feeling like a scientist for the first time because I don’t accept any label being put on me. And I don’t want to put a label on myself. I can just tell you why and what I’m trying to do and what I’m trying to understand, and for sure, it takes every ounce of everything I’ve learned from physics to chemistry, and to what I’m finally learning about biology.
[laughs] Bill, exactly on that point, now that we’ve worked up to the present, for the last part of our talk, I’d like to ask a few broadly retrospective questions about your career, then we’ll end looking to the future. So the first is: within this transition in your career, what are the things — if it can be divided into two parts, pre-1998 and post-1998 — what have been the key scientific concepts or things that you learned in chemical physics, in stat mech, in soft matter — that no matter what you’re working on in the world of virology and vaccines, always inform your world view?
So, there are two levels of answer. One, it’s the cumulative things I had come to understand, and especially from the work in the ’80s and ’90s on increasingly complex fluids. I would say that the quantum mechanics from the first 10 years of my professional life, looking at chemical physics and so on, hasn’t directly informed my post-1998 work. But that’s related to a second level of answer to your question. Another way you could have asked the question is: do you think that you’ll regret not having had that critical juncture come earlier? After all, you’re saying you’ve always loved science, but you love it so much more now, you’re that much more excited about it, do you regret not having moved sooner into biology? No. First, I don’t want to spend time regretting anything. I’ve been so lucky in my life on all levels. But the other thing is that I wouldn’t have been ready before. I needed those years learning (first about molecular quantum mechanics and then) about lots of complex systems. Biology, life, is complicated. And I needed to get to the point where I had the confidence as a scientist to work on problems that not only I’m not an expert on, but that I know essentially nothing about. My wife, most importantly, has been terrifically encouraging, even though it means my leaving earlier in the morning, coming home later, working later at night, waking up in the middle of the night, and in general not being willing to do much other than learn these new things I want to learn. I do the cooking, all the cooking, still. I love cooking [laughs] so that’s my contribution to the household. I wouldn’t have had the confidence (or the rashness) earlier to keep changing direction, to work with steeper and steeper learning curves. My wife, as a historian of science, points out that many of the breakthroughs in the history of science have been associated with outsiders in the field who aren’t burdened by having been trained too thoroughly in a field.
And as I like to say, I benefit from not having been trained too heavily [laughs] as a biologist. In other words, I’m not drowning in too many facts. I may be [laughs] suffering from not knowing enough facts, but certainly, I’m not hurt by knowing too many.
Bill, if it’s possible to quantify intellectual satisfaction, how do you compare a basic science discovery — just understanding how nature works at the molecular level — against all of the applied research that really literally leads to saving lives?
That’s a tough question.
In terms of the motivations, the getting up early, the going to bed late. Is it all mixed in for you?
Oh, for me, I would say overwhelmingly – yes, the two are deeply intermingled. So, it’s a personal thing.
I think people are making comparably important contributions. The basic scientist and the person who turns the basic science, however many years later, into important practical applications. So, it really comes down to how much satisfaction do those people get from what they’re doing? That’s a personal thing. I think that what’s driving me, always, is that I simply want to understand.
I have no illusions about coming out with a better vaccine during the pandemic than what we have. I’m excited that on more and more levels, on more and more fronts, I have tools for understanding better. You can understand in so many ways, with equations, simple models, low-tech experiments, high-tech experiments. You can approach a good, hard problem in so many ways. I’m just trying to understand better what happens to an mRNA molecule in the cytoplasm of different cells and how that results in good antigen presentation, good protein synthesis to kill a cell, or good protein synthesis to save a cell. So for me, it is overwhelmingly intellectual satisfaction. I don’t apologize for it. It’s a selfish thing perhaps. I’m doing this to feel better [laughs], to have the deep intellectual satisfaction that (for me) only science can bring. But I think it’s clear that a lot of good for the world comes out of our understanding things better. You know what Jacques Monod — do you know the name Monod?
He wasn’t just a Nobel Prize-winning biologist. He was, I think, one of a small handful of scientists who bring honor to the Nobel Prize rather than being honored by it. He was one of the great molecular biologists, and he was also one of the great thinkers. He wrote a fascinating book critically exploring the question: What is life, how do you distinguish living from non-living things, and what do you learn from doing so? I think it’s much more interesting, say, than Schrödinger’s book What Is Life? Monod’s book is called Chance and Necessity. And his dying words were, simply, “I’m trying (searching) to understand.” That’s my mantra. I was going to say it’s how I fall asleep at night. But it’s how I don’t fall asleep at night [laughs].
[laughs] Yeah. Bill, you’ve made so many striking comments that indicate just how little we understand about so many of these things.
On that point though, to come at it from a different angle, what have you discovered or learned that really highlights how little we understand about that particular thing? What’s the starkest issue that comes to the forefront of your mind, to illustrate that point?
Because we have to stop in a minute – I have a meeting at 2:00, to which I’ll come a few minutes late – I’ll be brief and simply remark that the starkest “issue” illustrating how little we understand about science at the current forefront – and for me, personally – is the field of immunology. It’s the latest and most drastic change in research direction for me and, as we’ve been discussing off and on throughout the interview, it came about because of the pandemic and my switch from cancer vaccines to COVID-19 vaccines and to transformations of T cells and back to cancer biology. People have forever been looking for magic bullets for cancer. They have to be reminded how little we understand, even as we slowly but surely understand more and more all the time. We have monoclonal antibodies. We have individualized, personalized, genomics-based immunotherapies. But we don’t understand enough about how these things work. Similarly, we don’t understand enough about how the vaccines work that work so well. But let’s enjoy the fact that they do work to the extent that they do. And the fact that a hundred years from now we will have steadily figured out a lot more about how they work, and understand them better and better, leading to improved therapies – it’s the undeniable nature of science.
Bill, this has been an absolute delight, spending this time with you. I’m so glad we connected. Thank you so much for doing this.
Likewise, and thank you so much.