This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of William Bialek by David Zierler on August 25, October 8, October 16, October 23, October 28, November 10, December 2, 2020,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/46349
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview, William Bialek, John Archibald Wheeler/Battelle Professor of Physics at Princeton University and a member of the Lewis-Sigler Institute for Integrative Genomics, discusses his life and career. Bialek recounts his family’s Jewish heritage in Europe and their observance of Jewish ritual during his childhood in San Francisco. He connects his natural curiosity to early interests in math and scientific theory, which led to a formative high school experience learning about biophysics at UCSF. Bialek describes his undergraduate experience at UC Berkeley and surveys the allegiances and separations between the science departments. He explains how his interest in connecting theory to biophysics led him to John Hopfield, and why he decided to remain at Berkeley for graduate school under the direction of Alan Bearden. Bialek discusses his postdoctoral research in the Netherlands, where he found a strong tradition of biophysics research and developed a long-term collaboration with Rob de Ruyter. He explains his decision to pursue his second postdoctoral appointment at the ITP at UC Santa Barbara, where he found a multidisciplinary research environment that fit his interests. Bialek describes the opportunities that allowed him to return to Berkeley as a faculty member with joint appointments in Physics and Biophysics. He recounts the threat of not getting tenure in the Physics Department and explains his decision to leave Berkeley for a position at NEC in New Jersey. Bialek relates a conversation in which David Gross reassured him that he would remain connected to Princeton during his time at NEC. He describes the origins of NEC, how his research agenda fit within it, and the collaborations that led to the book Spikes. Bialek describes the eroding management culture at NEC and the opportunity he had to join Princeton’s faculty and the Lewis-Sigler Institute. He talks about his interests in signal processing and information flow in neuroscience, and he reflects on the opportunities that biophysics presents for the broader integration of theory and experiment. At the end of the interview, Bialek addresses some of the deep questions posed by neural networks, considers the notion of metaphysics, and explains why he sees his research accomplishments not as revolutionary but as part of a foundation for future discovery.
This is David Zierler, oral historian for the American Institute of Physics. It is August 25th, 2020. It’s my great pleasure to be here with Professor William Bialek. Bill, thank you so much for joining me today. So, Bill, first, to start off, tell me your title and institutional affiliations. And I put an “s” on affiliation because I know there’s more than one.
Yes, okay. So at Princeton University, I'm the John Archibald Wheeler/Battelle Professor of Physics, and a member of the Lewis-Sigler Institute for Integrative Genomics. And at the Graduate Center of the City University of New York, I'm a Visiting Presidential Professor. I spend part time there and enjoy—we live in New York. I enjoy spending time both places.
During these pandemic times, are you more in New Jersey or New York these days?
So, we chose to hunker down in New York City. In Princeton, we share our house with our son and his family, and we thought—well, I think it was better for all of us to be separated for a variety of reasons, not the least of which is that I find it difficult to get work done with a four-year-old running around.
[laugh] I have a four-year-old myself. I can relate. [laugh]
So you know the—you know. It’s nice to—we just saw them recently, so that’s a pleasure, but not for working.
So you're in New York now?
Yeah.
Okay, Bill, so let’s take it all the way back to the beginning. Let’s start first—tell me a little bit about your parents and where they're from.
Ah. So, my parents were born 100 kilometers apart from one another in what is now Poland. Depending on—it is the part of Poland that was Russia, before the Revolution and the First World War. They were born 1918, 1919. My father was actually born on the day the Treaty of Versailles was signed, so he always joked that he came by his confusion about whether he was Russian or Polish honestly. I guess in the morning, it was Russia, and in the—
[laugh]
—evening, it was Poland.
Did they meet in Europe, or did they meet here?
No. No, no, no, no. So, we're just getting started. Their parents—so the generation of my grandparents decided to leave as soon as possible after World War I. So my mother’s family came here to New York City. My father’s family went to Paris. They had siblings of my grandparents that did it the other way. So my mother had first cousins in Paris; my father had first cousins in New York.
Now, I'm sure there’s no shortage of reasons, but why specifically did both of your parents’ families leave?
The family is all Jewish, so life had been difficult for a long time. And also, my two grandfathers were a butcher and a tailor. And it was in that period that there were various waves of migration of Jews out of Eastern Europe. So they were in the one that happened right after World War I. So my father’s family settled in Paris, where he grew up, and my mother’s family settled in New York, where she grew up. My mother, her parents passed away when she was a teenager, late teenager. They tried to keep the business going, so she was a Kosher—my grandfather was a Kosher butcher in Brooklyn. They tried to keep the business going for a while. Things sort of fell apart.
Was your mother’s family observant? Were they Orthodox?
So my mother joked that they were observant for business reasons.
They were not open on Shabbos.
Well, you can’t be a Kosher butcher shop open on Shabbos. [laugh]
[laugh]
They kept a Kosher home. So my mother actually—so it’s interesting; my mother, as a result, had a very intimate knowledge of Jewish practice, although not a formal education.
Although for women, that was not uncommon.
Well, no. Right. So that would have been complicated, anyway. My father’s family—well, actually—okay, so let me draw this contrast and then get back to tell you my mother’s history. So my father’s family, their view was that they left all that behind when they moved from their small village in Poland to Paris. So they were observant in the way that, let’s say, modern secular Jews in the United States are observant. They observed the holidays. There was a lot of tradition. But they didn't keep a Kosher home. So my mother grew up in New York City. Life was difficult with the death of her parents.
And this was Brooklyn, where your mom grew up?
This is Brooklyn. Yes.
What neighborhood?
I would have to look it up. The problem is that the times when I've had all of that in front of me were times where I didn't really know my way around Brooklyn. [laugh] So I have to get it all lined up. It is something—so she did go—to set the time scale, she was among the first students to go to a junior high school. You used to go from grammar school to high school. And so she has this diploma, which we have.
And this is public school.
Public school. I don’t think I set foot in a private school until I gave a talk at Caltech when I was a postdoc.
[laugh]
So, yeah, she has this diploma from the first junior high school, which explains that she’s entitled to start in the second year of high school, right? Because the junior high school sort of overlapped—the last two years of grammar school and the first year of high school. And it is a document which competes in grandeur with my certificate of being a member of the National Academy of Sciences. Were I the framing type, I would put them next to each other. [laugh]
[laugh]
Because I think it means something. Anyhow, we can actually, from our window here in New York, we can actually see Ellis Island.
Oh, cool.
So it’s a reminder. So yeah, so she I think did not finish high school, although she hung around and took courses at the New School during interesting times. She went to work for the Soviet Purchasing Commission in Washington, D.C. to escape—I mean, life in New York had—the remnants of family life had become quite confining. She went to go work for the Soviet Purchasing Commission, which I think also aligned with her political views, and then joined the Women’s Army Auxiliary Corps. So that was her experience during the war, and eventually after the war moved to California, to Los Angeles, where she would meet my father. My father survived the war in Paris hiding in plain sight, as one says.
Can you clarify? What does that mean? Did he pass as a gentile?
Well, he lived in the apartment he lived in. He went to work at the job. So he was a radio engineer. The company that he worked for had clandestinely taken a contract with the Free French Signal Corps, which they continued to work on during the Nazi occupation, which was of course extremely dangerous. He of course was required to register. He wore a yellow star. He learned to carry his bookbag so that the yellow star was not so visible. We still have the stars. I once asked him about that, and “Why did you keep them?” He said, “Well, we can’t get rid of them.” The rest of his family was not so fortunate. And so when the war ended, he was alone. I mean, there were cousins and so on, but—so he set out to try and leave Europe, made his way first to New York where he had relatives, and then to California, where he met my mother. So they met in the early 1950s in Los Angeles, literally the other side of the world from where they were born not very far apart.
Was there a Jewish refugee/émigré community, particularly for your father, that they became a part of in Los Angeles?
I don’t think so. My father was very private. I think—so they would, after they married—I was born in Los Angeles, there was a little bit of shuffling around for a few years, and then we settled in San Francisco. My parents were older, because life had been complicated, right? So they were 40 by the time I was born, which was pretty unusual in those days. Forty plus, a little bit. The Jewish community—at least my impression of—you know, you grow up in an environment, and you don’t have an objective view, right? My impression of the Jewish community in San Francisco area was that it was dominated by German Jews who had—descendants of German Jews who had emigrated long before. Famously, Levi Strauss, but others. My mother actually worked as the secretary in one of the two large historic Reform congregations in San Francisco, Temple Sherith Israel, so that’s where I was bar mitzvahed. I remember—let’s see. This also led to weird introductions, so when I was going through the stage of wondering, “Do I want to be a scientist? Do I want to be a doctor?” my mother was talking to people she knew at shul, who—maybe somebody could give advice, and so it was suggested that I talk to a Dr. Feinstein. Turned out to be Dianne Feinstein’s husband.
Oh! [laugh]
Passed away relatively young, but yeah. Yeah, so somehow, my experience as a kid I think was that I had a sense of difference. I mean, the life experience of my parents was very different than that of the people around me, even the people I would meet when I went to synagogue. Which I must say was not that often.
And Bill, on that point, I just want to interject there—talk a little bit about the Jewish observance in your household growing up, in terms of who was the driver behind that, between your mom and your dad, and how you felt about those things growing up.
I guess we'll get to this, but it is—I was thinking specifically about this the other day—that by the time I was thinking about what to do—it’s not only the case that official anti-Semitism was basically gone—I wouldn't say anti-Semitism was gone, but I certainly didn't experience any directly, that I was aware of. But it’s also true, right, that there had been a whole generation of descendants of Eastern European Jews who became theoretical physicists. And so the heroic figures that I was reading about as a teenager from the war and the postwar era in American theoretical physics, there were a lot of people whose family stories were not unlike mine. They were a generation older, but still. So that makes all this maybe more relevant than it might otherwise be. So, Jewish observance. We didn't keep Kosher, although I remember learning about what that meant and learning about practice. My mother, having grown up in a Kosher household, couldn't—sorry, I'm guessing that this means something to you; I don’t know whether it means something to all our readers—she wouldn't break an egg directly into something she was cooking, right? …
Well, we'll explain that we don’t want to get blood into the entire bread.
Yes, okay. Right. That’s right.
[laugh]
So she lit candles every Friday night. She made challah. We observed the holidays. We seldom went to synagogue except on the holidays. I think because my mother worked at one of the Reform synagogues, it would seem weird not to go there, but she was always shaking her head about practice. We were realistically, I think, poorer than I realized, in the early years of elementary school. For many years, my father was unemployed and stayed home with me, which created a very strong bond. In particular, although my mother was an excellent cook, I also spent a lot of time in the kitchen with my father. Our grandson spends a lot of time in the kitchen with his father. I think these things are not—although now also more with—these months, of course, he spends a lot of time at home—
[laugh] All bets are off.
—so it’s maybe not the most calibrated. So that was an example of difference, right? My parents were older. They were born somewhere else. They grew up in a world which was basically gone, right? Even the experience—they had—my mother, having been in the Army Auxiliary Corps and having worked for the Soviet Purchasing Commission, had in some ways a closer connection to the events of World War II than many people. And of course, my father had lived through the Nazi occupation of Paris directly and lost immediate family members to the Nazis. Whereas I think for most of my contemporaries in school or even that I would meet in going to a synagogue, World War II was something that happened to somebody else. They might have had a grandfather who served or something, but that was about it. So these combination of things—and then on top of that, there was this weird thing that for a very formative period of my life until I was maybe eight or nine, my father was at home, which was very unusual in those days. So yeah, as I said, there was a sense of difference, not very clearly articulated.
Did your father talk much about—he was of that generation, on both sides of the Atlantic, that tended not to talk about the war for any number of reasons. Did he talk about his experiences?
It was a lifetime project.
To get him to talk, or to hear all that he had to say?
To gradually—let me put it this way. So he lived to be 91, in full possession of his faculties. I mean, the last part, there was a physical decline, but mentally [he was] fine. And he decided quite late in his life that he would write memoirs in a kind of short story essay format. He knew he wasn’t a natural writer, and so sitting down to write his memoirs was not something he was going to do. But what he wanted to do was to record various stories. And so gradually, these made their way to us—I'm an only child, so “us” is my family and I—and distributed to other more distant family members. And, mostly the essays recorded stories which I had heard, but it was great to have them written down so I didn't have to remember them. And of course, there was a richness of detail that I could not have reconstructed without reading them. But when it came to the war, there were things that he wrote down that he had never told me. So and this was—he was 80-something by that time. So yeah, it was a long process to—so there were bits and pieces. I knew all about meat rationing, which explained why I think for him meat on the dinner table was a symbol of freedom.
Did he approach Jewish ritual with a sense of obligation, or did he enjoy doing things in the Jewish tradition?
That’s a very interesting question.
And transposed on that is inevitably feelings, I'm sure, of bitterness and sorrow, given his experiences and those of his family.
Yes. I think the most relevant anecdote is about Yom Kippur. So he would never miss going to synagogue, even when it was difficult for him, because he had to say Yizkor [Jewish prayer of remembrance] for his parents. However, he confided to me at some point when I was a grown-up—and at that point, I could not reconstruct what happened when I was a child—he said, “You know I don’t fast. I've starved; I don’t need to fast.”
Hm.
And very late in his life, I asked him again about that. And he said, “No. I think I'm still ahead.” Meaning, you know, he had enough fast days saved up that he didn't need to fast one day a year.
Which of course is not a rabbinically approved approach. [laugh]
No. But—
I'm not passing any kind of judgment.
No. No, no, no, no. Anyhow, I think that’s the most relevant thing I can tell you, by way of an answer. So he wouldn't dream of not going to synagogue to say Yizkor on Yom Kippur, but on the other hand, he had this very definite view that he was entitled not to fast, due to his experience. So, okay. I mean, there’s obviously a lot there. I'm not sure that I unpacked all of it over the years. But—
Bill, of course generationally, for your parents, with their European perspective and background, so much of ritual observance was simply cultural—that this is what Jews did.
Yes.
And of course, one of the grand stories of American Judaism and assimilation is that that inclination, to be involved and to do these ritual observances, whether you are a believer or not, really fell by the wayside. And so a broad question for you is, do you feel part of that narrative? That as somebody who grew up in this country, minimally one generation removed from that, that the cultural impetus to be involved whether the belief is there or not was not really a part of your reality growing up?
I did not feel the need to rebel against whatever level of engagement we had. I mean, our level of engagement was modest. I think parts of it were very deep, because of my parents’ experience. My education was spotty.
Did you give much thought to baseline concepts such as the universe has a creator?
I think I went through some period where I thought about this, and at some point you realize, “It’s not that complicated; I don’t believe in a God.” I don’t need to go to contortions to—I can believe in a God who provided the initial conditions for the universe, and after that, the laws of physics take over. I don’t know.
That’s inconsistent for you? It’s one or the other, you mean?
Well, I don’t know. I just—it doesn't—what’s the Laplace remark?—“I have no need of that hypothesis”?
[laugh]
Yeah, what does that do for me? I don’t know. My mother, I think—
But that’s precisely the point. Earlier generations would never have been—in the shtetl, they would have never even thought to ask, “What does that do for me?”
Exactly. So my daughter is a philosopher of religion. She’s an assistant professor at Washington University in Saint Louis, studies religion and politics. And we were having a conversation about these things, and I made a comment that I didn't believe—I was trying to distinguish my relationship to practice from that of my grandparents. And she remarked, “I think you just made a distinction that wasn’t accessible to them.”
Right.
So yeah. My grandparents had a proper religious education, I think. I didn't know them. At least in one case, you know why, but they all were gone well before I was born, well before my parents met. And my father’s mother apparently was quite well-educated, and in fact met her husband because she tutored him in reading and writing. She had auburn hair, and the nickname in the shtetl was “the red teacher.” Not for her politics, but—
[laugh]
So it’s back there, but this wasn’t very well—so it was this funny thing, that my father’s religious education was, I think, quite scant. My mother just sort of grew up around it, and so she knew all these things, even if the education wasn’t formal. Although I must say I was extremely surprised—after my mother passed away, we were cleaning out her desk, and we found an exchange of correspondence with an old friend, written in Yiddish, in Hebrew script. I did not know that she knew how to read and write in Hebrew. She kept that to herself. My father did not. It’s possibly related to why she kept it to herself. [laugh] So, yeah, I always—as a matter of identity, it was always very important. The question of what you do about it—I mean, if you're not a believer, but you have a strong cultural identity, that’s a different problem than—I mean, there’s different versions of the problem. So that was mine.
But at a certain point, you just satisfied yourself with the idea that the basis of Judaism—that there is one God, and he’s the creator, and that he has an anthropomorphic relationship with humans—you just did not accept the central premise of the faith.
Yeah. The notion that one could take that literally—I mean, look, there’s a cultural tradition. I mean, if you feel bound by a contract, it doesn't matter whether there’s actually another person on the other side, right?
Sure, sure.
[laugh] So to speak directly to the covenantal structure. I think my biggest discovery along this line was that Catholics lose their faith and Jews just stop going to shul.
[laugh]
I don’t remember any trauma.
Yeah. Well, there might have been—
At some point, I read these words, and—“Yeah, I guess I can imagine people taking it literally”—but I just didn't.
I mean, trauma might have ensued—the phrase is “off the derech,” right? So—
Yes.
Trauma might have ensued—
Oh, no, no, no, no. But we were nowhere near—
Exactly, that’s the point. That’s the point.
But we were nowhere near that level of observance where this is a problem.
Right. Hence the lack of trauma.
My father grew up in Paris; he liked eating a ham sandwich.
Yeah.
That was not—you know. Knowing the content of the tradition was very important. Participating in it at some level, so that you know what you're talking about, was important. The cultural identity is very important. The belief? No. As I say, I think my mother was a believer in the old-fashioned sense. My father, no. And it doesn't take a lot to think about why. But I don’t even know that it was that—again, was there a trauma where he finds himself alone after the war and he realizes he can’t believe in God anymore? We never had that—we never had any version—anything remotely like that conversation. And eventually, we did talk about almost everything, so that was not—I think that had slipped away somehow without fanfare.
My sense is that for many Holocaust survivors, it’s one of two ways: it’s a hard right or a hard left, in terms of your reaction.
Yeah. So I think elements of ritual, especially as I already emphasized—rituals of mourning—became very important. And I think there’s a logic to the argument that you should [pause]—you remember your parents the way they remembered their parents. That that’s the practice. So the fact that at the core of that practice—well, as one good friend who will remain nameless because I don’t know whether he would like to be quoted on this—liked to say—“I would go to shul more often if I didn't have trouble with the metaphysics.”
[laugh]
[laugh] You know, so can you take that out of the center, and it all still works? I think as an empirical matter, it must. Because there’s a whole lot of people [laugh] who are standing there on Yizkor remembering their parents, who are not believers.
Is there anything scientifically compelling to you about the Shema? The reach for unification?
I realize that I'm mixing cultural traditions, but, you know, different magisteria, right? Or “render unto Caesar” or whatever. I don’t know. No, I—no. No. It’s—so, I avoided for many years going to Israel because my mother was let’s say a naïve Zionist, and I had difficulty with that political position.
Naivete meaning that the Palestinians were just fine; there’s no problem?
There was no—it wasn’t more complicated, right? In any way. And I had trouble with that, and that was one of the few sources of conflict in the household. So somehow, I never went, until—and actually, one of my few deep regrets in my relations with my parents was that I didn't make it—by the time I was a very young adult, I was traveling a lot, right? Because that’s the life of a scientist, and I was very fortunate in being able to do that. And I didn't make a point out of finding a way to get my mother to visit Israel, which she would have loved to do. Unfortunately, her last decade was spent in a decline with Alzheimer’s disease. So that whole complicated thing. Anyhow, the reason I'm bringing this up is you asked about the meaning and ritual and so on. So I'll give you a kind of counterpoint to that. I did finally go—well, my family—so in a monthlong visit, at this point, one episode where we're living in Jerusalem for the month—one episode is I'm with my daughter at the Church of the Holy Sepulcher. Maybe she’s eight, at the time?
CHARLOTTE: Seven.
Seven? I could—
Oh, I didn't know we had an audience. Hello!
Yes. [laugh]
We can say hi.
Yes. [laugh]
CHARLOTTE: [laugh]
Charlotte, do you want to say hello?
CHARLOTTE: Hello! Here’s my hand! [laugh]
[laugh] And a convenient reference.
CHARLOTTE: [laugh]
Yes, please. We'll have a—
If the Greek chorus ever wants to weigh in on correcting any facts, by all means.
Well, she did! Our daughter was seven.
CHARLOTTE: Seven.
[laugh]
There you go!
And so we go to the Church of the Holy Sepulcher. And the reason we go is not because we haven't been yet, but because I realized that, re-reading the guidebooks, that there is a place where the cross was supposed to have been planted. So it’s not only seeing the tomb and everything, the sepulcher, but there’s also the place where the cross was planted. And so I say, “I want to see this.” So we go—sorry, the printer will run a little bit in the background, just for a couple minutes—so we go, and I have to say that it is incredibly unimpressive. It is a very small hole, implausibly small. It’s fitted with a little bit of metal around it to preserve it and so on. And it just—I don’t know. I mean, I wanted to see something, and there just isn’t that much to it.
Now obviously, I have no—what’s the saying?—no dog in the race, or whatever. It’s not a matter of religious belief or anything else. I just, you know—I wanted to see something. And there just wasn’t much there. But fortunately, it was Friday afternoon, and every Friday afternoon, the pilgrims come, to see—I mean, obviously on Good Friday, there’s lots of ceremony, but just on a typical Friday afternoon, you will see an increase in the population of people in the Church of the Holy Sepulcher, and they also come, in particular, to look at this very thing. And I was moved. And I realized that ever since, for 1,500 years, people had come to this spot, and it became meaningful because, for 1,500 years, people had been coming to this spot. And I must say that that then gave me a different view of even the things that I have some relationship to. You know, why is it interesting to stand and look at the wall, right? I mean, you do realize the wall itself has no ritual significance, right? It’s a wall. It’s holding up something that used to be there. It’s holding up something on which there used to be something. Right? Which is of great significance.
Well, the significance of course is as a matter of faith, that the third temple will be rebuilt.
On the thing on top of it. It’s just the wall, right?
[laugh] Right, right.
It’s holding back the dirt, right? It’s a retaining wall. So why is it significant? It’s significant because for 2,000 years, people have come there! And the significance somehow is in—at least for me, I realized that the significance is in the community. And so in that sense, I'm not deeply conflicted about engaging in a ritual whose literal meaning I don’t believe. I'm not there for the literal meaning. I'm there for these human connections.
Did you ever talk to your parents about why you were an only child?
My mother was 42 when I was born. Yes, we did apparently have one very explicit conversation about this, in which, at age—seven?—I don’t know—I guess I had understood that there was some relationship between sex and children. So I woke up in the morning, and the first thing I said to my mother that morning, apparently, was—I asked her why my parents didn't—why she and my father didn't have sex anymore. Because obviously there were no more children.
[laugh]
[laugh] You asked. This is a literal answer to your question. So, you know, a little bit of knowledge is a dangerous thing, as they say.
[laugh]
I clearly had worked out some parts of it, but not—I did not have a very complete view, at age seven. I think it was seven. What’s interesting to me is I remember my mother telling this story; I don’t actually remember the conversation. So I'm going to assume that it actually happened. In part because that kind of reasoning is very—you see it in children all the time, right, where they have some part of the story, and they're very logical in working out the consequences, but of course they're missing something, so it can come out quite funny. I think I figured out pretty quickly that—yeah, somehow it didn't seem puzzling. My parents were older, right? So you knew that somehow—
Your sense was that had they not been older, they might have had more children, though? That was the key factor?
Oh. You know, I don’t think it would have—I honestly don’t remember—
I mean, some families specifically only want one child.
So I don’t believe—[pause]—pffffww. Yeah. I can remember people asking me what I thought about being an only child, and very early on realizing it’s a little bit like being asked, “How do you like being tall?” I didn't know anything else. That was life.
My question is a little more concrete in the sense that—did you pick up on your parents saying things like, “If we were not so old, we would have had more kids”? That kind of thing.
I have no memory of that. I think that they were—you know, parents love their children unconditionally, right? Our daughter works on this [laugh]—these issues of what love is, and whether there are conditions, and so on. So I don’t want to say—I don’t want—and also, you are your parents’ children and you're not anybody else’s children, so comparisons are—I mean, unless you do it professionally, it’s not clear how to arrive at anything reasonable. But, look, my parents were incredibly attentive to me as a child. They almost never left me with a babysitter. If they wanted to go to movies, I went to the movies. The first movie that I remember—oh gosh, am I going to get the—oh. Huh! I'm forgetting the name of the movie now. It’s a French New Wave movie. Maybe it’s—A Man and a Woman? Is one of the characters an auto racer? I think that’s right. So the first movie I went to was not a movie you're supposed to take eight-year-old kids to.
[laugh]
I thought it was about auto racing, so that was fine.
[laugh]
I was treated as another member of this family of three people. I was not treated—in many ways, I was not treated differently because I was a child. We sat over dinner and we watched Walter Cronkite tell us about what was going on in Vietnam, and we talked about it. I was expected to have a reasonable conversation at the table. We didn't often have guests, but when we did, that was also true. I wasn’t sent off. It didn't occur to me that you did things really without your parents. So it was a very close relationship. They were very attentive. Not controlling, at least not that I perceived.
And it is not—I don’t think I ever had this conversation with either of them explicitly, but consider their trajectory through life. It is not obvious that—in 1946, when after all, they were 27 and 28 or something like that, it was not obvious that they were going to have a family. That either of them would have a—sorry, there wasn’t a they. It would not have been obvious to either of them that they individually would have the opportunity to have a family. And so the idea of 14 years later, they had a child, they were married and had a child, and everything was basically okay—a woman having a child at 42 in 1960, when her best friend had two Down Syndrome children, it is literally the case that to have a healthy family was something to be thankful for. And so contrary to the usual quote, every family experience has its elements of uniqueness, and certainly for me this fact that I arrived late in their lives and that the path to that point was quite meandering, to say the least—entangled with the great upheavals of the 20th century—I think that is a big part of the relationship. And so, did they regret not having more children? I don’t know. I suppose they might have. But that wasn’t something that they shared. There was never a sense of the sacred duty of rebuilding the Jewish people after the Shoah, which some people in that generation felt very strongly, I think.
That’s on the secular level. There’s also a specific mitzvah to be fruitful and multiply as well.
Yes, yes. My father was more explicit in rare moments of basically saying, regarding conflict and those higher purposes and everything else, he had had enough. He wanted to live his life.
He had enough, and maybe the message was that you were enough.
Well, and maybe a larger message that we've all had enough. Which we did talk about. So I think his anti-militarism was very much born of experience. He did not think it would be heroic to send his son off to war.
Bill, let’s pull up—
Which, by the way, was the explicit answer to why we were in the United States and not Israel.
Right. [laugh] Point taken. Let’s pull up the science file.
Yeah, sure.
Let’s start talking about your interests in science and how early back they went. Do you have a specific formative memory or event that sort of turned you on to the world of science and discovery?
So I think I was always curious. I think children are born curious and we beat it out of them, mostly, or educate it out of them. I knew that my father had studied engineering, although he also studied management. And so when he did go back to work, he worked in sort of public management for the city of Oakland. He was a civil servant. And my mother was a secretary. Probably would have been a lawyer or something if circumstances had been different. She was very playfully verbal.
Anyhow, so I think—so I knew my father was an engineer. I knew there were things—because of conversations about what does it mean to do engineering, and I knew there were these things called research and development companies. So I, like all small children, I invented various things. I was good at math. I somehow figured out that being good at math did not mean that you became a mathematician. I'm not sure exactly how I figured that out. Maybe I didn't know what a mathematician was. But certainly, the being good at math in the school sense suggested that—
Including a capacity to think in abstractions?
Oh, gosh, I don’t know. Let’s get back to the things about abstraction and the different flavors of theory—I’d prefer to have a conversation about that in my adult self, rather than to project back—
Well, right. But the question is, how far back does that go? How far back do those capacities go?
Right. And I don’t know.
I mean, it’s one thing to be able to memorize a multiplication table, and it’s another to intuit calculus when you're ten years old.
To see patterns and—right. So, look, I was in some way noticed as being bright when I was a little kid. I don’t really know what the axes—I mean, some of that is substantive, and some of it is societal construct, so I don’t know what’s what. I had very good teachers in elementary school. We actually became family friends with my kindergarten teacher and my first grade teacher. My first grade teacher would eventually go on to get her doctorate and become really one of the leading figures in special education policy in Northern California. At some point, if you were a school district and you had a kid who needed a personalized education plan, and it was very complicated, Miss Carpignano or Dr. Carpignano by then, was the person you ended up talking to. That was half of a state, right? [laugh] So as she was working her way through her educational process in parallel with teaching, I was often the guinea pig. So I remember being given various kinds of tests and things like that.
So, you know, I was a talented kid, but it’s also true that there were a number of talented kids around. Why talents got directed to—and there were other talented kids. There were other kids who probably should have been noticed and weren’t, for all sorts of obvious reasons. So that was what it was. As good teaching continued in middle school, I actually think I became much more appreciative of—having grown up with the Vietnam War, I became more interested in history and politics, and so there were important teachers in middle school.
We moved, because my father now had a good job, so we lived in a neighborhood in San Francisco in which if you drew a—it was really downtown, and so there weren’t so many kids. So if you drew a circle that included a lot of kids, that circle overlapped Chinatown. So the middle school that I went to, in my year, there were roughly 400 students; there were 13 of us who were not Chinese, and that included the two African American kids. So that was interesting, right? So there was lots to be interested in. It was a very rich life in terms of intellectual and cultural stimulation.
The idea that I wanted to do science and physics in particular—actually I'm embarrassed to say I have never done anything about this, and I should—you asked about a specific formative experience. In the mid-1970s, there appeared in The New Yorker a series of articles by Jeremy Bernstein, which was a profile of I.I. Rabi. I have not gone back to reread it. And as I say, I'm embarrassed to say that I've never met Bernstein, and I've never written to him about this. I should. Maybe I should do it today. He painted a picture of Rabi as this kind of father figure, and then the generation who came of age during and immediately after the war. And as I think I already mentioned—well, Rabi’s own religiosity figures prominently in his biography, as you know. And there’s Oppenheimer, and Feynman, and Schwinger, and all these characters with whom I could identify.
Jewishly, as well.
Oppenheimer was from the kind of family who were the presidents of the synagogue that my mother worked at.
When you say identify, specifically you mean in a Jewish context as well?
Yeah, they were all Jews, the children of immigrants. As I say, some of them were the children of kind of high-class German Jewish immigrants, like Oppenheimer. But Rabi or Feynman or Schwinger, they were the children of real Eastern European immigrants, just like my parents.
With the accent to match.
With the accent to match, exactly. And so, it was a picture in which the characters were people with whom I could identify. I mean, as an adult practicing physicist, to say that you identified with Feynman and Schwinger sounds colossally arrogant; I mean in the sense that you're a teenager and you're reading and you see somebody who is enough like you that you could imagine yourself on that trajectory. The role model concept.
Bernstein also painted a picture of people—of that community of physicists being at once deeply intellectual, very social. I mean, they talked with each other, they argued with each other, and, partly because of circumstance, but not only, deeply engaged with what was actually happening in the world. And that combination of things—I mean, for all its complexities, which I think were already not—they weren’t clear to me as a teenager. I don’t know; are they clear to anybody, in retrospect? But the notion that involvement in the Manhattan Project was not unambiguously heroic; that was clear—that was in there, right?
And so with all of its complexities, this idea that people were both profoundly committed to their intellectual lives and also deeply engaged with the real world, somehow it meant that—anyhow, it was very compelling. So I read the biography of Marie Curie, like everybody else. But if you asked—had I been a girl, I think it might have been very different, right? But if you ask, what was the thing I read that said, “Ah, I want to be like that,” it was Bernstein’s profile of Rabi. And it wasn’t that I wanted to be Rabi. It was this whole milieu. And to be a part of that—that was a landmark.
And when you say “that”—what exactly is “that,” in your mind? “That” could be many things.
The “that” to be a part of?
Right.
It was this sense that doing physics—so I think an important part of it was that doing physics was to be part of a community. There clearly was a community there. Sometimes it was a community that all came together. Sometimes it was a community that was literally drafted. Sometimes it was a community that was squabbling. But it wasn’t the model of each individual on their own doing their thing. There was this sense that to be a physicist was to be part of something. And it was clear that what that something would be in 1975 was not going to be what it was in 1945, or 1955. But that’s okay.
It’s very interesting that you identify or that you specify the communal aspect to physics. I've always sort of been jealous as a historian that—in the humanities, we don’t really collaborate in the way that scientists collaborate.
Yes, yes, yeah.
And so that’s a rather generic longing toward the sciences overall. And so I'm curious why specifically—there’s any number of scientific fields you could have pursued on the basis of your attraction at joining a community united in discovery. And so that just begs the question, why the community of physics, as a 15-year-old, for example?
Well, as you know, there would be some ambiguity about that for me, because I was also interested in things that were studied by biologists. And we can get back—that’s I guess the next chapter.
Right from the beginning, of course—with biophysics, a lot of people don’t even take—there’s still a ways for it to go in terms of it being taken seriously in, quote unquote, “proper physics departments.”
Yeah, let’s have that conversation in a little bit.
No, but my question—
It’s something that I have very definite views about. As you may know, the National Academy of Sciences reviews the state of physics—
Sure.
—in the country once every ten years, the decadal survey.
Absolutely.
This is the first time that there will be a separate volume about biological physics.
[laugh]
And I'm the chair of the committee.
It’s a long time coming. But my question in the narrative, in terms of where we are in the current conversation, is I'm curious if in the way that these concepts were presented to you formally in education, if you sensed even from the beginning that it would be an uphill climb to combine these fields of study? Or, that you would have to separate them out in your mind and pursue these interests not as a unified approach to science. Even as a teenager, I'm curious if those intellectual or thematic or even institutional distinctions were something that occurred to you, even at a very basic level, during those years.
So [pause] I would say that some of where I succeeded was made possible by an incredible naivete. [laugh] I was interested in what I was interested in. And we can talk about how that emerged. And the idea that that wasn’t something you were supposed to do kind of—didn't occur to me. And although there were hints, I don’t think I really always picked up on them. [laugh] So in that sense, maybe I was a little less dedicated to the social aspects of community than I think. [laugh]
Look, I think that—so to get back to the thread of, if one wanted to become part of a scientific community, there are many scientific communities—so a reasonable question is, had there been [pause]—yeah, had there been a Jeremy Bernstein who wrote about the emergence of modern neuroscience in the sixties or something—I don’t know—who followed the path from Stephen Kuffler and Horace Barlow’s visit to Kuffler, and Kuffler’s collection of young postdocs and assistant professors that included Hubel and Wiesel and all these guys—had someone painted that picture for me, would I then have—would neuroscience, a subject which I would eventually engage, would that have moved up the list ahead of physics? I don’t know. I knew that I was good at math, [laugh] and I knew that had something to do with being a theoretical physicist. And so that—
Even in high school, this occurred to you, the connection between math and theory?
Yeah, yeah.
Why? Were you reading advanced stuff? Did you have good teachers? What was your outside exposure that led to these kinds of thoughts?
So, I read quite far—so, okay. San Francisco has a selective public academic high school. At the time that I went there, certainly, I would say—as you know, now with the dramatic underfunding of public education in the country—by the way, so to be clear, this is California pre-Proposition 13. In fact, my father was a civil servant, so he—Proposition 13 would be passed—I should look up the date, but I want to say when I was in college. So high school was blissfully free of that. Well, maybe it was even a little earlier.
Anyhow, public schools were supported very well. There were extraordinary teachers. Was there an extra concentration of them at Lowell, where I was? Maybe. But I know people who—one good friend from high school, her father taught science at one of the other public schools, although I don’t think I really knew about that, then; we've talked about it in retrospect. I think the gap in educational experience between going to the academic magnet school and going to a normal school was not as large as it is today. I think the idea that we're going to solve our educational problems by focusing—is it a good idea to separate out people who exhibit a certain combination of talent and test-taking ability and preparation, that early, rather than waiting for things to diffuse longer? I honestly don’t know.
I know that for me, what was wonderful about that high school experience was a small number of extraordinary teachers, many of whom, by the way, had also taught at other high schools in the city. So one could have encountered them in other places. And the collection of fellow students, some of whom I am still close to. Not many. I did a lot of reading on my own. There was a math teacher, Mr. Abad, who explained that he had started a PhD program in probability theory at Berkeley, actually, which had a separate department of probability and statistics. I don’t know if it still does, but it did then. His brother, I think, finished his PhD. He left with a master’s and told us—us being the handful of kids who were always eager to learn more math—that he figured out he was not going to be a research scientist. He loved the subject, and so the thing to do was to teach it. And so he took that very seriously. And he was happy if we wanted to go faster. So if we wanted to skip a course and go to the next course, that was possible.
He had gone—I don’t remember now at exactly what stage of his educational life—he had spent time at the University of San Francisco. Which, as you may know, is not known for its scientific research. It was known at one point for its basketball team; Bill Russell played there. A Catholic school in San Francisco. And as a result, he still had his library card. And I remember, he would take two or three students to the university library, and say, “Pick a few books.” And I don’t know if we did it once a month? I don’t know. I remember there were books that I would—he would look at the books we were thinking about, and he would comment. He never told us—again, there were a couple of us on these trips—he never told us, “You're not ready for that.” Sometimes he would smile. And, you know, I have a vivid memory of checking a book out, struggling with it for a month, giving it back, and then six months later, trying again.
Struggling—you don’t mean concentration. You mean—
No, I just didn't understand. I was not in fact ready to read this book. So I also then found some way—I don’t remember—I found a quantum mechanics book by three Russian authors, which was sitting on the remainder table at a department store.
[laugh]
So it was cheap. It looked like—you know those days when you mimeographed things? Basically typesetting was—somebody typed.
[laugh]
And so I started in, right? [laugh] And I don’t know, was I a sophomore in high school, maybe? So I didn't understand things. I would ask my math teacher or ask my chemistry teacher. So let’s see, how did it go? By the junior year of high school, I took the Advanced Placement Chemistry course. The teacher, Mr. Dahl (who passed away in 2020), was very good. He was in fact one of the people who was on the committee for writing the Advanced Placement examination. It’s Rabi who says, “I discovered that the parts of chemistry that I was interested in were called physics,” right?
[laugh]
And that was okay with him, right? But I was just fascinated by this idea that you could calculate energy levels and figure out why the periodic table looked the way it did, and stuff. And so my high school was essentially across—well, across the street; it’s a slightly complicated geography—but it adjoined San Francisco State, which was a state college, now a state university. And Mr. Dahl knew a guy in the Chemistry Department who was a physical chemist. And he said—he arranged for me to go and chat with this guy. And so I would go—maybe there was a semester or even a whole year where I went once a week, and asked for advice about reading things and understanding things. Which was incredibly generous.
By the time—so the summer after my junior year in high school, I went to work in a lab. It is literally the case that my father was having a conversation with his oral surgeon. And, you know, small talk—“How’s the family?” And he starts talking about how, you know, he has this son and he’s interested in science, and duh-duh-duh. And this guy who in addition to practicing medicine was also on the faculty at UC San Francisco says, “Well, maybe he should go work in a lab. What’s he interested in?” And so he says something about, “He’s interested in physics. He’s interested in biology.” And he said, “Well, you know, there is something called biophysics. And there’s a Department of Biochemistry and Biophysics at UCSF.”
So this is 1976. Many of the rudiments of genetic engineering have just been developed. Some of the crucial people were there. Herb Boyer, people like that. And this doctor says to my father, “Well, he should just go to the Department of Biochemistry and Biophysics and ask them if there’s something he can do.” What do I know? So I go. I find the office. So it’s not quite the summer of 1976. I'm 15 years old. I go and I find the office of the Department of Biochemistry and Biophysics. I have the very vivid memory that the woman I am talking to is sitting behind a desk, and behind her there is a door, and the sign on the door says, “Secretary to the Chairman,” who was Bill Rutter, who would be one of the founding figures in biotechnology.
Yes. Right.
And the secretary who, I'm embarrassed to say, I don’t remember her name, she talks to me, and she says, “Hmm. Well, if you go work at a big lab, nobody’s going to have time to talk to you, and you won’t get anything out of it. So you want a small group. And you're interested in physics. There’s a guy in the department who did his PhD in physics before he started being interested in the molecular mechanisms of muscle contraction. Why don’t I see if he can talk to you?” So that got us the name Roger Cooke. He had done his PhD at the University of Illinois in physics, had actually done Mössbauer spectroscopy. So he was actually from the group in Illinois that was around Hans Frauenfelder when Hans was just doing Mössbauer spectroscopy. Right. So I think Roger’s PhD is from before Hans’s conversion to doing biophysics. I think that Roger’s PhD advisor was not Hans but Peter Debrunner.
Anyhow, so I went to talk to Roger. And, you know, I was 15; he was I guess in his mid-thirties? Young faculty member. And he said, “Sure. Come hang out for the summer. We'll figure out something for you to do.” So I did. By the time—eventually, we published a paper. Took quite some time. But I spent the first summer trying to do experiments. The notion that theory and experiment were professionally distinct activities had not occurred to me. As I would describe this—so I think the idea for the summer was to study the dynamics of polymerization of the molecule actin, which formed the thin filaments in muscle. So you have to go purify the protein, and then you have to study the dynamics of polymerization which is actually a nice physics problem. The way that you study it is by looking at light scattering. I believe I measured a lot of light scattering spectra—function of an angle, function of wavelength, whatever.
Now did you understand what you were doing? Were you just doing what you were told, or were you allowed to sort of explore and figure stuff out for yourself?
So I think—it’s hard to say, in retrospect, right? I mean, how much does one understand about what one is doing?
I mean, there’s a big difference, though, between just being told, “Go fill up these beakers” versus, “Here’s the lab. Have at it.”
So I would be let loose—so for example, “Here’s a test tube full of this protein. Dilute it to this concentration and add something, and then measure the light scattering and see what you get.” And so it started that way. Then, you know, Roger would say, “Oh, you've been here a couple weeks.” Or a month now. Or, you know, a few weeks now. I don’t remember; the summer isn’t that long, right? “And you've been playing with these proteins, but you don’t know where they come from. So today, we're actually doing a preparation where you start with a rabbit, and you end up with proteins. So, come join me.” So here I am, grandson of a Kosher butcher, and the first thing that I dissect is a rabbit.
[laugh]
I have a vivid memory of—fortunately he did not require me to actually kill the animal, but I did dissect away the—I mean, I still take pleasure in dissecting muscle away from the—you know, when cooking. I have a vivid memory of the metal scalpel touching the membrane around the muscle, and the animal twitching. And of course, it’s because, right, there’s a surface potential between the metal and the membrane, and that causes an action potential and the muscle contracts. And I remember being very calm and saying, “Roger, excuse me. [laugh] It just moved.” Right? “What do we do?” And I remember his response was, “Death is a relative thing.” So of course, when you are interested in biology, you study things that are in various states of [laugh] between fully alive and completely not, and you take pieces out, and so it was a time to talk about that. And we talked about that as we were standing there in the cold room.
So I did bits and pieces of everything. And I spent an enormous amount of time calculating. I sat there with textbooks about electromagnetism, trying to understand how light scattering worked, and imagining scattering off of various things, none of which probably looked very much like the actual molecules we were studying. I had various ideas about why the time courses could be the way they were. So I was given an enormous amount of freedom. There wasn’t something I was—I mean, there was something we were trying to do, but it wasn’t like he needed it for anything, so it was okay. I'm pretty sure that I spent most of the summer studying the light scattering from dust, which obviously was not the goal. But I'm not a gifted experimentalist.
Bill, were you a good mentee? Would you be able to go to him and report what you had found, problems that you were running into? Did you develop that kind of relationship?
We talked a lot. I don’t remember—it was quite informal. I do remember once or twice having to talk to a group meeting. Maybe that was the next summer? I don’t know. The timeline isn’t very clear to me. I remember at some point his recommending to other people in the group that if they had mathematical questions, maybe they should ask me. I think it was starting to dawn on him that I was better at that than I was at [laugh] doing experiments.
Anyhow, by the time I got back to high school that senior year, it was time to take the Advanced Placement Physics course. The teacher was very good, Mr. Stark. And he began by saying, “Introductory college physics is taught out of a standard textbook, Halliday and Resnick.” Whatever incarnation it was then. He said, “But, physics is—but it’s a bit dry, and physics is a very beautiful subject. And you should try and get some appreciation for that, even if that’s not the best way to learn it. So I'm going to give you this other book.”
When he said physics is a beautiful subject, what did you intuit specifically he was talking about?
I don’t know that I had a—what was important was what came next, was he distributed copies of The Feynman Lectures. And so of course, I devoured this.
Now, the name Feynman, did that mean anything to you at that point?
Yes, because this is after—I'm pretty sure this is after reading Bernstein’s profile of Rabi. And by the spring in the kind of—by the spring semester, he recommended that the people who wanted to take the—there were two versions of the Advanced Placement exam for physics. At least there were in those days. I don’t know if it was an A and a B, or an AB and a BC? I can’t remember how it worked. But anyhow, there were two levels of the AP exam, and for those people who wanted to take the higher level, he recommended that you come, I don’t know, an hour early for a little extra lecture or something. And it might be that this was after the exam; he sat with me and asked if I wanted to try teaching a couple of those lectures out of the Feynman books. And so I did. I don’t remember which topics. So that was incredibly generous. And this was made possible in part because I had kind of run through the math curriculum already.
And so the same teacher who used to take us to the library, Mr. Abad, he said, “Fine. As a senior, you just take independent study, and I will give you problems.” And I think he even—I think, if memory serves, because there were a cluster of us who had taken calculus when we were juniors, he taught, at least for one semester, a differential equations course. Which, you know, high school is not bad, right? So that was special. And we spent time—I think he asked me for help writing some of the problems? I don’t know. So yeah, I was very lucky in the teaching that I got from this handful of people.
I really want to say—the focus, of course, is about how I emerged to want to do science and what the preparation was. But there were some extraordinary teachers in the humanities, which had an enormous impact, since we do—we communicate either by writing or speaking. And the cool thing to do—this gives you a sense of how nerdy the high school was—the cool thing to do was public speaking, at least for a certain segment. And so I succumbed to this once; I think I went to one debating tournament, and that was it. So I didn't—it wasn’t—I wasn’t dedicated, right? I just wanted to try it, because I knew people who were doing it, and it seemed like fun.
And you had to try out, right? It was a team, so you had to try out. And the way you tried out was that Miss Thayer, who was the teacher, would hand you a newspaper article, and she’d tell you to go sit in the corner for half an hour. And at the end of that half hour, you were supposed to get up and speak about the contents of the article, for five minutes. And it’s very vivid in my mind. I can picture her sitting there, as I'm speaking. I can’t remember what the article was about. I can picture, as I'm speaking, she would maintain eye contact, and take notes. And somewhere in the middle of the five minutes, she turns over the piece of paper, and now she’s onto the second page of notes. And then when it’s all done, she had this analysis of things that I did well and things that I did badly, and things to watch out for. And what I can tell you is that to this day, when I am lecturing—of course I'm not sure how well all these things transfer to Zoom—but anyhow, when I am lecturing, I can hear Miss Thayer’s voice—
[laugh]
—telling me not to do certain things that I am about to do.
[laugh]
I think what I learned—I mean, this sounds trivial, but I think we forget about it—speaking in public is a professional activity. You have to learn how to do it. There are people who are gifted, and so they're born better at it than others. But you wouldn't dream of saying about your mathematical skills that, well, some people are talented, and some people are not, so there’s no point in practicing, right? [laugh] And yet somehow, we sometimes treat communication in science as some extra thing that, oh, just take care of it yourself. No! I'm very fortunate that sort of long before I needed to do it, I had had the experience of a professional actually teaching me something.
And I also think—I have a stronger view, which developed much later, which is that communication is not an add-on to science. Success in science is not understanding something; it’s changing how other people think about something. If I understand something, and only I understand it, science has not progressed. Again, this gets back to it’s a communal activity. So if I want to succeed as a scientist, if I want to realize my ambitions as a scientist, the goal is—obviously, the first step is that I have to understand something. And maybe it should be “we” instead of “I” to emphasize that arriving at understanding is a much more communal activity than we sometimes give it credit for. But crucially, whatever group of people it is that thinks they've done something, they haven't actually finished doing it until they communicate it to other people. Because progress of science is that the scientific community changes how it thinks about something, not that one person or five people in the scientific community change how they think about something and smugly declare that they've understood it.
So having been exposed—I think that having been given an appreciation for oral communication as a kind of separate professional activity early on was really important to me. And the same thing happened with writing. I had this fantastic—I had a number of very good English teachers, but one really is a legend, Flossie Lewis, who is in her nineties. There is a piece that appeared on public television, actually, about her.
Oh!
A little short thing. She had us reading things—I think the class is the Fall of 1976, and we read Hamlet. And then after we read Hamlet, we read Rosencrantz and Guildenstern Are Dead. Now, I think—I looked it up once—it never occurred to me that this was an unusual choice, because of course, when you're presented with it, it makes so much sense. But [laugh] then you look it up, and you discover that the play had only been written a few years before—of course it would become part of the modern canon, and Tom Stoppard would be this great figure. She had us read it in high school! We read Malraux. It was amazing. And we wrote, and we wrote, and we wrote. Every week, we wrote. I can picture her handing me back an essay, saying “It’s fine, but don’t ever write it the morning before class again.” And of course I had, and she could tell. She was magical. Tiny lady.
And I have this image of her with the tone arm of the record player in her hand. She insisted that in order to read Hamlet, we should hear it. And this is before YouTube, right? [laugh] Now, it’s not so hard, right? So she had a record—two records—of Laurence Olivier and Richard Burton each doing the soliloquy for Hamlet, and she had us listen to them. And I can picture her with the tone arm in her hand ready to set it down on the records. She said, “Oh, and by the way, you should know that Olivier would recite the soliloquy sitting down.” And then she plays the record. You know, this is getting to be a while ago, right? [laugh]
[laugh]
It’s 44 years? The image is still there, right? So, am I a skilled—I did not become a professional humanist, unlike my children, but that appreciation for literature and theatre, and especially the discipline of writing—incredibly important. These are incommensurate goods. But I just feel incredibly lucky that I had that kind of education. And there were a lot of very good English teachers, but what stands out is that one semester where, by the time you were done, you knew how to write an essay. And it did not feel formulaic. It did not feel like—it didn't—I know some people talk about learning how to write, and it’s very structured, and I don’t remember that at all. It might have been there, but that’s not what sticks with me. It’s just something about sitting down in front of the page, composing your thoughts, and so on, and being careful about your use of the language.
So yeah, high school education, when I think back, it’s like there were—was it uniformly great? Probably not. I remember a geometry course in which we never proved a single thing that wasn’t obvious, and so I did not understand why we were doing any of this. I managed, but that did not produce a great love of formal mathematics. So I don’t know; I thought it was a complete waste of time. I still believe that the way we spend time studying trigonometry is weird. I mean, they have you prove all these trigonometric identities before they teach you about complex numbers. And after you know about complex numbers, a lot of those trigonometric identities are pretty obvious! [laugh]
[laugh]
Right? I mean, it takes some contortions to prove them if you're not allowed to use e to the i theta. Afterwards, it’s not so hard. In particular, finding relationships for the sine of three times the angle—it’s e to the i theta, cubed, right? Anyhow. So yeah, there were parts, for example, of my mathematics education that I look back and I think, “God, that was a waste of time. It could have been much more efficient.”
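[ed.: A minimal sketch of the identity alluded to above, using complex exponentials. Writing $e^{i\theta} = \cos\theta + i\sin\theta$ and cubing,
$$ e^{3i\theta} = (\cos\theta + i\sin\theta)^3 = \cos^3\theta - 3\cos\theta\sin^2\theta + i\left(3\cos^2\theta\sin\theta - \sin^3\theta\right), $$
so matching imaginary parts gives
$$ \sin 3\theta = 3\cos^2\theta\,\sin\theta - \sin^3\theta = 3\sin\theta - 4\sin^3\theta, $$
with no trigonometric contortions required.]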
On the other hand, there was this one guy, Mr. Abad, who took care of all of us. And physics and chemistry were extraordinary. I took a high school biology course when I was still in junior high school, and I did not take another biology course. I was very much turned off by that biology course, which was a lot of memorization. Anyhow, so you asked about did I do a lot of reading on my own, and everything else. So the result of all this was that by the time I graduated from high school, I think especially the fact that Mr. Stark, our physics teacher, had given us the Feynman Lectures, and I’d had this weird encounter with the remaindered quantum mechanics book, I knew a lot of stuff that you wouldn't ordinarily know. And, I decided that somehow one should capitalize on all this. So when I arrived at Berkeley as an undergraduate, I went and took graduate courses in the physics department.
Bill, I think this is obviously—I can tell already—we have an epic interview on our hands here.
[laugh] Well, I—you know—
This is a multi-part. So I think—
Yeah, no, I'm happy with that. By the way, I trust that eventually we'll get to some things that might be the reason why anybody would care about any of this. I mean, I'm not [laugh]—
Let me just say at the outset—you can be humble if you want, but this is the stuff that biographers care about, and it doesn't exist in any other format. So I don’t want you to delude yourself that this is all fluff and prelude until we get to the stuff that you're known for professionally.
Okay. [laugh] All right.
Just so we're on the same page.
Okay. Yeah, yeah, yeah.
And you're in very good company with the multipart interviews. I mean, it’s all about the personality. Some people, they can do the whole thing in two or three hours; I talked to Carver Mead for seven different sessions. So—
When did you do that? How is Carver?
Carver is amazing, and he’s great.
Good.
And we spent multiple, multiple sessions together. So don’t feel self-conscious; it’s just all about the person and the way that they want to convey their story. So for our first session today—I'm calling it now our first session today—the last question I want to ask you is about Berkeley. My question there is, how big was your world in terms of where you were thinking about possibly going? Given strategically where you were located, Berkeley is—obviously it’s a world-class institution, but it’s also in your backyard, right?
Yeah.
So was that part of the equation? Did you specifically want to stay home? Or, more substantively, did you understand at some level that for your interests, even if like a Harvard or a Princeton or a something like that was available to you, what you wanted to accomplish, conveniently enough, really could have been done not too far from where you grew up?
So, somewhere along the line, I had skipped a year of school. Actually, pretty early on. I think my kindergarten teacher decided that I didn't really need the second half of kindergarten, and my first grade teacher decided that I should get back—at that point, you could actually—I think you could start—although there are two semesters, you really are supposed to start in September.
Right. [laugh]
And I think in those days, you could actually start in January. Or maybe people who—anyhow, the two halves of the year seemed like they were more separate. So the result was that I went for half a year to kindergarten, and then—actually, no, it was second grade—and half a year of second grade, and that got me back on the sort of integer cycle. So the result was that I was graduating from high school and would turn 17 in the summer before going off to college.
My parents—my mother—it is only something in retrospect that I realized that my mother didn't finish high school. It’s not something we talked about. I'm sure she was embarrassed about it. She shouldn't have been. My father grew up in Paris. If you grew up in Paris and had any higher education at all, where did you go? Well, you went to some school in Paris, and they were all public, right? Except for—so—so he had gone to some technical school after high school. It was all complicated because he was born in 1919, so if he’s going to have any post-secondary education, it’s right in the middle of the war. And engineering, particularly, was a sensitive subject. So—complicated. By far not the most challenging part of that period, but anyhow, his education was cut short a bit.
But I was not born into a family that had the notion even of going away to college. I remember thinking—I talked to friends who were planning on really going away, and I remember thinking—one of them said, “How can you live at home? Don’t you want to be independent?” And I thought, “What do you mean, independent?” I mean, I don’t know, your parents send you a check every month. I mean, I don’t understand. [laugh] I don’t understand. I thought independent meant something else. Anyhow, I don’t know—it seemed like what I should do is—Berkeley of course had a certain mythical quality to it. Remember which years I grew up in. And so somehow growing up in San Francisco, looking across the Bay—you know, shining city on the hill and all that—I did not for undergraduate have a different ambition than going to Berkeley. So I just don’t remember anything else as a solution. Again—
You were very lucky in that regard.
Absolutely. Look, I was incredibly lucky. I grew up at the peak of public support for education in the United States, in the state that made the biggest investment.
This sounds like many centuries ago, now, by the way. [laugh]
Yes, it does. It does. And it’s only one, it turns out! [laugh] We were poor, and we became middle class. And when our son was getting ready to go to college, I was talking to my father, and I asked him, “By the way, how did we pay for my going to college?” And he said, “I don’t remember.” He worked for the city government. My mother continued to work as a secretary. We were comfortable. You know, it was fine. But we weren’t rich. Can you imagine somebody of my generation saying, “I don’t remember how I paid for my children to go to college”?
You got a world-class education from a middle-class upbringing. That’s what it is.
Exactly. And it’s amazing. And I was prepared for it, because I had a good high school education. And again, Berkeley had this mythical quality. I would only understand later that a certain amount of the mythology that surrounded physics at Berkeley was actually rather poorly matched to my own interests and intellectual style. [laugh] The mythology surrounding physics at Berkeley of course was the great experimental programs descended from Lawrence.
Well, there’s also the theory as well.
Ahh—right.
I mean, like a Geoff Chew? Would that name have meant anything to you prior to getting there?
No, but he would play a very important role when I was an assistant professor there.
I know that. Yes, right.
Geoff will reappear. Marvelous character. Not uncomplicated.
So going in, Bill, the game plan was physics, from the beginning?
No, no. Because I had spent time in a lab working on proteins and muscle contraction—and, by the way—oh, I forgot to mention this—my mentor there, Roger Cooke, he said, “What do you know?” And “Eh, eh, eh,” right? As I said, I had taken a high school biology course when I was in junior high school. I took the biology course that was offered to ninth graders, in junior high school. It wasn’t especially advanced. And he said, “Huh. Well, you need to know something about proteins and structure.” So he handed me a copy of the first edition of Lubert Stryer’s Biochemistry textbook. Now, Stryer, as you may—again, this is something I can look back on and understand—Stryer’s Biochemistry book marks the transition from teaching biochemistry by having you memorize reaction pathways, to teaching biochemistry by saying, “There are these molecules, and they have structures, and they have physical properties, and everything is supposed to follow from that.” And Lubert himself is a very physical chemist, although he actually is an M.D., by the way, which is interesting.
And so [laugh]—one of the things about becoming engaged with science when you're really young is you have a lot of time. So I went home and I read this book. [laugh] It’s a biochemistry book; it’s this big, right? And I came back. And I remember Roger telling me many years later, saying that he felt like he knew he was onto something, because I came back and asked him all these questions, including, for example, about chirality. I'm not sure I knew the word, but I knew there was a problem about these things being right-handed and left-handed. I thought Stryer had kind of glossed over this. So, “What’s going on with this?” And that somehow was a sign for him that maybe spending time with me was going to be worthwhile.
[laugh]
If that was one of the things I got out of reading that book, then probably it would be okay. What was the point of this? I knew I wanted to do something that was connected to biology. And we should talk more about how that balancing of physics and biology crystallized. So in fact, one of the last great gifts that Roger gave me was that when I was—although we continued to work together a little bit—and again, the paper finally came out in ’79—a paper—could have been more, but anyhow. Not the most important paper either of us ever wrote, but it was my first one, so it meant something. He said, “Oh, you're going to Berkeley. You know, they actually have a biophysics program at Berkeley, and I know a guy in that program.”
And so there was another guy from this era—you know, when the Mössbauer effect was discovered, people went around and tried to figure out all sorts of interesting things you could do with it. And so one of the things you could do was to use it to do spectroscopy of biological molecules that contained iron atoms. So there was a community of people who did that, including at Illinois, as I mentioned. But there was a guy at Berkeley named Alan Bearden, who had gotten into this because his idea was to use the Mössbauer effect as a source of very narrow line width gamma rays so that he could do a very precise inelastic scattering experiment and measure the excitation spectrum of superfluid helium. That was what he was planning to do, shortly after the Mössbauer effect was discovered. And as he was trying to do this, he joked that the theorists’ predictions for the cross-section kept going down. His instrument was getting more and more sensitive, and the theoretical predictions were getting the effect smaller and smaller. And so eventually he saw the handwriting on the wall that this was—he was not going to make his reputation by measuring the excitation spectrum of superfluid helium. This was before the neutron scattering experiments. And so he needed to figure out actually—[laugh] “I've got this big Mössbauer rig; what do I do with it?”
And so he tried various things, and then he found out that there were these molecules that have iron atoms in them, so you could try studying those. I guess he was—I should reconstruct this—I think he was an assistant professor in the physics department at Cornell. He also had a famous father, which we could talk about maybe next time. And he said that it was made clear to him that if he was going to spend his time with biological molecules sitting at the bottom of the dewar that probably this wasn’t something for the physics department. So he moved to the chemistry department at the then relatively new UC San Diego, where he also came in contact with people who knew something about photosynthesis. You may know that there are iron-sulfur proteins that are involved in the photosynthetic process, so he worked on Mössbauer spectroscopy of the iron sulfur proteins. He eventually moved to Berkeley and continued to work on photosynthesis.
So anyhow, so Roger said, “Go talk to Alan.” So I did, in this naïve way that I had. When you asked, “How large was your world?” I did not have a picture of—I had this romantic view of the physics community from Bernstein’s article, right? And I had the Feynman Lectures. And I’d worked in a lab. I knew that the kind of lab that was—even though it was very physics-y compared to the biology labs around me, I knew that that environment was different than a physics environment. I had sort of figured that out. But I had never been in a physics environment.
Bill, on that point, I think we can pick up—that’s a great narrative turn in your naivete, when you're about to go talk to Alan. Let’s pick up on exactly that for round two, and we can do that.
Great.
[End 200824_0276_D]
[Begin 201007_0334_D]
This is David Zierler, oral historian for the American Institute of Physics. It is October 8th, 2020. I'm so happy to be back with Professor William S. Bialek. Bill, thanks so much for joining me again.
My pleasure. You should remind me where we left off.
Yeah. So we're going to pick right up—my first question, and then we'll get you right back into it—is to set the stage, Roger Cooke gives you Stryer’s book. You realize early on that it revolutionizes biochemistry. And so my question to set the stage for you getting comfortable at Berkeley as an undergraduate—the first question there is, why not enter biochemistry right from the beginning? Why is that sort of not the immediate pathway to you from high school, from Roger Cooke to pursuing science at Berkeley?
Well, I had this romantic idea of what it was like to be a physicist, and that stuck with me, both because [laugh]—I mean, the sort of proximal stimulus of discovering that I'm not very good at doing experiments but also—
[laugh]
—a fondness for more mathematical things, and just an appreciation for what I understood the subject to be—being clear that when you start out, these commitments are to things that you don’t fully understand, I wanted to be a theorist. And it didn't seem like there were theorists in biology, at the first approximation. Theory was something that physicists did. And then I found out there is a thing called biophysics.
Now, it turns out—sorry, to leap forward, we're engaged in—the National Academy is engaged in the first decadal survey of biological physics, which is part of the overall survey of physics research in the country. And one of the challenges is to understand a little bit about the history of the field. And biophysics meant many different things, let’s say, over the course of the 20th century. And in fact, if you think back, you realize that in the 19th century, there were people who just routinely crossed the borders between subjects that we now distinguish as physics, chemistry, biology, even psychology, Helmholtz being perhaps the most emblematic figure, although an important part of the story is that it wasn’t just a handful of people, right? That these things were much more fluid. Anyhow, all that to say that discovering that there was something called biophysics seemed like it solved my problem.
And, Bill, this was something available to you at Berkeley? For example, at a place like Princeton, probably not; biophysics was not anything that an undergraduate could pursue. But at a place like Wisconsin, they probably had a developed program that had gone back at least two decades. So what were your options at Berkeley?
So at Berkeley, there was actually, at that time—so I'm not sure that I can completely reconstruct what was my view of the landscape at Berkeley as a 17-year-old arriving as an undergraduate, versus what is my view having actually been on the faculty. [laugh] I don’t know that I can sort all that out. But it is true that in 1977, when I arrived in Berkeley, the biological sciences were spread over maybe a dozen different departments, and actually even more than that. Something I didn't fully appreciate at the time—since it’s a land grant institution, it actually has a College of Natural Resources, and there’s a fair amount of really high-quality biology that happened in the College of Natural Resources.
And fast forward when—around the time I was being brought back to Berkeley as a faculty member, they were also undergoing a reorganization of the biological sciences, and it seemed like at some crucial moment—the example of why we needed to reorganize the biological sciences was, you know, “Why do we have a botany department? That’s so 19th century.” When all the dust settled, it seems that the people who launched this grand view of how to reorganize biology were engaged in a certain amount of triumphalism. Anyhow, never mind. They had forgotten that Berkeley was a land grant institution [laugh] and had a College of Natural Resources. [laugh] And so their vision of one big biology department that they were going to be in charge of fails, because there’s a separate college. And actually, people in the College of Natural Resources have 12-month appointments, not nine-month appointments, and they're not going to give that up. So you're not going to unify all of biology at Berkeley under one banner. And when all the dust settled, we didn't have a botany department, but we did have a department of plant molecular biology. Which as far as I can tell was molecular botany. [laugh] So anyhow, I think that whole thing was kind of a bizarre exercise.
But, the state in 1977 was there was a botany department, there was a physiology and anatomy department, there was an immunology and microbiology department. There was a biochemistry department. I think there was a biology department. [laugh] The biochemistry department was located physically on campus as far as you could possibly get from the chemistry department. I would eventually learn—these stories were told even to students—that the proliferation of departments was essentially a response to personalities. Berkeley had this idea that when somebody important couldn't get along with his colleagues in his current department, you just make another department. And so that’s how biology at Berkeley evolved.
But one result of that was there was a department called biophysics and medical physics. It was a small department. It offered an undergraduate major. I have to say with perspective, I would say that the undergraduate major was rather poorly constructed, in that it attempted to take the union of the requirements for a biology major and a physics major—and then, to make that fit, you pile up the required introductory courses until you run out of room, and then you stop.
[laugh]
I don’t think this is actually—and then you add a few courses inside the department. I don’t think it’s actually a very good idea. So the answer seemed simple; I would become a biophysics major. I of course didn't know what this really meant, but it kind of combined my interests. And Roger sent me, as I think I mentioned last time, to talk specifically to a buddy of his from their previous scientific lives, Alan Bearden. And Alan said, “Well, there are a lot of weird things about the requirement structure for the biophysics major, but a peculiar feature of Berkeley is that students who maintain honors status”—so have a grade point average above something—“it is the prerogative of the department to waive any requirements.” And so what he said was, “Do what you want and keep your grade point average above this. But be sure you take the core courses in the department, but for the rest, do what you want.”
Now in terms of how the faculty was set up—and this is probably a good way of understanding the perspective of the department—given that the faculty were coming from a place where there were going to be stronger divides in their own education between the physics side and the biological side, were the faculty mostly physicists with a biological interest, or vice versa, would you say?
They were almost entirely physicists with a biological interest. The division inside the department was not between biologists and physicists but between people who were interested in sort of basic biological mechanisms and people who were interested in things connected to medicine. So again, I would sort of piece this history together, and there is a lot of interesting stuff here. The Lawrences’ mother had cancer. Ernest Lawrence’s brother was a physician. And at some point, when all else failed, they decided to stick mom in front of the accelerator beam and see if they could shrink her tumor. And they did. I don’t know whether it was 100% successful, but that was the beginning, at least in Berkeley, of what became medical physics. And so that was a thread that ran through that department.
So a lot of the things that happened at what would become Lawrence Berkeley Lab—it was already Lawrence Berkeley Laboratory by the time I was there, now Lawrence Berkeley National Laboratory—that was an important thread. And then there were people who were interested not in cancer therapies but in radiation biology, which of course was enormously supported immediately after World War II. And, by the way, radiation biology led into important things in what would become the mainstream of molecular biology—understanding DNA repair mechanisms and things like this.
I remember the guy who had the lab kind of across from the room in which I sat when I was a student, a guy named Bob Mortimer. Bob led one of the first efforts to map the yeast genome, eventually. And it’s also true that to this day if you want to—there’s a standard laboratory strain of yeast that people study, but then if you want to pick one that is a little closer to what’s in the wild, that has carefully—the strain is actually RM, for Bob, because he also had a connection to a winery, as I recall. Anyhow, so I can remember Bob doing this experiment for—and people were starting to wonder, “Well, why are we studying the impact of radiation on cells? Okay, I understand why everybody was worried about this in the immediate aftermath of World War II, but why are we still doing this?”
So he did the experiment of—he took two strains of yeast, one of which is deleted for certain DNA repair enzymes, and he just took buckets of them outside. And the ones that are deficient in the DNA repair enzymes died because of the exposure to the ultraviolet from the sun. [laugh] So the point was, well, actually, you don’t need to go to the extremes of being near a nuclear weapon in order for the cells’ responses to radiation to be a really important part of their survival. We're constantly being challenged in that way.
So anyhow, so there were lots of threads, many of which went back to these more medical things, and radiation biology, sort of what was the U.S. government’s view of the interaction between physics and biology in the immediate post-war years. There was of course at Berkeley also a strong effort in the use of radioisotopes to probe biological things, which went back—well, Calvin was still a figure, and we should go back further to Martin Kamen, another great historical figure, who had long since departed Berkeley, but the influence was still felt. So there were lots of these threads of what—okay, now with my perspective from here, I would say the application of the methods of physics to solve problems in biology.
But this is a very experimental world that you're inhabiting, it sounds like.
Yes, yes.
And yet even from the beginning, you have a nose for theory. You want to find out where you could pursue theory.
And that was not something that anybody was really doing in Berkeley at the time.
So just to state clearly, biophysics at Berkeley at that time was essentially, entirely an experimental program?
So in the biophysics department, I think that is correct. There are people who dabbled in theory in relation to the kinds of experiments they do, but they were primarily experimentalists. There was at least one person in the math department who taught a course in mathematical biology. I can’t remember whether that was officially a math course, or whether it was a graduate course in biophysics. I took the course. It was okay. He was an interesting character—Hans Bremermann. I think one of the things I learned was that there was—the idea of making direct connections between pure mathematics and the phenomenology of life, if you will—so the things that happen in biology—that, I don’t know, physicists have a style of, how does mathematics relate to the natural world? And it’s a well-developed—I mean, it’s hard to make it explicit. This is what philosophers of science spend some fraction of their time trying to figure out, right? What are we doing? But that was the style I was really infatuated by.
And so the pure mathematicians didn't—I mean, I appreciated some of the things, but I didn't find their way of looking at things quite right either. So yeah, I guess to answer your question more directly, it’s not like I went to Berkeley and I said, “Ah, there is Professor X. I would like to go work with Professor X and do what he does.” That set was pretty much empty. On the other hand—
And so there’s some danger there, in terms of you getting this green light that as long as you keep your grades up, you can do whatever you want. So there’s both danger and opportunity there for an undergraduate, given that there’s no perfect mentor who’s doing exactly what you want to do.
Exactly. And I was oblivious to the dangers, I would say. I don’t know whether it was—there’s confidence, there’s arrogance, and there’s naivete, and I don’t know what the balance was.
[laugh]
Maybe it’s not even a well-posed question, right? So depending on how you ask questions about what I thought I was trying to do, even at the time the same answer could be viewed as confident, arrogant, or naïve depending on [laugh] the character of the question, right? I'm sure there was a mix. I don’t know. So also, as I think we discussed last time, I spent a lot of time working on my own in high school, and so was very impatient. And so when I arrived in the Fall of 1977, the two physics courses I took were the two introductory graduate courses in quantum mechanics and electricity and magnetism. The course on quantum mechanics was taught by Stanley Mandelstam, and the course on electricity and magnetism was taught by George Trilling. George passed away not too long ago. [ed. Trilling died in April 2020]
I have a vivid recollection of going and talking to Trilling about whether I could take the course despite the fact that I was an undergraduate. I don’t think I said that I was a first-year undergraduate. I looked reasonably old. Nobody seemed bothered by this. And the conversation was I think the good side of the things that you're talking about at Berkeley, which was he said—I explained, and he said, “Well, have you looked at the book?” The book is Jackson. Jackson was actually on the Berkeley faculty at this point. He will reappear, if we keep talking. And I said of course, yes, I had. He said, “Well, have you looked at the problems in the book?” And I said yes. And he said, “Do you think you can hack it?” And I said, “I think so.” And he said, “Well, it’s your neck.”
What specific skill set was he asking you to self-assess?
I think what he wanted—so [laugh] look, I have no—I am telling you my recollection of the entire conversation. I mean, small talk at the beginning, and then that was the core. He did not ask me to go expand something in orthogonal polynomials or anything just to see if I knew how to do it. And in fact it’s a little ambiguous whether you're supposed to know that when you go in, or whether that’s part of why you took—in many departments at that point, the graduate E&M course was really there because it was replacing an older mathematical methods course. I mean, obviously you should know something about E&M if you're a physicist, but it also played that role, which delighted me.
What was the self-assessment he was looking for? I think what he wanted to know was, did I actually sit—did I try the exercise of reading a chapter and trying to do a problem at the end? And I had. And, you know, they're not easy. But it seemed possible. So that was the—I don’t remember a similar conversation with Mandelstam, although there must have been something; I just don’t remember. They were both very memorable courses.
I, sort of first day, got acquainted with a couple of people with whom I'm still friendly, as is the way in those things. One—some remain very close. Yeah, so Berkeley had this laissez faire, to put it nicely, attitude about its students. The flip side of that is, I think you could drift through the place without ever having a serious conversation with a faculty member. Nobody was checking. I think that the fact that I had been sent to Alan, and that I had a chat with him, and I was serious—and I don’t know what Roger had said to him, or Alan’s own self-assessment—he was going to look out for me. And that was—I would stay at Berkeley and get my PhD, and Alan would officially be my advisor, despite being an experimentalist and I was doing theory. And when I look back, are there things that it would have been nice to have a professional theorist as my mentor more closely during those years? Of course. That would have been wonderful. On the other hand—
But that’s not to say that Alan was exclusively your mentor? I mean, in terms of the other people you learned from.
No, I could float around, and I interacted with a number of people. Yeah. But one of the things that it—so the experience—I should talk more about my life as an undergraduate more broadly, because there are some important points, at least for me. In terms of doing science and trying to learn how to be a theorist, there were these experiments going on in Alan’s group—so Alan, as I think I mentioned, was interested in photosynthesis. And he had taken that path essentially—again, it was a methods-driven path. It was the fact that the early events in photosynthesis are accessible to spectroscopic methods. You're moving electrons around, so this leaves behind unpaired electrons. So you can do electron paramagnetic resonance. Obviously, the photosynthetic apparatus absorbs light, and so it must have interesting optical properties, and those optical properties shift around as the crucial molecules go through their cycle. So there was a whole set of spectroscopic tools. Perhaps the greatest master of this was George Feher at U.C. San Diego.
And so that meant that there were all these things happening in the group that were very much grounded in physics, and it was a process—it was a particular corner of the living world that I think had always attracted physicists’ interest, right? It absorbs light and traps the energy. In fact, there are also other mysterious things like the light gets absorbed over here, and it gets used over there, so how does the energy migrate from one place to another? That’s a subject which has returned in interest, right? That part of that migration appears to be quantum mechanically coherent, as we now—that’s from recent decades. So it was a place where physicists had long been fascinated. There had been these papers from 1974 by Hopfield about the role of quantum effects in electron transfer reactions in photosynthesis in particular. And in fact, my officemate was a PhD student [Bob Goldstein] who was trying to do an experiment to directly test some of those theoretical ideas.
And so part of my connection to the experimental effort around me was, you sit in the room next to the lab, you sit next to the guy who’s doing the experiments, and you sit there and you try and understand what the theoretical issues are that are at play in these experiments. And it had a broader impact. I mean, since one of the primary tools in the lab was electron paramagnetic resonance, I spent a lot of time trying to learn things about magnetic resonance. It helped a little bit, but in the physics department, there was Erwin Hahn, the wizard of resonance, as he was so aptly described. Another great character. So I did my best to learn as much as I could.
And of course, somehow—magnetic resonance was a good thing for students because I think at least then—because the early—so if you went back—we weren’t that far removed from the foundations of the subject, right? I guess the great sort of authoritative textbook was by Abragam, and had been written—maybe it was written in the late sixties, early seventies? I don’t remember now exactly. But I remember it was a big, thick thing, kind of marvelous. But you know, it was a place to think about, oh, fundamentally there’s something quantum mechanical going on, because there are spins. But by the time you have a whole bunch of them sitting in a container that you're making your measurement on, you can almost think classically about this magnetization that’s rotating. You can sort of forget that there was a spin. So what you're really thinking about is the—you're kind of watching the evolution of the quantum state, except because you actually put them all—you prepare them all in the same quantum state, it becomes macroscopic. Not for sort of deep and mysterious reasons about macroscopic quantum effects, but just by averaging. You have to think about the fact that there are relaxation processes, which are relaxation of coherence versus relaxation of energy. So when you come to equilibrium with your environment, the first thing you do is you destroy the coherence, and then you actually exchange energy and come to equilibrium in a more classical thermodynamic sense.
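[ed.: The classical picture described here, in which coherence and energy relax on different time scales, is conventionally captured by the Bloch equations; a standard form, given only as a sketch, is
$$ \frac{dM_{x,y}}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_{x,y} - \frac{M_{x,y}}{T_2}, \qquad \frac{dM_z}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_z - \frac{M_z - M_0}{T_1}, $$
where $M_0$ is the equilibrium magnetization along the static field, $T_2$ is the coherence (transverse) relaxation time, and $T_1$ is the energy (longitudinal) relaxation time; in the systems discussed here $T_2$ is typically shorter than $T_1$, so coherence is lost first.]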
And so anyhow, thinking about—and actually in the context of photosynthesis, there’s an absolutely crucial experiment, which was interestingly not done—well, it wasn’t done at Berkeley—but where it was shown that the electron paramagnetic resonance line that appears with the very first separation of charge in the photosynthetic reaction center, its line width is off from what you would expect. So if you make a model—if you make a compound—if you make the chlorophyll molecule which is sitting there, from which the electron is removed, and you take the electron away and you measure the hole that’s left—well, the unpaired electron that’s left behind, you look at its spin resonance signal, it is square root of two times wider than the signal that you see in the photosynthetic reaction center.
The reason is that the thing which is in the photosynthetic reaction center is a dimer, and the unpaired electron is actually shared between the two of them. And you get motional narrowing of the—the width of the line arises from inhomogeneous broadening due to the hyperfine interactions with all the nuclei, and if you share the electron between two identical molecules, then you get line narrowing, because you're averaging over twice as many nuclei. Sorry, am I getting the factor in the right direction? Yes, square root of two times—narrower, because it’s an average, not a sum. Okay. I think—I think that’s right. [laugh] It has been a very long time since I've actually gone through this.
[laugh]
It’s slightly embarrassing to be recorded for posterity and getting the sign of the effect wrong. The square root of two is the crucial part, right? That tells you—it’s not a coincidental—it’s not roughly 40% off, right? It really is a square root of two because of this sharing. And that’s something that had been understood not long before I started learning about these things. And the idea that there was a connection between physics at that level of refinement, and our understanding of how, after all, the process that captures all the energy for life on earth—photosynthesis—that’s very appealing. It’s not like by understanding that factor of square root of two we immediately understood how photosynthesis works, but as a young student, that was very compelling.
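[ed.: A minimal sketch of the $\sqrt{2}$, under the usual assumption that the line width comes from inhomogeneous broadening by many independent hyperfine couplings. For the electron localized on one chlorophyll, the second moment of the line is $\sigma^2 \propto \sum_{i=1}^{N} a_i^2 \sim N a^2$, with $a_i$ the coupling to nucleus $i$. If the unpaired electron is shared equally over the two halves of the dimer, each coupling is halved but the sum runs over $2N$ nuclei,
$$ \sigma'^2 \propto \sum_{i=1}^{2N}\left(\frac{a_i}{2}\right)^2 \sim 2N\cdot\frac{a^2}{4} = \frac{N a^2}{2}, $$
so $\sigma' = \sigma/\sqrt{2}$: the reaction-center line is narrower by $\sqrt{2}$, as recalled above.]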
And so being in a group where people made measurements of that flavor and were trying to understand things—and I would go off and calculate things that were vastly more complicated than was necessary in order to figure out what was going on in the experiment—that day-to-day interaction with an experimental group that was working on a system which illustrated important theoretical questions that I found interesting, I think that had an enormous influence on my view of how science was supposed to work. So maybe if there had been let’s say a more professional theorist, maybe I wouldn't have developed—I wouldn't have come as early and as firmly to the idea about—to my ideas about the interaction between theory and experiment. So it was different.
The other thing—okay, so my undergraduate years were relatively short, and I decided to stay to do my PhD—in retrospect, what were the alternatives? So one idea was I could go to Harvard. And the reason was that there had just been this fantastic paper by Berg and Purcell, in 1977, so really at this time. So Howard Berg had discovered that bacteria swim by rotating their flagella. There was all this fantastic work on chemotaxis. He had built a tracking microscope so that you could analyze the movements of individual bacteria. There was this beautiful picture of running and tumbling. And then in ’77 came this amazing paper by Berg and Purcell called “The Physics of Chemoreception” about what are the physical limits to the smallest concentrations that can be detected, and essentially arguing that the performance of bacteria was such that they needed to be counting effectively every molecule that arrived at their surface. And this was just gorgeous, and it remains a remarkable and inspiring paper, and actually I would eventually work on the problem of to what extent their marvelously intuitive arguments could be made more rigorous. And this had some resonance in the community and eventually had an impact on my own theory/experiment collaborations. We can get to that later, if you have the energy, all in the right sequence.
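[ed.: The central result of Berg and Purcell (1977) can be stated compactly. For a detector of linear size $a$ averaging over a time $\tau$, with molecules arriving by diffusion (diffusion constant $D$) at background concentration $\bar{c}$, the fractional accuracy of a concentration measurement is limited, up to factors of order one, to roughly
$$ \frac{\delta c}{\bar{c}} \sim \frac{1}{\sqrt{D\,a\,\bar{c}\,\tau}}\,, $$
which is the sense in which the bacterium must effectively count every molecule arriving at its surface; the later work mentioned above asks how rigorously this intuitive argument can be derived.]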
But so I thought, well, okay, so here’s an example of somebody doing theory that is of the flavor that I like. So I applied to Harvard, I was accepted, and I ran into Purcell. Because, let’s see, was Berg—where was he at this point? Maybe he was in Colorado at this point. He would then move to Caltech and eventually to Harvard. So I wrote to Purcell. This was a physicist whose name I knew. [laugh] And I wrote to him, and somewhere I might still have the letter in which he explains that he’s really not an expert in any of this stuff; everything he knows he learned from talking to Howard, and so it would be ridiculous for him to advise a student in this. So I stayed in Berkeley. The other [laugh]—I don’t know—so let me complete this thought. The other option, which somehow I didn't—I'm not sure how seriously I took it as should I apply to do this—but given my interest, the obvious person to try to work with was Hopfield.
Right.
Who, I guess during these years, is in transit from Princeton to Caltech. And I don’t remember exactly when he moves; 1980, I think. So somewhere around the time that I was getting ready to switch from being an undergraduate to a graduate student. And I don’t know why I didn't do anything about that. Eventually, one of my closest scientific friends would be somebody who had been a student of John’s during this period, José Onuchic. So at some point, we kind of compared notes on our experiences as graduate students, and what I realized was that there were things—life’s not a controlled experiment, right, so you can look back and say, “What were the good things and what were the bad things?” I think not having the experience as a PhD student of having a successful theorist who could explain to me how to sort through ideas, how to focus, how to figure out which are the things that you should push on and which are the things that you could leave aside—
What was Alan’s feeling about you going to Harvard?
I don’t remember discussing it with him. I think that might have come and gone. I don’t know. It might have all come and gone before—
But did he encourage you to leave Berkeley? Did he want you to stay?
I have no recollection of this conversation, either way.
But obviously, if you were aware, he must have been aware that to some degree, you would not be best served to have an advisor who did not have a theoretical perspective.
Yes. The flip side of this, which I think only—I think it took me a long time to realize this, and conversations with José about what it would have been like to be John’s student—the advantage was that my ideas were my own. And I had an extraordinary independence, some of which in retrospect was probably squandered doing things that turned out not to be productive. But, that’s okay.
I also should admit that I was in a hurry. I mean, all of these heroic figures that I learned about all somehow emerged very young, and so the idea of going through graduate school very quickly and everything was somehow very appealing to me. In retrospect, thinking about my own experience but then also thinking about what advice to give to young people, I think the issue is not what age you are when you get your degree. The issue is what age you are when your research program is your own. And so the gift that Alan gave me was that my research program was my own from the moment I walked into his office.
That’s a known quantity, and no matter where else you might have considered pursuing graduate work, that was not going to be a given. That would be something that you’d have to fight for.
Right. And by the time push would actually come to shove that you have to decide what to do, I was engaged in a whole bunch of projects, and the idea of leaving those behind to start something new didn't seem so attractive. And after all, my years as an undergraduate hadn’t been that long. As it would turn out, I would spend a total of six years. So a lot of people spend six years in graduate school. So maybe if I had spent a kind of normal four years as an undergraduate, the feeling that I needed to go somewhere else would have been stronger. But I don’t know; I think at different institutions, there are different sort of quasi-religious beliefs about this. When I was a postdoc at Santa Barbara, Tony Zee, with whom I had fantastic interactions and we wrote some very fun papers together, and also he was just an incredible person to learn from, had this capacity to become interested in whatever people were doing around him and make you better at it.
Okay, this is a bit out of order, but an important thing, I think, is that what I missed in my professional theoretical education as a student at Berkeley, I think would come to be replaced by my experience of being a postdoc in Santa Barbara, which was a sort of slightly higher stress environment in which to [laugh] have that happen, because you were just surrounded by professionals. So one of the reasons why that period is so important to me is because of that.
Before we jump into graduate school with both feet, I want to ask you also to broaden out the story of your undergraduate experience. Berkeley being Berkeley and given how multi-variegated your interests were, how politically minded were you, as an undergraduate? How involved were you with all kinds of things that were going on in Berkeley? Obviously, the Vietnam War had ended at that point, but the lingering effects, the legacy, of the late sixties and early seventies certainly had not evaporated entirely from the campus.
So [laugh]—I once said in a discussion about hiring many years later that I think that when you go somewhere, you go to your fantasy of the place, and by the time you leave, you leave the real place. To which one of my senior colleagues very quickly responded, “Yeah, Bill, and they feel the same way about you.” [laugh]
[laugh]
Which I thought was a very good response. I am not sure, honestly. I mean, I'm not saying this to say that I had some unique grand view of what was going on. I don’t know to what extent I actually went to the real Berkeley and to what extent I went to this place and I had this vision of what Berkeley was supposed to be. It was the Berkeley of student unrest and political engagement, and it was the Berkeley of people like Oppenheimer who built, after all—I mean, somewhat ironically, right, that Berkeley would sort of decline as a center of theoretical physics. But it is the origin of a—it is the first school of theoretical physics in America. So that very rich history was someplace where I lived mentally, right? That’s what I saw around me. Maybe this was just delusional.
I think the strongest—sorry, I guess it’s just after my undergraduate years, I remember the tenth anniversary of Kent State, and being quite shocked by the lack of—sort of weak response to that. I remember, having grown up in San Francisco—in terms of major political events, having grown up in San Francisco, the assassination of Harvey Milk and George Moscone stands out. I can picture myself walking across Sproul Plaza in Berkeley when I found out about this, actually from a high school friend who had just heard. It was kind of surreal. You may remember that this was also—not long before that is Jonestown. So those were things that were very much in the air, and I still had lots of—I was still very connected to things happening in San Francisco, so that was part of my life.
In terms of sampling the rest of the university, I spent an enormous amount of time in the philosophy department. So I started by taking the history of philosophy sequence that philosophy majors are supposed to take, which I guess was the lowest—those were the lowest numbered courses that I actually took. They really were for first-year undergraduates. So that, I did take, as a first-year undergraduate, in contrast to my somewhat odd path through the science courses. The history of philosophy sequence was interesting.
And then, I would go on to take—I think every quarter that I was there, I took a philosophy course, so I'm trying to enumerate them in my mind. The history sequence is the first three quarters. Two of the more advanced courses stand out, or rather, two of the professors. One was Bert Dreyfus, who wrote What Computers Can’t Do. And he taught two remarkable courses, one on existentialism in literature and film, and one on Kierkegaard. Those were extraordinary. In the course on existentialism in literature and film, he obviously covered various pieces of literature and film, but there was a very long segment on The Brothers Karamazov. And that is the only time I have ever been in a course where the students applauded when a kind of chapter of the course ended. There’s often applause at the end of a course, but that was extraordinary. He took great delight, and with the timing of a great actor rather than a comedian, in telling us that when asked what fictional character she would most like to portray in film, Marilyn Monroe answered, with no hesitation, “Grushenka.”
[laugh]
And it stuck. It basically sort of rejiggers your view of popular culture. Those were remarkable courses, and Dreyfus was really very compelling. [laugh] He’s also one of those people—we ran into him certainly after I had graduated from Berkeley, and I don’t know if it was immediately afterwards? So, you know, I had been an undergraduate, and then I stayed for graduate school. And I didn't really—I would see him on campus and wave and say hi, but then some time passes, and we're on a bus at JFK, the airport, going from one terminal to another, and there he is. And it would—over the years, he eventually developed a problem with recognizing faces, so I don’t actually remember whether he recognized me or not, at this point, but I said a few words. And what was fascinating was that the conversation, it was like we were transported back, with no interruption. [laugh] So, very memorable character.
The one that had more direct impact on my intellectual development was I took philosophy of science from Paul Feyerabend. And that was an incredible experience, and in many ways, I would say more impactful than the courses in the physics department.
Oh, wow.
And I think the reason is that there are moments that stand out in courses in the physics department; I'll enumerate some of them. Actually, there’s one I'm going to mention because it relates to this issue about the relation of theory to experiment. But in the science courses, I knew that I was interested in this stuff. I could pick up a book, and going to class was like having a guide to the book, right? Obviously, there were individuals who were more or less compelling as lecturers and so on, but it’s not as though somebody told me something that I didn't know existed, or showed me that there was something to explore that I didn't know was there. I might not know the thing they just said, and I might not have the perspective, but the feeling that there was something completely new to think about somehow didn't happen so much.
And I don’t know if I'm being unfair, but in the philosophy department, that happened with much greater frequency. And in particular, taking Feyerabend’s course—what’s the Morgenbesser line? Morgenbesser, the philosophy professor at Columbia, supposedly once gave as an exam, “Some people think that Marx and Freud went too far. How far would you go?” Right? So philosophy of science from Feyerabend has that flavor, and many people found him to be just too much. I guess maybe because I was the right age, I found it incredibly thought-provoking.
There’s this line in Bertrand Russell’s Problems of Philosophy where he tells you about all these amazing questions, and he said (and I’m paraphrasing): “Well, if you're looking to philosophy for answers to these questions, looking in the book for answers, I'm sorry, you're going to be disappointed. Rather, our job is to show you how close beneath the surface these incredibly deep problems lie.” So in that sense, I think Feyerabend was an extraordinarily successful philosopher, at least for me, because he made me think hard about some of these problems.
And the structure of the course was wonderful. I can’t remember even if he passed out a syllabus at the beginning; he just started. And he started by telling you about, “So here’s one view of how science works. And here’s an example from the history of science that shows you that this can’t be how it works.” And then he would say, “So here’s another view of how science works.” [laugh] So this is in fact the structure of—part of the structure of Against Method, the only book that I know that has a footnote in the title. So the formal title is Against Method, and I think “An Anarchist View of the History of Science” or something like that. And there’s a footnote to anarchist, to say exactly what he means by “anarchist.”
And what’s great is that he would do this, and he would give you just example after example. And then at some point, some poor kid in the class just couldn't take it anymore. Right? Because as far as you could tell, he was going to destroy every rational view of how science proceeded, and he wasn’t going to stop until he was done with everything. So somebody—and I think I know that in my year, this is what happened—but I always imagined that it was some engineering student who had taken this course because they figured this was a way of satisfying their humanities requirement that wouldn't be too painful. Except it was actually excruciating, right? [laugh] Because here you thought that you understood what was going on, and it was gradually being torn apart. Much worse than, I don’t know, taking a poetry course or something, where you just learn—either you liked the poetry or you didn't, but maybe it wouldn't bother you if you didn't like it, right? [laugh] You’d just say, “I don’t know, they required me to take a humanities course, so I took this one. Poor choice. Too bad.” But this was about science, right? And all the people in the class cared about science. And here, gradually, it was being dismantled in front of our eyes.
And so only when the student just couldn't take it anymore did he lay his agenda out on the table, which was, first of all, that he thought science was just one of the most interesting things that human beings did, and that’s why he actually spent his life studying it, and anybody who thought otherwise obviously just had an irrational view of his own psychology. [laugh] Which is a kind of obvious point. If you didn't think it was so interesting and important, you wouldn't bother studying it. So that was the first observation. And the second observation was that most of these attempts to rationalize what scientists were doing just didn't do justice to the richness of the human enterprise which is science.
And then, you asked about politics, right? And then he said the thing that I think was really what drove it. He said, “Look, we have arrived”—and this seems strange, saying it now—“We've arrived at a moment in our development as a society where scientific has replaced God-given.” And so you can make an essentially ad hominem attack on somebody’s proposal for addressing the problems of our society on the grounds that it’s unscientific, and insist that what we need instead are scientific solutions to our societal problems. So he took the view that the belief in scientific method as a kind of inexorable path to truth could be used as a cudgel, not necessarily by practicing scientists, who would quickly enough find out they were wrong, but by people in the political sphere who could then insist that what they were doing was not discriminating against people or being inhumane, but rather being scientific.
And I found that to be really challenging. And actually, it’s something—until we had this conversation, I hadn’t thought about this particular resonance—I have been thinking about the fact that today, in our society, in our country, there’s this idea, to be very practical about it and not try and abstract away—is wearing a mask in public a political act? And you say, “Well, one position is, yes it is. I don’t want to be—I'm rebelling against the intrusion of the state into my personal life.” Blah blah blah. The opposing view is, “No, no, no, no. It’s not a political act. It’s just the science that tells you to wear a mask and stop the spread of the disease.” There’s a problem here, which is I think very much related to—in this case, I think you should wear a mask [laugh], so I don’t actually have a problem with the conclusion. But I think that some of the subtlety of what Feyerabend was getting at is important. You should only wear a mask if you actually care about other people. What the science tells you is the impact of wearing masks on spreading the disease. But what if you actually value your personal freedom above the lives of others? Well, then, in that case, don’t bother wearing a mask. So there’s an issue about values, which is not decided by science. And this isn’t actually about whether the scientific inference about the nature of the spread of the virus is correct, right? What you need is to couple your—science provides you with certain kinds of answers to certain kinds of questions. And you could argue that maybe the most important part of science is that it provides you with a framework within which questions can be sharpened, so that they have answers which are satisfying in their own terms. The relationship of those answers to what you want to do as a human being requires injecting something else. And so I think part of what Feyerabend was trying to remind us is to not confuse the progress of science with human progress, right? That there’s something else that is missing there.
He also was worried that—and this is where I think other people would say, “No, no, no, no, no, he went too far”—our confidence in science as a path to knowledge becomes so great that we stamp out other paths to knowledge. This is one of my favorite examples, and I think, again, there’s a version of it which is easy to parody as a scientist and say, you know, “What’s he talking about?” But then it’s also true, for instance, that there’s a confusion sometimes, particularly in the science of more complex systems, notably humans, between absence of evidence and evidence of absence. There’s a difference between saying that there has been no controlled experiment showing something happened and saying that it didn't happen. I mean, I don’t know, are infants who smile at their mothers just having indigestion, or is this genuine communication? Well, trying to answer that question in a way that you would find scientifically satisfying is damn difficult. And so our inability to do that gets turned into: mothers are deluding themselves into thinking that their children are smiling at them, when in fact they have indigestion. Well, no. [laugh] Okay? Also, as I like to point out, dismissing something as old wives’ tales when you're talking about small children seems like a bad idea, since it’s “old wives” who have the most interaction with them and have collected the most data. That kind of argument is right at the heart of the thing that Feyerabend was worried about, right? I don’t know.
Or, you know, does meditating improve your health? I don’t happen to meditate; I'm just not patient enough. Well, how do you—I mean, let’s remember, first of all, that most drugs which are supposed to work on particular diseases only work on some fraction of the patients anyway. Now think about, how would you actually design an experiment to convince you whether meditating improves your health? It’s pretty hard, right? Now, actually, there has been progress on this, right? There’s a whole community of people who worry about this. And what does sticking somebody in a magnet and imaging their brain while they're meditating teach us that the Zen masters didn't already know? I don’t know. I mean, well, now we know for sure that something happens in your brain. Well, people report having had a different experience. So unless you're a dualist, you're fairly strongly committed to the idea that something was different in their brain. This doesn't show that meditating is good for your health. It just shows that people’s report of their internal experience is related to something that happened in their brain. Which, as I say, I think we believed beforehand.
So anyhow, for me, Feyerabend was incredibly important, because he made me think about not only the relationship between science and society, but also even the scientific process, right? One of the things that I've been interested in is problems of how the brain performs inference and how we extract meaningful information from the signals in our environment, and things like that. On the one hand, I want to turn those questions into things that have some mathematical precision. But then on the other hand, I sort of hear those lectures from long ago, worrying about whether that’s sort of equivalent to trying—that’s yet another attempt to produce a rationalization of what happens when we draw inferences about the world, which of course is somehow science, writ small. Science writ large is the accumulation of all these inferences. So I don’t know, that was incredibly important for me.
Bill, let’s go back to the physics classes that you wanted to expand upon.
So in terms of understanding the relationship between theory and experiment, I would say the course that was the most compelling—so actually, let me back up a moment. I learned a number of things about teaching. As I later realized, I'm not sure it registered quite in that form, at the time. I was trying to decide whether to take the undergraduate course on statistical mechanics or the graduate course on statistical mechanics. And Charlie Kittel was teaching the undergraduate course.
Oh, wow.
And this may have actually been the last time he taught it. It is true that he taught from a set of mimeographed notes which became the next edition of Kittel and Kroemer, a standard textbook for the course. It is a book with which I now have many problems; I have many problems with how he taught statistical mechanics. I think actually there are a number of things that are deeply wrong about the approach he took. But—
On its own terms, or with your biological sensibilities?
Well, hard to separate. I think that one of the things that Kittel did was to argue that in some ways, statistical mechanics was well posed only when you were summing over discrete states.
Aha.
Because then you didn't have to worry about that funny factor of H bar to normalize the volume of phase space, to make something dimensionless, blah blah blah.
And this would have been a proposition more rooted in theory or experiment, from Kittel’s perspective?
I'm not sure. I never had an explicit conversation with him about it. But the impact of this on our nation’s physics curriculum is enormous. What it did was to push statistical mechanics to being a course that you can only take after you've taken quantum mechanics. In that version of statistical mechanics, the centerpiece of the course is the ideal quantum gases. And so, as I say, because of that structure, it pushed the course late. Certainly if you're interested in the physics of biological systems, this is a bad idea, because you should really know something about statistical mechanics as early as possible. It’s also weird in that it’s completely ahistorical, which is an important contrast to all the rest of our physics teaching. Physics teaching is usually criticized as being overly historical, but the notion that there is no statistical mechanics without quantum mechanics seems kind of wrong by fifty years. Right? [laugh] This would come as a great surprise to Boltzmann, Maxwell, and Gibbs, right?
[laugh]
And importantly, Einstein. Because since statistical mechanics [in Kittel’s version] is about summing over states, figuring out how on Earth you're ever going to manage to talk about Brownian motion is incredibly complicated. And in fact, not only does statistical mechanics as a subject get pushed late in the undergraduate physics curriculum, later than it belongs historically and, I would argue, later than its utility warrants, both for helping you explore the natural world and for much of the core of physics today, but worse than that, because of this focus on summing over states as being the whole thing, Brownian motion becomes a really advanced topic. It’s about fluctuations and it’s about dynamics, and, you know, that has no place in the general introduction; it just sort of hangs off the end. Except, of course, that this is actually how we know that the world is made out of atoms and molecules. This is really problematic. I'm not saying that I know for sure what the best solution is in terms of how to reshape our physics education, but—
But it’s not controversial to assert that stat mech is so fundamental, literally so fundamental, it belongs at the foundation of the curriculum.
So, I think—okay, everything I just said, I'm willing to argue. I think the idea that you can get a bachelor’s degree in physics from a reputable university and not understand Brownian motion seems to me as shocking as anything else that you would cross off the list, right? And in particular, I think it’s very important for people who grow up in this era, where we have the impression that we are directly visualizing microscopic events. Of course, we're not, right? [laugh] We're doing things that feel that way. This was brought home to me by a lecture that Frank Wilczek gave, in which he showed one of these images of an event at an accelerator. And you have the detector, and you see the jets, right? And he says, “So now we say without pause—there’s a quark.” [laugh] You're pointing to nothing, right? It’s the thing that’s at the end of the jet that comes out.
[laugh] Right.
But, what are we doing? So anyhow. So in particular, for people who grow up in this era, it’s really important to understand that the way in which the scientific community arrived at confidence that the world is built out of discrete particles, is the analysis of Brownian motion. And you can say, “Well, we don’t need to do it that way anymore.” Okay. But it’s also true that along the way to learning that, you will learn a whole bunch of things that are incredibly useful in thinking about the world. So that’s a position that I'm willing to put my hand in the fire for. Exactly what we should do about it, and is there a unique answer to what we should do about it, I don’t know. And that might depend on what parts of science you find most exciting. Does understanding Brownian motion help you think about cosmology? I don’t think so. Maybe I'm missing something, but I don’t think so. So maybe if you think that’s where you're headed, maybe you could skip it, and it would be okay. I don’t know. But that’s not how we draw the physics curriculum. We have this view that the physics curriculum is a set of ideas that are somehow foundational for thinking about anything we might find ourselves thinking about over the next decades. So the claim that I want to do—I want to explore this part of the world is not an argument against something being in the foundational curriculum.
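To make the Brownian motion point concrete, here is the standard Einstein-Perrin argument in outline; the notation is supplied here and is not from the conversation. The mean-square displacement of a suspended particle grows linearly in time, with a diffusion constant that contains Avogadro's number:

\[
\langle x^2(t) \rangle = 2 D t,
\qquad
D = \frac{k_B T}{6\pi \eta a} = \frac{R T}{N_A}\,\frac{1}{6\pi \eta a},
\]

so watching the jiggling of visible particles of known radius $a$ in a fluid of known viscosity $\eta$ yields $N_A$, which is how Perrin's measurements made the atomic hypothesis quantitative.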
Anyhow, a less well-thought-out view is that—you say that statistical mechanics is foundational—I think that in the U.S. physics curriculum, it’s sort of like—what was it somebody once said? “In physics, we don’t teach undergraduates anything that happened after”—pick your year. Roughly speaking, World War II. So let’s say 1950. And then in biology or computer science, we hardly teach anything that happened before 1950. So I don’t know, maybe we should worry about that, as a community. So after 1950, we have this complete change in our understanding of emergent phenomena in many-body systems. And statistical mechanics is still presented to students as if the glory of physics is the reductionist program of taking the world apart and finding its parts, and finding the parts of the parts, and so on. And this is of course an incredible intellectual achievement. And then statistical mechanics is sort of this add-on that says, “Oh, by the way, if you want to get back to the macroscopic phenomena of your everyday experience, here’s a tool for putting in those fundamental constituents and calculating what happens when you're looking at Avogadro’s number of them.”
And there are two problems with that. One is that it leaves the impression, perhaps, that there isn’t anything very fundamental about the problems of statistical mechanics itself, which I think is a mistake. And it misses the “more is different” thing, which has two parts, right? One is a slightly, almost mystical part: the things that emerge when lots of particles interact are so surprising, given the character of the fundamental constituents, that although in principle they follow from the interactions of the fundamental constituents, saying that they merely follow is missing the point. So that’s one observation. But then maybe more deeply, part of what we understand is that the macroscopic phenomena that we observe come in classes which encompass many different microscopic possibilities.
So there’s many examples, but maybe the most everyday one is that there’s a subject called fluid mechanics, and it doesn't matter what the fluid is made out of. So in some way, all the molecular details get lost as you pass to the level where the equations of fluid mechanics are valid. And what that means is that there’s some sense in which the equations of fluid mechanics are not consequences of the microscopic details, right? Because I can change the microscopic details and the equations stay the same. If you look at Anderson’s essay “More is Different,” Phil of course was fighting against the claim of the particle physicists to be doing the only fundamental thing. And so I think the emphasis was on the idea that it’s not that we don’t believe that the microscopic constituents and their interactions predict the behavior of macroscopic phenomena. Nothing mysterious happens. But what happens is so surprising and so different that it acquires a life independent of the microscopic constituents, and to say that knowing the microscopic constituents then makes this uninteresting and not fundamental is somehow unfair.
But I think we now realize—I mean, he gave that lecture in—I think it was published, I guess, in 1970. He gave it in 1967. I think it’s more than that, right? The notion that macroscopic behaviors are derivable from the interactions of more fundamental microscopic constituents misses the point that the mapping is many to one. And so that means that the macroscopic phenomena are themselves classifiable, right, and that that classification is fundamental, and that that’s a perfectly sensible way of looking at the world.
So for example, if you think about the ideal gas law—well, there’s several ways of thinking about it, right? One is the sort of kinetic theory, so the most reductionist view is, “I have a box filled with particles, and they have a mass, and they have a distribution of velocities, or momenta, more precisely. And they collide with walls. And there’s some rule for how the collisions work. And now I compute momentum transfer to the walls, and that is what produces the pressure.” Right? And I go through this very careful calculation. Which is extremely instructive for students.
The only problem, of course, is that if I made the molecules out of something else and I made their interactions with the wall different, and blah blah blah blah blah blah blah, the ideal gas law would still be true in the dilute limit. So somehow, that calculation—and it’s something that you find, if you actually stand up in front of a class and try to do this—you have all this microscopic detail, and it all disappears, seemingly miraculously, and you're left with something that relates macroscopic quantities to one another.
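For reference, a minimal sketch of the calculation being described, in standard textbook notation (the symbols are supplied here, not quoted from the interview): momentum transfer from molecular collisions with the walls gives the pressure,

\[
P = \frac{1}{3}\,\frac{N}{V}\, m \langle v^2 \rangle,
\qquad
\tfrac{1}{2} m \langle v^2 \rangle = \tfrac{3}{2} k_B T
\;\;\Rightarrow\;\;
PV = N k_B T,
\]

and the molecular mass and the details of the collisions with the walls cancel out of the final relation, which is exactly the disappearance of microscopic detail being remarked on.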
Again, this is something I remember from Kittel’s statistical mechanics course. The centerpiece of the course is computing the specific heat of the ideal Fermi gas, with all those, whatever it is, the zeta function out in front. The coefficient of the—it’s linear in temperature, and you have to get all the factors of pi right, in the coefficient out in front. And then there’s a figure, and it shows you the specific heat of potassium, which is linear, at low temperatures. It’s great. There’s only one problem—the mass is not the mass of an electron; it’s an effective mass. So how the hell did that happen? And you know, Charlie Kittel, right? [laugh] He’s the guy who wrote the book on solid state physics! [laugh]
[laugh]
Not a word! Not even a pointer that there’s something deep to be said there, but it’s too advanced for the course.
Not a word, because he hasn’t considered it, or he has and it’s inconvenient to the larger framework?
Somehow, he doesn't view it that way. He says, “Well, electrons behave as if they have an effective mass.” But you know, they're charged. They're interacting. Why is it that I can do a calculation of non-interacting particles and get the right answer, modulo the effective mass? And we're having this conversation, right, in 1978, let’s say. So this is 25 years after Landau Fermi liquid theory, right? Twenty-five years is a long time. So I'm not saying that we should all have been learning Landau Fermi liquid theory, but you know, shouldn't there be some pause, to talk about why this works? And he says, “Well, electrons behave as if they have an effective mass.” But could you at least let me know that thereby hangs a tale? [laugh] There’s something deep there, right?
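For reference, the textbook result under discussion, in standard notation supplied here rather than taken from the transcript: the low-temperature specific heat of an ideal Fermi gas is linear in temperature,

\[
C_V = \frac{\pi^2}{2}\, N k_B\, \frac{T}{T_F},
\qquad
k_B T_F = \frac{\hbar^2 \left(3\pi^2 n\right)^{2/3}}{2m},
\]

and fitting the measured linear coefficient in a metal such as potassium requires replacing the bare electron mass $m$ by an effective mass $m^{*}$. Why a system of strongly interacting, charged electrons behaves like non-interacting quasiparticles with a renormalized mass is the Landau Fermi-liquid story being alluded to.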
So that’s a criticism of his approach to the course. However, I learned a lot of things about teaching, because he was an extraordinary teacher. And so his explanations of things—although from the perspective I have now, I think back about how he structured everything, and I would say I find it unsatisfying—locally, when he was trying to explain what was going on, it was fantastic. And also, his discussions of the relationship of ideas in statistical mechanics to the way physical chemists think about things, that was also very influential, because of course being interested in biological things, a lot of the interaction was with people who came more from chemistry than from physics.
He also did something which was wonderful. This is literally about the craft of teaching. He said—there’s always the question of is the exam open book, or open notes, or whatever. Remember, this is before the internet. And he said, “I think that saying you can bring whatever you want is not a good idea, because there’s a danger that you'll just get lost in leafing—if you're leafing through the book looking for a hint for how to solve the problem, you're not going to do well.” Right? “So don’t have that be your plan. Instead, let me recommend that you bring one piece of paper into the exam with you, and you can write on both sides of that piece of paper whatever you think will be helpful to you as you try to solve problems.”
And then he said, “I've been teaching a long time. There have been years in which I let people bring the book. I've had years in which I let them bring nothing. I've had years in which I let them bring one piece of paper. I see no correlation between this and the scores on the exams. On the other hand, people feel much better if they can bring something. And”—he said—“the exercise of deciding what you think belongs on that piece of paper is a very effective way of studying.”
And so, I don’t think he was unique in having that insight, but he is the person that I heard it from. [laugh] And I also liked his thing about, “It actually doesn't matter for grades whether you bring a piece of paper or not, but it does matter for how you feel about it, and so I think I should do the one where everybody feels better.” Which perhaps is a modest version of the conversation we were just having about science versus policy, right? The science says that it doesn't matter. [laugh] But that’s only if you only measure the score on the exam.
So yeah, and I remember his discussion, which I pushed him on a little bit, because I really didn't understand, and he was very clear about it—when you see a table in a chemistry book of the—there are bond energies, but then there are also bond free energies. And I think I understand what the bond energy is, because that’s about ground states. I cut the bond, and the ground state energy changes. In what sense can I attach an entropy to the bond? So there’s a subtlety there about, well, forming the bond restricts the entropy of the molecule as a whole. But you couldn't break two bonds and get twice—get the sum of the entropies. That doesn't work. Whereas to some extent, breaking two bonds, you get something—the sum of the energies wouldn't be such a bad approximation. Anyhow. So that whole discussion, he was very clear on. That was fantastic. These little things that stand out. Also, of course, it is worth remembering that one of the most influential teachers of physics in the second half of the twentieth century was a man who had a very serious speech impediment—
Yeah!
—which he conquered. Every once in a while, every couple of lectures, he would get stuck, and push through.
That’s a Moses-level observation.
Yeah. Well, I'm not always sure I agree with the direction in which he led his people—
[laugh]
—but nonetheless, as a student, it was compelling. I have a vivid picture of the first time he sort of seized up, stammering, and then fought his way through. I thought, “Wow.” I don’t think I was aware of it at the time, but you knew he wasn’t a young man, so he had been doing this for a long time, right? And that is an amazing thing to have to—I guess we have examples of politicians who have overcome speech impediments, right? But that’s a—you're setting yourself up for a—faced with that particular challenge in life, you might think about finding a way where you don’t have to confront it on a daily basis. And he clearly didn't do that. So that was inspiring.
I also remember talking to him about—and I don’t remember whether this is as a student or as a colleague—I remember talking to him about being a textbook writer. And he said that he really—it was a conversation I also had with Dave Jackson. Two great textbook writers. And they both talked about writing textbooks that got to the point where they became so popular—popular by physics standards, right? [laugh]—that publishers started to have opinions about what the book should look like in order to be more marketable, and their fights, more or less successful, against that. So that was an interesting thing to talk about.
Bill, let’s set the stage for graduate school. And I want to ask, given how self-aware you are of the limitations on the theoretical end of things, and working with Alan, to what extent did that box you in? In other words, did you see opportunity to work within those limitations, or did you specifically want to set up your research agenda so that Alan would be front and center as a mentor, but you would have outlets to pursue the things that you wanted to, where at a certain extent, Alan could only be so helpful?
I don’t think I was that—
Strategic?
Yeah. So I was having an enormous amount of fun exploring. I should say that a significant part of my time would be spent fascinated by the possible role of quantum mechanical effects in biological systems, partly inspired by what was kind of mainstream in the context of photosynthesis. Although I was also interested in the fringes there, too. And partly fascinated by the quantum limits to measurement and wondering whether the measuring instruments that we carry around—our eyes and ears and so on—could reach those limits.
So this was the time when the very first clean experiments recording the electrical signals generated by single receptor cells in the eye and the ear were appearing. There was this gorgeous series of experiments from Denis Baylor and his colleagues at Stanford recording the responses of rod cells to single photons. And so I have to admit, I think I spent a certain amount of time trying to understand, “Well, if you're in the limit where you're counting single photons, do you need to think about quantum mechanics?”
And so that led me—I went back and read all these papers by Glauber about photon counting, and the relationship of photon counting statistics to the states of light. I had the good fortune that one of my close friends from those first days at Berkeley was a graduate student, a guy named Dan Seligson, a graduate student in John Clarke’s group. And let’s see, also in John’s group at the time were Roger Koch, who unfortunately passed away relatively young, and Michel Devoret, who’s now at Yale, and John Martinis, whose name you see in the newspaper associated with quantum computing. And I can’t remember now who were postdocs and who were graduate students, but anyhow, they were all around. We can sort it out.
And this was the beginning of searches for macroscopic quantum effects—trying to look at tunneling, trying to build devices that could show them. There was also the question of, if you used SQUIDs as magnetometers, what really set the noise floor? There’s a quantum noise floor as well as a thermal noise floor. Can you cool the system down to the point where you see that? And coincidentally, in Kip Thorne’s group at Caltech, they were thinking about what the real limits are that quantum mechanics imposes on your ability to measure position, and the idea of making quantum non-demolition measurements. And what does that have to do with interesting states of light, which are not coherent states but rather squeezed states? If you use those in an interferometer, can you then effectively make different kinds of measurements, squeezing the quantum noise out of the quantity you actually care about? The quantum limits to measurement are really about the uncertainty principle, right? They tell you you can’t measure two things at the same time. Well, you can, but there’s a relationship between how accurately you'll measure them. And most of the natural things that you measure sort of mix the two quantities, right? But there’s nothing that says that you have to do that.
So are there ways of—so the simplest example is—to get back to the photon counting example, that’s an invasive measurement, it’s a destructive measurement, but roughly speaking, I can take a box filled with photons and figure out how many photons are in it. I can count them. And the uncertainty principle doesn't stop me from doing that. What it stops me from doing is also knowing the phase of the electromagnetic waves at the same time. That’s what I can’t do. So what are the real quantum limits to measurement? So in some sense, if you ask the right question, there isn’t one. But if you ask sort of questions that arise more naturally, then there are limits. So I became very interested in all of these things.
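A heuristic way to state the trade-off being described, with the usual caveat that the phase of the field is not a strictly defined quantum observable (the notation is assumed here, not from the interview):

\[
\Delta N \,\Delta \phi \gtrsim 1,
\]

so a destructive count of the photons in a box can in principle be made arbitrarily precise, at the price of complete ignorance of the field's phase, while detectors that respond to the field amplitude mix number and phase information and so inherit a limit. In that sense, which quantum limit you face depends on which question you ask.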
And an important part of my physics education was a reading group about classical and quantum noise, with all of those guys from John Clarke’s group. There was another student, J.D. Crawford, who went on to work in non-linear dynamics and plasma physics. Eventually was on the faculty at Pittsburgh. Unfortunately passed away very young, as well. I remember a moment where he [laugh] said, “Okay, I'm willing to talk about the theory of measurement and foundations of quantum mechanics, but we have to set some ground rules.”
So yeah, so all of that was incredibly exciting. And actually, the problem of how to describe quantum systems in which dissipation is non-negligible, which is essentially the problem that Caldeira and Leggett solved not much later, is very relevant to thinking about things like, “What if I'm looking at an electron transfer step in photosynthesis?” In some way, the flexibility of the surrounding molecules, the flexibility of the protein and the motions of the surrounding water, are the source of the dissipation that destroys coherence for the electron that’s hopping back and forth between donor and acceptor sites.
And so, I was interested in those issues about—you might say sort of foundations of quantum mechanics and measurement, which obviously are tied up with the description of dissipation, for reasons about the limits to measurement. But then it’s also related to these very molecular problems. My efforts to do something about quantum mechanics and limits to measurement were kind of a failure. Unfortunately, a public failure, in that we did write a wrong paper in which we argued that the available data pointed toward the very small displacements that the inner ear can detect being close to the quantum limits. And in the end, they're not. And I would say that’s an example where maybe—I continued to be interested in that problem for some time after graduate—so graduate school ended with these kind of back-of-the-envelope—I mean, I did more than back-of-the-envelope calculations for my thesis, but that claim was a kind of back-of-the-envelope level claim.
I should point out this was also a time when people were just starting to understand that the mechanical behavior of the inner ear was active and not passive. And so one of the things I was thinking about was what you can do if you have active elements. If I have a mass hanging from a spring, the bandwidth of that device, the frequency response, has a width which is determined by the damping in the system. And the damping in the system is tied to the random forces that act on it. So if I take a mass on a spring and I apply a force, let's think classically, and I ask, “Well, what limits my ability to detect the force?”, the answer is that the same mechanisms that produce dissipation also produce fluctuations, so that the system comes to thermal equilibrium. And that random force that you describe using, let’s say, a Langevin equation is the noise against which you are discriminating when you try to detect a real force. If you have active elements at your disposal, you can, for example, measure the position of the mass on the spring and apply a force proportional to the velocity, with either sign. What that means is that from the point of view of the deterministic dynamics, you've changed the drag coefficient, which means you've changed your bandwidth. But if the amplifier doesn't have any noise, then you haven't actually changed the noise in the system. So you are discriminating against the same noise source as you were in thermal equilibrium, but you have a bandwidth which is not the equilibrium bandwidth. And so, as a result, certain kinds of measurements can be improved, right? You can beat the thermal limit.
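A minimal sketch of that argument in Langevin form; the notation here is supplied for illustration and is not from the conversation. The fluctuation-dissipation theorem ties the thermal force to the passive damping $\gamma$, while an ideally noiseless feedback force adds damping without adding noise:

\[
m\ddot{x} = -\kappa x - (\gamma + g)\,\dot{x} + F_{\rm ext}(t) + \xi(t),
\qquad
\langle \xi(t)\,\xi(t')\rangle = 2\gamma k_B T\,\delta(t-t'),
\]

where $-g\dot{x}$ is the active, feedback contribution to the damping. The response bandwidth is now set by $\gamma+g$, but the random force still reflects only $\gamma$, so the mode behaves as if cooled to an effective temperature of order $T\gamma/(\gamma+g)$, and smaller displacements or applied forces can be resolved than the naive thermal-equilibrium estimate would suggest.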
And so one of the ideas that I had—I'll say something in a moment about my collaborations in this period—was that maybe part of the reason for active mechanics in the inner ear was not just to enhance frequency selectivity, which of course is important in our ability to recognize what it is we're listening to, but also that by this method you could do a kind of active cooling, an idea that has been around in various guises in different parts of the physics community, and just detect smaller displacements. And it is true that the displacements that the inner ear detects are extraordinarily small. They're ångströms.
And then you ask yourself, “Well, okay, so what’s the limit to that?” And, well, there are limits that are set by—there are no noiseless amplifiers because of quantum mechanics. So that was part of the interest was maybe this somehow matters, right? And that led me to think about, “Well, under what conditions—?” Is it possible that in the sort of non-eq…if you do a sort of naïve calculation of, are quantum effects important in any biological setting, leaving aside the question of tunneling at a molecular level, if you get to anything sort of even moderately macroscopic—it’s like microns, right—you're in trouble, right? Dissipation takes over. All quantum effects go away. And so I spent several years thinking about, well, how do you get around this? And well, in the end, I don’t think the ear makes quantum limited measurements. We were wrong. It is an interesting experience to have one of your big ideas you're excited about when you're young be just wrong. Not, “Oh, it doesn't turn out quite right.” No, we were wrong.
Almost satisfying just how clearly wrong—it’s just like, “It’s wrong.”
Yeah, yeah. “The world doesn't work that way.” So my interest in hearing had a foot back in the time I spent as a high school student and undergraduate at UCSF, and I had gotten to know a guy who was, again, a quite remarkable character, who never got his PhD. He worked essentially as a technical support person in one of the labs that worked on hearing. And very creative, very lively mind. And I don’t know, we became friends. And a certain amount of this was really the product of our late-night conversations.
And then, it was also a time—his name’s Allan Schweitzer. He eventually came—later on, when I moved to NEC, he came and worked there as a technical support person for a while. Eventually married one of the administrative people at NEC and they moved to San Diego and lived happily ever after, as far as I know. Occasional contact. Both quite remarkable people whose trajectories through life were more shaped—more limited by their circumstances than by their talents.
I guess that’s another important thing, of having gotten to know—having worked closely with somebody like that, that you realize, “Gosh, we have this—life is a little too—” I mean, it’s not my place to tell his personal story, which I'm not sure I know completely. But certainly he didn't—he described to me once being—applying—taking the SAT or something, and they tell you “Which schools do you want your scores sent to?” and he realized he had never thought about it, right? So it just wasn’t—didn't grow up in an environment that supported that, and sort of found his way to honing his skills on the technical side, so that they would be marketable. He joked that at some point in his life, he had a job because he—it was not widely known that computers had wires coming out of them. Out the back. [laugh]
[laugh]
So, this was specialized knowledge. [laugh] They weren’t just standalone boxes. You could connect things to them. So yeah, so my interest in hearing, that continued. And actually this is a very important part of my time at Berkeley, was that because of my interest in hearing—and exactly how we arrived at this now, I don’t remember—but Alan made connections to other faculty on campus who were interested in hearing. He himself had a long-standing interest in music, which was a non-scientific interest, but he started following along. And would eventually take a certain fraction of his research program and move it toward trying to make mechanical measurements on the inner ear using interferometry.
Remember, if things were moving by ångströms, then actually measuring those motions is an interesting physics problem, especially if they're inside complicated things, right? It also meant learning things like, “You probably don’t want to do this in a mammal if you can get away with making those measurements in something else.” A mammal is sort of fragile, because we're warm-blooded and everything has to stay oxygenated and everything else, but also the ears of mammals are inside solid bone, so getting in there is going to be challenging. So anyhow, this meant getting to know people around the campus who were interested in hearing.
So there was a guy in electrical engineering, Ted Lewis, who was sort of from the early days of bioengineering, whatever that would mean, who was interested in hearing and did experiments, including classic experiments with his colleague Peter Narins, who eventually I guess went to UCLA. They discovered these frogs in Puerto Rico that were incredibly sensitive to ground-borne vibrations. So right in your ear, you have not only sensors for sound, but also sensors for the rotation of your head and also for vibrations. Which is not so important for us, because our head is up here and the ground is down there, but if you're a frog, you're a lot closer to the ground. And these are organisms in which you can record from single neurons coming out of their vibration-sensing organs, and you can literally shake the entire frog up and down by a tenth of an ångström and see the response of the neuron being modulated. It’s extraordinary.
Bill, did your interest take you as far as even a medical environment? In other words, your research focus is so expansive, even as a graduate student. Not that you're heading to become an MD, but were you thinking at all about the possible clinical or therapeutic or even diagnostic possibilities?
So, here I will say something that maybe reflects my own limitations. I found that such a strong thread in the way the living world and physics interacted with each other was that physics gave you tools that could be applied to medicine. That was of course very strongly represented in the department where I was. And of course it’s also incredibly important, but I wanted to do the physics of the processes themselves. Framing the interaction between physics and, let’s say, the life sciences as physics applied to the life sciences, and then inevitably physics applied to medicine, rubbed me so much the wrong way that I became actively uninterested in those things.
And as I say, I still think it’s very important to articulate the biological physics as trying to bring the physicists’ style of inquiry to the phenomena of life, and not applying what we know as physicists to problems that biologists have already posed. That there’s an important difference there. Because one of them is I apply physics to—physics provides tools for helping biologists solve their problems. Physics provides tools for helping chemists solve their problems. Physics provides tools for helping engineers solve their problems. It is applied physics. And applied physics is a beautiful subject. And you can argue, and I think it is an argument worth making, that the existence of applied physics is one of the reasons you should spend money on physics itself. [laugh] Because if you want the applications to keep coming, you have to seed the stuff in the center.
But even at that time, I don’t know that I could have articulated it, but it really was a very strong aversion to these things that felt like application. That I wanted there to be something—I wanted, as a physicist, to ask what was fundamental about the phenomena of life, not to take things where people had already posed the question that would be useful in some way, and say, “Oh, I have a tool for that sort of problem.”
That said, an enormous amount of research into the living world happens in the context of medical research, and so that means seeing that world in some way. I mean, I talked about going to UCSF as a kid and working in a lab that was run by a physicist who worked on muscle. But it did not escape my attention that this was a medical school. [laugh] Mostly it’s a hospital [laugh] with a little bit of basic science sprinkled in. And so I spent a lot of time thinking about that, but that really isn’t what I wanted to do, and actively so.
Because it simply wasn’t the most intellectually compelling thing for you to work on? And the other question there is—where is careerism in all of this? Are you thinking in those terms as well?
[pause] I believed, perhaps naively, that if I succeeded in doing the science that I thought was interesting, everything would work out okay.
Naïve but also noble. In a perfect world, that’s how it should be.
Well, or privileged. What I would say is—and again, this is the advice I give to students today—is that I think that if you want to do science, your best shot at being able to do that, as a career, is to do the best science you can. I don’t think there is another calculation to be done. What I can’t tell you, and what we know isn’t true, is that if you show promise to do interesting science, everything will be fine. That’s not true.
Not even close.
It’s not even close, and it’s not true in ways that are not independent of the other problems in our society.
The National Science Foundation. DOE. [laugh] OMB.
Right. But, you know, there’s also expectations about what scientists look like, and—you know. People carry the prejudices of their society, and we're not independent of those. And as I think I said at the beginning, by the time I came along, explicit anti-Semitism in academia had more or less faded away. The heroic figures were the people who fought through that. And of course, part of the fading away is that you don’t tell that part of the story anymore. [laugh]
Right, right.
What was it—? There was a nice argument that said, “It is a measure of how far we've come that Kamala Harris was viewed as a reasonable compromise candidate.”
[laugh]
Somehow a safe choice. Right? [laugh] I mean, independent of your political views—
Absolutely, absolutely.
—I think that is really interesting. That there was a, “Oh my god, how could you choose somebody who looks like that?” No. Actually of the set of people who were available, this seemed like a perfectly reasonable choice.
The needle has moved.
Yeah. Right. So I was fortunate that for me, for the categories in which I happened to belong, the needle had moved before. But the needle also isn’t where it should be, and so other people are going to have different experiences. That’s something that we as a community should spend time thinking about. But the result of this was that [I] had a kind of naïve absence of careerism. Look, [pause]—so let me phrase this in terms of when we’re doing graduate student admissions. I remember once doing graduate student admissions in the physics department at Princeton and it occurred to me that, well, a large fraction of our students do actually have academic or industrial research careers in which they are independent investigators, whether they are professors or working in industry, or leading groups in industry. Anyhow. And so, more perhaps than at many places, and more than at many other similar institutions, you are trying to find people who will grow up to be like the people who are around the table in terms of their achievements. That’s not the only thing you're looking for, but certainly you're looking for people like that.
So I said, “Okay, let me think about what I know about the paths taken by the people around the table.” And there was something very striking, which is that all of us who were theorists all shared the experience of having been noticed when we were fairly young, and encouraged in some way that was immediately relevant to becoming a theoretical physicist, whether it was your math teacher in high school or whatever. The details varied because the people around the table came from very different places, but we shared that. On the other hand, the experimentalists, the trajectories were just all over the map. Very complicated.
Which you would expect. That’s the nature of the craft.
Okay. So let me—I share your view, but I don’t want to—so the challenging part here is let me think about the theorists. So on the one hand, you could say, “Well, this makes the problem of selecting theorists relatively easy, because the ones who are really going to be successful are the people who are noticed early.” The problem is that what if the people who are doing the noticing all share all the same prejudices? Then we're not actually—you could say that what you're looking for at this stage is that they were noticed at that stage, but there’s an obvious problem with this, right? That said, I was one of those kids who was noticed when I was young, and I was encouraged. So in that way, I don’t know whether—and also, it somehow was a time where it seemed not implausible that things could work out?
I think today, or let’s say over these years since then, I think there are a lot of people who sort of look at the scientific world in which they live and think about their place in it, and have a hard—even if they are in the category of people who look like they belong, I think they have a reasonable question about whether there is a place or not. Not just, “Will it work out for me?” but “Can it work out?”
And I think that didn't occur to me. It occurred to me that I might fail to do something interesting, but it didn't occur to me that I might do something interesting and not have a job. And I don’t know, I didn't arrive at that view because my parents’ path through life had been simple and I had a privileged upbringing in the financial sense. That’s not true, as we talked about.
But every privilege in the educational sense, as we also talked about.
Well, so, their own educations were not what they had hoped for.
No, I mean the macroeconomic support that you enjoyed.
Yes, that’s right. That’s right. So I guess we talked about my asking my father, “How did we pay for me to go to college?”
Yes.
“I don’t remember.” So yeah, I grew up in this sort of magical period, where the public schools were being supported. The public university was being supported. And so you could be at that point—I guess by the time I actually arrive in college, I have to say middle class, rather than a working class kid. I mean, my mother worked as a secretary, right? So it’s not—and it mattered. In the beginning, she worked because my father didn't have a job, right? [laugh] And then eventually I think she did it because she liked doing it. But it certainly made a difference to our lives, that she worked. It’s not like it was just for entertainment, right?
So I would say our position, our family’s position in the sort of socioeconomic strata of the United States was certainly not what mine was when my kids went to college. So yeah, this was not born of financial status. It was not born of sort of an inherited sense of belonging in society, because of course my parents’ experience of their lives were anything but that. But somehow, yeah, it’s well post Sputnik, right? Science is relatively well supported. There’s a certain amount of saber-rattling in Central America, but mostly you grow up not—I mean, the Vietnam War ends when I’m still in junior high school. The world is not a completely peaceful place, but you don’t feel worried about getting caught up in that. I mean, there would be interesting things about the role of the physics community in these things, and we'll get to that. It is the end of my graduate student years, right? I get my PhD in 1983. The March meeting of the American Physical Society in 1983 is in Los Angeles. And it is during that meeting that Ronald Reagan gives the “Star Wars” speech.
[laugh]
And so, you could get up in the morning after that, and wander around in the hotel lobby with a thousand other physicists, all looking at each other—you know, "What the hell was he talking about?”
[laugh]
And I was one of those people who signed a petition saying I would never take any money from the program or whatever. Happened while I was a postdoc. So yeah, there was some feeling that the challenge was to do something interesting, and everything else would take care of itself. And I think this is an enormous privilege in all of the senses of the word. And I won’t claim to have been any more insightful about that fact of my life at the time than anybody else was.
Bill, I think that’s a perfect narrative break point for part two. Perfect.
All right, thanks.
[End 201007_0334_D]
[Begin 201016_0344_D]
This is David Zierler, oral historian for the American Institute of Physics. It is October 16th, 2020. I am so happy to be back with Professor William Bialek. Bill, good to see you again!
Nice to see you, too.
We left off last time where you were giving me the wonderful classic Berkeley discourse on what it means to be a graduate student there, where you're so wrapped up in so many different intellectual and academic and political pursuits, we didn't even get around to talking about your dissertation, almost as if that was an afterthought to the whole experience, and of course that’s not true. So let’s talk specifically about that. At what point in your graduate education did you find that it was time to hunker down and narrow your focus and actually pick a dissertation topic and go with it?
So, I had been working on several things, some of which—well, most of which made it in some form into the thesis, and we can talk about which things, in retrospect, I think were valuable, and which things were just wrong. To be honest, I really was working somewhat haphazardly. I had these two very different interests. One was in sort of the more molecular part of biophysics, so I was fascinated by the very early events in photosynthesis. This was well-trodden ground. Hopfield had written these marvelous papers in the seventies about tunneling and electron transfer that of course had the fascination of having something to do with quantum mechanics in a biological system. This was the period where pulsed lasers were getting faster and faster, and so people were able to resolve earlier and earlier events in the process by which the photosynthetic reaction center absorbs light and captures the energy. So I was very interested in all of those things.
And those were, as I think I mentioned, part of my connection with my advisor was that he had been working on photosynthesis, and one of the students in the group, Bob Goldstein, was doing experiments that were directly aimed at trying to test some of the theoretical ideas. One set of tests were the theoretical ideas that Hopfield had proposed, and so this seemed like a place where there was something to do. So that was one part of my interests.
And as I think I mentioned, this was also a period in which people were starting to think more concretely about the way in which quantum mechanical coherence gets destroyed by interacting with other degrees of freedom, and things like this. So although maybe you didn't really need to understand all that in order to understand how electron transfer works, it wasn’t unrelated, and so it was a place to exercise myself, thinking about all those things. So that was one set of interests.
And then the other set of interests were a fascination with the extreme sensitivity of our sense organs. The ability of the visual system to count single photons. The fact that when you are sitting in a quiet room and listen to a sound you can just barely hear, your eardrum is vibrating by an Å or so. And so I was very interested in are there real physical limits to the smallest signals that you can detect? What’s the level of Brownian motion in the ear against which these signals have to be resolved? And I also again because of I think a not uncommon fascination with the possibility that quantum mechanics is more directly relevant to biology, I was curious whether—there are of course quantum limits to mechanical measurements, and could it even be that those quantum limits were close to what the ear actually accomplished? So spoiler alert—no—but it took quite some time to—of course, if that were true, that would be quite extraordinary and really make you wonder about a lot of other things. And so it seemed like it was a place to spend some time.
Did you feel like you were standing on Hopfield’s shoulders, or were you looking to develop the field sort of in tandem with what he was doing at the time?
So I think that the work on photosynthesis was very much more derivative in that sense. That he had set a direction. Whereas thinking about the sensory systems, that was a direction that had precursors but was really more mine than anybody else’s at the time. So again, as I think I mentioned, I start at Berkeley in ’77 and I get my PhD in ’83. So in that period, you have Berg and Purcell writing their paper about the physics of chemoreception, realizing that there are limits to the precision with which organisms can measure the concentration of molecules in their environment. You have the first papers from Denis Baylor and his collaborators at Stanford, directly measuring the single-photon responses of photoreceptor cells in vertebrates. You have the first papers from Jim Hudspeth and Robert Fettiplace and Andrew Crawford, recording the responses of single hair cells in the inner ear.
So it’s a time when ideas about—and of course associated with the work of Berg and Purcell, there were also things—the sort of consequences of that were unfolding. The conclusion from Berg and Purcell was that in order to explain the reliability with which bacteria were navigating in chemical environments, they had to be essentially counting every single molecule that arrived at their surface. And so there were these experiments from the time when Steve Block was a graduate student with Howard Berg, where they actually measure—try to get to the absolute sensitivity by applying—well, first of all, they measure the temporal impulse response of the chemotactic apparatus. They try to get to an absolute measure of sensitivity by using clever inputs. And that’s emerging sort of at the end of the period of my thesis. I think the papers sort of stream out, ’83 to ’86. So there are all these things about sensing physical signals in the environment. And I was just fascinated by this. I still am. I still think it’s extraordinary, that organisms can do this.
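For reference, the quantitative statement behind the Berg–Purcell argument can be written, up to factors of order one (this is the standard textbook form, not a quotation from the interview), as

\[
\frac{\delta c}{\bar c} \sim \frac{1}{\sqrt{D\,a\,\bar c\,T}},
\]

where $a$ is the linear size of the cell or receptor, $\bar c$ the ambient concentration, $D$ the diffusion constant of the molecules being sensed, and $T$ the integration time. Making this fractional error small enough to account for the observed reliability of chemotaxis forces the integration to be so efficient that essentially every molecule arriving at the surface has to count, which is the conclusion described above.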
And the interest in hearing then led—I think I started to talk about this last time—Alan Bearden, my advisor, knew that there were a handful of people on campus who were interested in hearing. So there was a guy in electrical engineering, Ted Lewis, who was a kind of early bioengineer and was interested in characterizing the responses of individual nerve cells. I'm worried that we actually talked about this, because I'm sure I told the story of the frogs that can respond to fractions of an Å vibration of the ground. So anyhow, so the result of this was—so there was Ted Lewis in electrical engineering. There was a guy named Jeff Winer, who unfortunately passed away relatively young, in what was then the physiology and anatomy department, who was a great anatomist of the auditory nervous system. There was a guy named Erv Hafter in the psychology department. And then there were us, who were a bunch of physicists, up at the other end of campus.
And so, we had a seminar about hearing that brought together people from all these different departments and was a kind of genuinely multidisciplinary seminar. That was consequential for me for a whole variety of reasons. One is that it was an early introduction to talking to people from very different disciplines. I don’t know, I feel like this issue of doing interdisciplinary science—I'm now old enough that I'm worried that the character of the conversations doesn't seem to change.
Well, it’s not doing multidisciplinary for the sake of that being the starting point. It’s that you have a research interest, and it doesn't fit well within any given discipline—
Right, but what I find is—
—or an academic department.
What I find is that people—when I listen to conversations about physicists who are interested in biological problems and the issues of doing things at the boundaries between physics and biology, I feel like I'm hearing the same conversations that I've heard for 30-plus years. Whereas I feel like I don’t have a problem anymore, right? [laugh] I also think that—well, maybe this is a more sociological thing that we should talk about separately—I think one of the things that has happened—maybe we should delay this, but just as a foreshadowing, I would say one of the things that has happened in these 30-plus years is that the physics of biological systems has emerged as a subfield of physics, as opposed to just being something that happens at the interface with biology. And I think actually that in the discussions about the interactions between the disciplines, the assumption is often that the disciplines are rigid. And of course, they are to some extent rigid, but they're not infinitely rigid. So what happens inside a physics department or a chemistry department today is very different than what happened a generation ago, but it’s still called the physics department.
It’s almost outmoded, essentially. That doesn't even work as an umbrella anymore.
Well, I don’t know. I think actually—I actually think that—well, I think the characterization that physics is this, and biology is that, and so the fact that you want to do something that’s in between causes a problem because it doesn't fit in either one—well, that presumes that things that happen at the edge of—I mean, all of the excitement of a field is at the edges, right?
Yeah.
Because the thing in the middle is understood. And the edges sometimes are protrusions into directions where nobody else is interested. And sometimes they're protrusions in directions where you touch other people’s interests. But it’s always at the edges. And if you succeed in finding something when you spread out in that direction, then the nature of the discipline shifts, right?
I'm thinking when Charles Darwin was sitting in his lab, looking at his houseplant and wondering why it was bending toward the sunlight, it seems sort of artificial that we would try to bound what he was doing in by—you know, was he a biologist trying to understand the physical properties? Is this biophysics? It’s curiosity. It’s science.
Right. It’s also true that—we can have a longer conversation about the nature of physics as a discipline and what I think that has to do with what we've tried to do over the years, and I wouldn't mind having that conversation. I should get back to the historical sequence. I also feel like my views have evolved, and I didn't have very clear views at the beginning. So this discussion belongs later. But again, to foreshadow, I would point out that there’s almost no biology department in the United States that survived intact the emergence of molecular biology. So there was something about biology—I think that physics has—the notion that there’s a subject called “Physics” with a capital “P”—and we can argue about where the edges are, and where the boundaries are—is a much stronger notion than that there’s a subject called Biology with a capital “B.” And there’s good reasons for that. But I sometimes find it strange when people hold up, “An obstacle to progress is the rigidity of the traditional disciplines.” Well, actually, the traditional disciplines are themselves very different from one another, and I would argue that physics has evolved quite a lot and has survived—the notion that there is a physics department and that there’s a discipline called physics has survived enormous upheavals in what happens underneath that umbrella, in ways that haven’t happened in biology. And so there’s some real difference there.
So when you hear the conversations about disciplines versus interdisciplinarity, and you hear it because physics and biology, you have to remember that actually, as disciplines, physics and biology have very different histories and very different cultures. There’s also the remark which I heard ascribed to Michael Fisher, but when I asked Michael, he said that although it sounded like something he might have said, he didn't say it, and that was, “The first casualty of interdisciplinarity is discipline.”
[laugh]
Anyhow. So there’s a conversation to be had there. Let’s go back to—
Bill, relatedly, I wonder, because your dissertation—whether or not you were caught up at the time with this idea of multidisciplinary research or not, I'm curious, because your topic didn't really fit well within a particular department, if you were more active in terms of having input on who would be on your committee, or who would be the people whose mark of approval were important to you that specifically would branch out beyond any one particular department at Berkeley?
Yeah, so that was true. I talked to a lot of people. I can remember, yeah, putting together—so the notion that it was all supposed to cohere into a thesis comes late. I think that this notion that you just—I was interested in what I was interested in, and I pursued it, and I did what I was doing—and I was [pause]—I was unprofessional.
Well, you were not careerist.
Right, I wasn’t—well, no, but I would say unprofessional in both the good and the bad ways. Right? There were aspects of my behavior that I think were unprofessional in that I wasn’t as focused or as rigorous with myself as I should have been in spots. And that is what it is. But it was also unprofessional in the sense that I was blissfully unconcerned with these career issues and so on. And I honestly cannot remember why it is that I felt that freedom. I mean, I had lots of people telling me I was a smart kid, so that helps. But the question of what you should—worrying about the problem of getting a job and—
Well, to get back to a—
The fact that I was interested in things that didn't fit into many established structures should have produced some anxiety about how it was all going to work out. And I don’t really remember that. I mean, I was more worried about could I make things work, just scientifically. The question of how it was all going to work out in terms of career just didn't come up. And that was an incredible freedom that I had.
To go back, Bill, to an earlier conversation we had, of course your entire life up until this point, a beneficiary of an incredibly generous social system that supported world-class education.
Absolutely.
And probably, your lack of concern for next steps, for better and worse, was a product of that, to some degree.
Yeah, yeah. I think so.
So would you engage with your mentors in terms of, on the good side of the professionalism question, how you might package all of these interests into a faculty line position at a university that would be interested in the Bialek package at that point? Did you engage in those kinds of conversations?
I think I can remember my advisor telling me not to worry about these things.
Like, “Do what you love, it’ll all work out, you'll get something”?
Yeah, yeah. “Worry about the science. Be sure that—” Yeah. It’s funny, I think I—I was clearly asking—so, if I try to see myself with a little bit of objectivity, which is always hard, I think I was asking questions other people weren’t asking. And so that was exciting. I think to be self-critical, I don’t think I was getting to the answers as sharply as I would have liked. So at some point, I have to figure out—time is marching on. Although I was still very young, but it seemed like at some point you want to move on.
Well, let’s just ask—moving on, who was on your committee?
So, my thesis committee consisted of—if I'm remembering correctly, Alan Bearden, my advisor, who as I say was an experimentalist who came from a very strong physics culture. I mentioned how I found my way to him, and that he had come out of doing things in Mössbauer spectroscopy, which eventually led him to thinking about biological molecules more generally, and settled on photosynthesis.
I may not have mentioned that he was also the son of a very distinguished experimental physicist at Johns Hopkins. He did precision x-ray measurements. So Alan was a bit to the manor born. He remembers—he told the story of Fermi coming to visit the house, and having an argument with him about model trains, which I think boiled down to differences in track dimensions between Europe and the United States or something like that.
[laugh]
That’s one of his childhood memories, so that gives you a sense for what his upbringing was like.
[laugh]
When I finished my PhD and we were on our way to Europe, we stopped on the East Coast, and I went and met my advisor’s father at Hopkins, which was marvelous. He was 80-something at that point, I think. Still in his lab. He joked that what he really wanted to do was die in his lab. [laugh]
[laugh]
He said, “It’s going to be hard on the secretaries who find me.” That was his comment. “But, too bad.” He wanted to be in his lab until the end. He almost made it. And the only concession to modernity was that some of the microscopes that were there for aligning the great circle diffractometers to do the x-ray measurements also now had little laser interferometers next to them [laugh] to improve the alignment. But otherwise, it was like stepping back into a laboratory of a generation before.
One of the most—this is again slightly out of historical order, but since we're talking about who were the people on the committee, I think understanding a little bit about my advisor’s father, Alan’s father, is useful. When he was a young faculty member at Johns Hopkins, he was making precision x-ray measurements. And if you make precision x-ray measurements, you determine very accurately the lattice spacing of crystals. But if you know the lattice spacing of crystals, then you know how many atoms there are in a given block. Which means that you know—well, after a few steps, you get to an estimate of Avogadro’s number. If you combine Avogadro’s number with the Faraday, which is a mole of electrons, a mole of charge, you get an estimate of the charge on a single electron. And that estimate, which Alan’s dad believed in very strongly because he made these measurements, disagreed with Millikan’s value for the charge on the electron. So this is in the thirties, I think.
So he sat down and tried to figure out what the problem could be. And so he went in great detail through Millikan’s measurements, the oil drop experiment. What he discovered is that in the original measurements, there’s a correction for the viscosity of the air. And then he looked at the number that was plugged in for the viscosity, and then he went and looked up the best measurements he could find of the viscosity of the air, and he may even have gone and remeasured it himself for this reason. And that number was wrong. And so he chased this all down and convinced himself that Millikan’s value for the charge on the electron was incorrect, and that this precision x-ray measurement actually got—once you knew what to correct for, you could bring them into line, but it was this precision x-ray measurement that gave the right value.
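To make the chain of inference concrete, here is a compressed version of the arithmetic being described, written with modern values purely to illustrate the logic (these are not Bearden’s original numbers):

\[
N_A = \frac{n\,M}{\rho\,a^3}, \qquad e = \frac{F}{N_A},
\]

where $a$ is the lattice constant determined by the x-ray measurement, $n$ the number of atoms per unit cell, $M$ the molar mass, $\rho$ the density, and $F$ the Faraday constant. With $F \approx 9.65 \times 10^{4}\,\mathrm{C/mol}$ and $N_A \approx 6.02 \times 10^{23}\,\mathrm{mol^{-1}}$, this gives $e \approx 1.60 \times 10^{-19}\,\mathrm{C}$, roughly half a percent above Millikan’s published oil-drop value, and that gap is the discrepancy that was traced to the value used for the viscosity of air.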
And so he submitted an abstract to the APS meeting to explain all this, and rumors began to circulate. It appears that Millikan was not going to be very happy about the idea that he was no longer the reference for the charge on the electron, and I gather that some people felt it was not beyond the realm of possibility that he would [be] a little spiteful, and for a young faculty member, that might be dangerous. And so apparently, this young faculty member in the physics department got a call from the president of Johns Hopkins, who wanted to see him.
[laugh]
And he said, “Okay, yeah.” “So I understand you've done something interesting. And then my friends in the physics department tell me that it’s not beyond the realm of possibility that Millikan would be not very happy about this, in ways that could be difficult for you.” And he kind of shrugged and said, “Well, I don’t know. I made my measurement, and—will report it.” Right? “It is what it is.” And the president of Hopkins apparently said, “Well, let me assure you that you will always have a home here.” And in his early eighties, as he told this story, he leaned back in his chair and gestured to his laboratory and said, “And as you see, I'm still here.” [laugh]
[laugh]
But yeah, that visit also included looking at the original plates from Rowland’s measurements of the solar spectrum and things like that. So it was a fantastic step back into the history of experimental physics. Anyhow. So, Alan was my advisor and the chairman of the committee. The second member of the committee was a guy named Geoff Owen, also an experimentalist. Worked on the retina. So he did beautiful measurements on photoreceptors and the next neurons in line, the bipolar cells, that provide the first processing of the single photon signals. So that was the connection. In fact, Geoff and I actually taught a course together when I was a graduate student, about the physics of the sensory systems. So we talked about these emerging experiments on vision and hearing that I was telling you about.
Geoff and I would eventually collaborate when I was back in Berkeley as a faculty member, trying to understand whether you could view the processing done at that first step in going from the receptor cells to the bipolar cells as being a kind of matched filter for the single photon signals. Which seemed to work. So that was fun. We can talk about that at the right moment. And the third member of the committee was a real theorist. There was a new young assistant professor in the physics department named Orlando Alvarez.
Ah, yeah.
Who had just come from being a postdoc at Cornell, where he interacted a lot with Ken Wilson. Taught the first course on renormalization group in the physics department at Berkeley.
Oh, wow. Wow. Did you take that course?
Yes.
How wild and new was renormalization for you personally?
So there were a couple of things that were wild and new. I remember there was a course—so let’s see, what were the memorable courses? So I don’t know whether I actually took the renormalization group course. I think I sat through it. Because by then I was close to finishing, and the idea of turning in homework was not very appealing. I never found it very appealing, but near the end of your thesis is not the time to start doing it again.
[laugh]
So yeah, that was incredibly exciting. So let’s see, what are the—yeah, we were talking, from early in my education, taking the course taught by Charlie Kittel, and the E&M course taught by Trilling, quantum mechanics course taught by Mandelstam—there were other memorable courses. There was a more sort of phenomenologically oriented kind of field theory for particle physics course that was taught by Gene Commins. And I remember—there was one lecture from that course that stands out, which is that he feels it’s—we've gotten to the point where it’s time to appreciate the precision of QED and the nature of the comparison with experiment, and how it had evolved, starting with the Lamb shift and the anomalous magnetic moment to a couple of decimal places continuing forward. And so I have this—he was a very dapper guy. Always wore a tweedy jacket. I can’t remember whether there were actually leather patches on the elbows or not, but—
[laugh]
—that’s the idea. And you knew that it was going to be a serious day if he very carefully took off his jacket and folded it, and put it on the counter in front of the—these demonstration things, right, in the front of lecture halls. So one day he did that. Literally takes off the jacket, carefully lays it out, rolls up his sleeves, and starts to lecture about the comparison between theory and experiment in quantum electrodynamics. And by the end of the lecture, the board is filled with—there’s the Lamb shift, and there’s the Lamb shift in helium, and there’s this measurement, and that measurement. All the different—so it wasn’t just, “Let’s talk about the anomalous magnetic moment and then start adding decimal places.” It was, “Let’s look at all the places you can check,” right? And when the board is filled, he turns back and looks at it, and smiles, and gestures toward one of the numbers and said—“And that was actually the first number I ever measured in physics.” That was the end of the lecture. [laugh]
Whoa.
So, that was memorable.
[laugh]
He came back—so I am not remembering exactly in which year I did these things. But then after that, he taught a special topics course. So then there are three special topics courses that stand out. One was taught by a guy named Gerd Schoen, who was visiting. He was a condensed matter theorist. And it was about superconductivity. And he must have done this in class, but I remember actually sitting down and working through for myself how in Ginzburg-Landau theory, you get the Higgs phenomena. To do what Anderson did in 1962 or whatever, right?
[laugh]
Just worked it out. And that miracle [laugh] of the massless gauge degrees of freedom going away—or sorry, what should have been the Goldstone boson somehow combining with the gauge degrees of freedom and going away, I mean, it was just—that really was one of those things where you're like, “Oh, this is really beautiful.” So that was the thing that stood out. I mean, there must have been other things in the course, but that was—I can still picture myself working this out.
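For context, a minimal sketch of the calculation being recalled, in standard Ginzburg–Landau notation (a textbook version, not a reconstruction of anyone’s notes): the free energy is

\[
F = \int d^3x \left[ \frac{1}{2m^*} \left| \Bigl( -i\hbar\nabla - \frac{e^*}{c}\mathbf{A} \Bigr) \psi \right|^2 + \alpha |\psi|^2 + \frac{\beta}{2} |\psi|^4 + \frac{(\nabla \times \mathbf{A})^2}{8\pi} \right].
\]

Below the transition, writing $\psi = |\psi_0| e^{i\theta}$, the phase $\theta$ is the would-be Goldstone mode, but it can be eliminated by the gauge transformation $\mathbf{A} \to \mathbf{A} - (\hbar c / e^*)\nabla\theta$. What survives is a term proportional to $|\psi_0|^2 \mathbf{A}^2$: the gauge field has acquired a mass, which in a superconductor shows up as the Meissner effect and a finite penetration depth. This is the Anderson mechanism being referred to.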
That, by the way, agrees very strongly with—there was a very mathematical physicist named Eyvind Wichmann, from the sort of constructive quantum field theory, proving theorems about cluster decompositions and C*-algebras, and all this stuff, which I must say I never—I never found myself attracted to that direction. But he taught a quantum field theory course, which I sat through the beginning of, and didn't stay to the end. And his remark about really calculating—so part of the reason I didn’t stay to the end was it became clear that he wasn’t especially interested in calculating anything. He was interested in the formal structure of the theory.
So that’s why I sat through more of—I sat through the whole of the course given by Commins, because that one, he was really going to calculate things. I don’t know, somehow that seemed, at the time, more appealing. And he [Wichmann] said about the problem of calculating things, and understanding the anomalous magnetic moment of the electron, all the great triumphs of quantum field theory, “Well, you know, you should go home and work these things out.” [laugh] “Cause my standing up here and lecturing isn’t gonna do it.” Right? “So just go home and do this.”
So there’s an interesting point about pedagogy, right? In theoretical physics, there is this performance of standing up there and deriving things. And I love it. I love doing it. But I also wonder sometimes—the things I understand are the things I work out for myself, so what am I doing, showing people? Anyhow.
Bill, after you defended, what were the most interesting postdocs that were both available to you and interesting to you?
Sorry, one more special topics course. So there was superconductivity. And then—
Renormalization.
Right. So Orlando taught the course on renormalization group. And Gene Commins taught a course on gauge theories of the weak interaction. So this ends—this is in spring in—I want to say ’79, but it could have been ’78. I think it’s ’79. It’s easy to check, because the course ends—and I think I might even have the notes somewhere, the mimeographed notes, as they were in those days, which would eventually become a book, I think, that he wrote with Phil Bucksbaum. And it ends with the explanation that a crucial test of all of these ideas is in the fixed target polarized electron scatter…polarized target scattering experiments that were happening at SLAC. And it’s a spring semester, spring quarter course, and it ends by noting that the team doing those experiments has scheduled a press conference for—and there’s actually a date in the notes, which I think was in June. And that is when they—I mean, that is when they announced the results that convinced everybody that the Weinberg-Salam model was correct. So he sort of led us—it was a fantastic course that led you right up to the edge of what was about to be resolved. So that was very memorable as well.
Yeah, so those three things—the course on superconductivity, the course on renormalization group, and the course on gauge theories of the weak interaction—those were quite something. Very different teaching styles, but all quite something. In terms of postdocs, I knew one thing, which was I needed to go far away. I had grown up in San Francisco and gone to Berkeley and stayed there for my PhD. And so although the total time at Berkeley was not that long, it obviously wasn’t very far from home.
And the East Coast wasn’t far enough, apparently.
I had the idea that I wanted to go to Europe for a year. I don’t know, maybe it also relates—go all the way back to that article by Jeremy Bernstein, about Rabi and the coterie of people who surrounded him. And there was this somewhat romantic tradition of going off to Europe. You know, young American physicists going off to Europe.
And on the home front, are you single at this point? Are you married?
Ah. Well, another important feature of that interdisciplinary seminar was that I met the woman who would become my wife, who was at the time a graduate student whose own PhD program was meandering a bit. That’s another conversation. I'm not sure it’s really relevant here. But she had decided that she wanted to learn something more about psychology, and was actually working as a subject in the psychology of hearing lab, and so came to the seminar. And so that’s how we met. And so Erv Hafter, the psychology professor who was one of the faculty sponsors of the seminar, always takes credit for this.
I ask because the decision to go to the Netherlands obviously was then a joint decision?
Absolutely. And it turns out that my wife, Charlotte, was born in Amsterdam.
Hmm!
So she’s from a Dutch family. They had finally come to the U.S. when she was kindergarten age, so she had grown up here. But there was still family in the Netherlands. I had mentioned that the interest in let’s say the physics of the senses was something that was not—it didn't have the sort of contemporary drive like Hopfield’s contribution to think about electron transfer in photosynthesis, but it had sort of sporadic classical pieces. And one of them actually had its origins in Groningen, in the Netherlands.
A guy named Hessel de Vries, who was an extraordinarily creative guy, did all sorts of things, including recalibrating the carbon-14 dating scale, but also spent a significant chunk of his career being interested in the senses. He did an experiment where he thought, well, let’s see—the absorption tail of the visual—the long wavelength absorption tail of the visual pigments is the result of thermal fluctuations. So if you take a big molecule—so naively you think, well, there should be—so—if you take an atom, there are energy levels, and that’s that, right? [laugh] But now if I take a big molecule, there are energy levels for the electrons, but there are also energy levels for all the vibrational degrees of freedom of a molecule.
So if you think in a kind of semi-classical way, then what you can say is, well, there’s some coordinate of the molecule, and as those coordinates fluctuate, the energy difference between the electronic states varies with it. And so if you're sitting at the most likely value of the coordinate, then you see a particular energy difference between the ground state and the first excited state. And so that’s the peak of the optical absorption band. But of course, the structure fluctuates. And in particular, when it fluctuates in one direction, the two energy levels get closer together. And so in order to get into—and the way you generate the low energy, or long wavelength tail, of the absorption band, is from those fluctuations. Which you can also think about as sort of borrowing energy from the thermal bath to make up for the fact that the photon doesn't have enough energy to excite the molecule. All of that is actually—I think that’s not a very good way of thinking about it. But anyhow, he understood that the long wavelength tail of the absorption band of large molecules came from thermal fluctuations. So that means that its slope actually—it’s a thermal fluctuation, so it’s in fact exponentially suppressed. So the tail is exponentially decaying. And the decay rate depends on the temperature. So what that means is that if I change the temperature, I can actually change the actual amount of absorption deep in the tail by quite a lot.
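A rough way to write down the semi-classical picture being described (which, as he cautions, is not the best way to think about it, but it captures the temperature dependence de Vries wanted to exploit): if a photon of energy $h\nu$ below the peak transition energy $E_0$ can only be absorbed by borrowing the deficit from thermal fluctuations, then deep in the tail

\[
A(\nu) \propto \exp\!\left[ -\frac{E_0 - h\nu}{k_B T} \right], \qquad
\frac{\partial \ln A}{\partial T} = \frac{E_0 - h\nu}{k_B T^2},
\]

so where the deficit $E_0 - h\nu$ is many times $k_B T$, even a modest change in temperature produces a large fractional change in the absorption deep in the tail.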
So his idea was, well, my perception of color depends on the relative number of photons that are absorbed by the visual pigments in the three different cones, three different kinds of receptors. So if I change the temperature and I go into the right range of wavelengths where I'm in the tail of the absorption band of one of the pigments, maybe I can shift your perception of color. So there’s some legend about him setting up a hot tub in the physics lab to get people’s temperature to change, and see if they would see differences in color perception. Everyone assures me that the experiment was done, but I'm not sure I've ever found the paper. [laugh] Anyhow. So that was his—that gives some feeling for his intellectual style.
So he was very interested in a whole variety of problems surrounding the sensory systems, including the problem of how it was possible for—so he worked on photon counting and vision, and in particular the role of the sort of square root of n counting fluctuations in setting your ability to discriminate intensities. Which there is a regime in which that is what limits you. He also worked on hearing, was interested again in these very small displacements. Actually discovered—went and tried to record from cells in the inner ear, or in other organs that have the same kinds of cells, like lateral line fish. So there was a tradition of that kind of biological physics in Groningen, and there were three or four people whose work I had been following for various reasons who had coalesced there in recent years. And so it stood out as a place that was very interesting. Coincidentally, my wife’s grandmother and her aunt also lived there. So at that point, we sort of thought, “Well, okay, let’s go there.” And so we spent a year there, which was incredibly productive.
What was your research? Were you looking to expand on the dissertation, or was this an opportunity to do new stuff entirely?
I always have in mind that there were four different experimental efforts that interested me. And I'm now not sure I can enumerate all four, but I have this memory of being in Europe for a conference—I decided to go sort of sight unseen. I had read all these interesting papers by these people. They had all moved there relatively recently. It had this interesting history. There was a family connection. “Let’s try it.”
But then I was in Europe for a conference, and so I decided to stop by—oh, and I had gotten a fellowship, so it wasn’t—I could just go. Again, the largesse of the taxpayers. In fact, it’s really the good old days, right? The fellowship was a National Science Foundation postdoctoral fellowship, but the money came from NATO. And so this is this romantic period in which it was thought that perhaps having scientists move around the globe would be good politics.
[laugh] Sounds like a century ago.
A quaint—yes, it was the last century, right?
[laugh]
Well, also, there was a Soviet Union, right? It was a different time. So yeah, I benefited from growing up in a state which, for whatever reason, had decided that spending money on public education was a good idea. And then as a graduate student, I had an NSF graduate fellowship, and then I went to Europe for a year as a postdoctoral fellow supported by the NSF and NATO. So there were groups doing things on vision. There were groups doing things on hearing. There was also a theoretical physics—so there was a laboratory of general physics, which was where biophysics was, and there was an Institute for Theoretical Physics. And so I kind of introduced myself to people in both places, and they were very happy to have me there. And I sat roughly four days a week in the biophysics group and would spend a day a week out at the Theoretical Physics Institute.
I believe that that choice was a combination of what’s exciting in terms of day-to-day conversations, and also the fact that the university was in transition. As you may know, many universities in Europe which started in the heart of cities, as time has gone on, science in particular has been pushed out to campuses in the suburbs. We had the good fortune to find an apartment in the center of the city, and my going to the biophysics group involves a ten-minute walk, crossing a canal, and going to some grand old building.
Oh, boy. Lovely. [laugh]
Very charming. Very charming. Whereas going to the Theoretical Physics Institute involved getting on a bus and going out to someplace that was, as yet, quite unpopulated.
Bill, a cultural question. To the extent that you can extrapolate from your experience to the European experience, how was biophysics done differently in the Netherlands?
Ah. So this is really interesting and important. In the Netherlands—so, let me talk about the Netherlands. In the Netherlands, there was a very strong tradition of biophysics.
Right, right.
I was telling you about some of it, but that wasn’t all of it, right? There was a group that worked on photosynthesis, for example, and had done all sorts of interesting things. There was a very special thing, which I didn't fully understand until I got there and started talking to people. So one of the groups that I found very interesting was a group that worked on hearing, and they were in the medical school. And although I had spent time—at UCSF as a high school student and an undergraduate—by this point, the idea that you would find real physicists in a medical school—I knew that that was rare in the U.S. and that I had been very fortunate in finding people at UCSF. So it turned out that, at the time—and this has somewhat eroded, so I don’t know exactly what the state is now—there was a tradition—so the university system in the Netherlands at the time was very hierarchical. Professor really meant something, right? Professor was more nearly head of department, right? Or officially kind of head of a group. So there would be one professor and then lots of people who were there permanently but were not professor. And there was a tradition that in the medical school, in the Department of Ophthalmology and in the Department of Otolaryngology, so ear, nose, and throat, in each of those departments, there was a professorship which was reserved for a physicist.
Oh, wow.
And the argument was that these are parts of medical practice which are very much dependent on physical instrumentation. And that had been true—obviously, there are now many more parts—now, actually, find me a part of medical practice that doesn't depend on sophisticated physical instrumentation, okay, but—I mean, this was true 100 years ago, right? And in their wisdom, the people who set these things up understood that in order to be professional at this, you should really have somebody who was a physicist, who held that chair. So one of the chairs of otolaryngology, of ear, nose, and throat, was a physicist, and one of the chairs of ophthalmology was a physicist. Of course, they would then build a group around them. And that group had, on the one hand, responsibility for aspects of medical practice, but also they would build a research group. And they obviously did things which were relevant, but they all came from physics.
And so the Dutch had this fantastic tradition in, if you want, sort of the physicist version of experimental psychology, psychophysics, of hearing and vision. And so, some of the first work on photon counting in vision was done in the Netherlands. Beautiful work on aspects of auditory perception that had their interpretation in terms of the detailed mechanics of the inner ear, and so on. So it was a very rich and storied tradition, which I came to understand had this structural origin. Of course, it’s also true if you have one person, they can nucleate a school. But there were jobs for these people.
So there were two things that were really special. One was that in the physics department, there was a biophysics group, and there were two professors, one who worked on vision, and one who worked on hearing. And then there was this thing in the medical school, which was kind of interesting—that there were specifically physics professors in the medical school, not who did medical physics, but who did vision and hearing, just from the point of view of a physicist.
And Bill, just so I understand, this administrative setup, which obviously you respond to quite positively, you only discover this on the ground. In other words, you didn't—
Absolutely, yeah. No, I had no—
You didn't go there because you said, “This is the kind of setup that I want.”
No, so what I knew in advance was that there was biophysics in the physics department, and that that had been true all the way back to the 1950s.
It was established. It was not an upstart. They were not fighting for their status. None of that.
That’s right. So that was different. The medical school thing, I didn't learn until I got there. So although I had read papers by all of these people, I didn't realize what was going on structurally. And why should I? Who cares? So that was interesting. So the really crucial thing—I mentioned that I always think of there having been four groups, and I'm not quite sure I can reconstruct—I can sort of get to mind three out of the four. So I'm in Europe for a conference, and I stop there for a couple-day visit just to see what it’s going to be like, meet some of my in-laws, and I talked to all four of these different groups. And what I remember very vividly was coming away thinking, “Hmm. Well, three out of four ain’t bad.” There was one group which I thought, “I don’t know. This just isn’t very interesting. It’s not doing it for me.”
So I arrive, and I start talking to various people, and trying to do things. And I start talking to the guy who has the office next door. Well, we all have shared offices, but anyhow. So the person sitting in the office next door. One of the people sitting in the office next door is a graduate student in the group that I had concluded was not very interesting, which was a group that worked on fly vision. So this is Rob de Ruyter. Actually, he has a more complicated last—he has a compound last name, de Ruyter van Steveninck. From two noble families, more aristocratic families.
And Rob also was a little, let’s say unsatisfied, with the directions of his research group, although he had been working in it for a little while. On the other hand, I quickly learned that he could do experiments that struck me as being absolutely extraordinary. So if you poke around in the fly’s brain, you can find regions where there are cells that are selective for different features of the visual stimulus, of what’s going on in the visual world. And in particular, there’s a corner of the fly’s brain where there are cells that are selective for movement. So you put your electrode in the right place, and you're listening to the electrical activity of that particular cell, and you move your hand, and when you move your hand this way, the cell starts firing lots of action potentials. You move your hand the other way, the cell goes silent. And that’s actually quite deep into the nervous system.
In fact, if you go—if you try to think of the nervous system as, well, you start with sensory receptors, and then you do various steps of computation, and then eventually you start trying to figure out how you're going to move through the world, and eventually you come out the other end at the muscles—this is not quite in the middle, but getting close to the end of vision. And in particular, there’s a handful of cells in the fly which are responsive to vision—sort of motion on the kind of large scale. And these cells are crucial for how flies actually navigate through the world. They use the signals from these cells to control—to steer, as they're flying.
So what I learned quickly from Rob was that he had gotten to the point where he could hold the little fly down and put his electrode in, find these particular neurons, which have names and numbers, because it’s a fly and it’s reproducible from individual to individual, and he could set things up so that while he was doing the experiment, it was possible for the fly to feed himself. So he could sort of set the fly down so there was something in front of him, so he could stick his tongue out and eat some sugar water. And as a result, he could record the activity of this one neuron for a week. And it was stable. Which was just extraordinary.
Tells you what?
Well, what it tells you is that if you have subtle quantitative questions about how this neuron responds to signals in the outside world, or if you'd like to explore how it responds to a wide range of signals, you can actually hold onto it for long enough that you can explore. So a typical experiment in those days was, you know, you would encounter a neuron in the brain, and you would record from it for ten minutes. And it was routine—because Rob came—it was a physics lab, right? So you could generate in the lab all sorts of complicated movies to show the fly.
Furthermore, you could, in the spirit of the kinds of things that were being done in other organisms—I mentioned the things that Baylor and colleagues had done in the vertebrate retina—you could record the activity of individual cells, of receptor cells in the fly’s retina. And those recordings were also incredibly stable. You could see single photon responses. You could measure the signals and the noise and convince yourself—that you could measure the regime over which photon counting was actually the dominant source of noise. You could do all these things.
And what that meant was that here I had a little piece of the brain in which you could characterize the signals at the input, and you knew what it was trying to compute. It was trying to compute something about motion. And you could record from the neurons in which, as it were, the answer was written down. And the answer is written down in the form of this sequence of action potentials. And so here was a place where we could ask all sorts of things. We could ask, for instance, is the reliability of the estimate of velocity, that comes at the end, close to the limits that are set by the actual noise level, in the receptors at the beginning? So that the computation in between is almost noiseless. Or, how close to that do you get? And how close is that noise level to the limits that are set by photon counting?
You could also ask—in order to answer that question, you had to understand something about how the dynamic information about motion as a function of time was represented in the sequence of action potentials. And the idea that you could do this in a place where you could do such long, stable recordings, meant that you had a shot at answering these questions quantitatively and really sort of thinking about—think about the problem of estimating motion from noisy images, as a physics problem, as you might if you were trying to analyze data in the laboratory. And think about the problem of how information is represented in this sequence of action potentials. This is a problem in coding, right? And there was a lot—the problem of the structure of the neural code was something that lots of people thought about, but here seemed like a place where we could get at many old questions, and some new questions, in ways that just were never possible before.
So I had always shied away from thinking about anything deep inside the brain, because it looked too messy. And suddenly, this guy who had the office next door showed me that you could do experiments that weren’t messy at all. That they were beautiful physics experiments. And so the most—okay, I'm going to zoom back out—Rob and I are still writing papers together.
Ah!
I would say that the things that we did over—so we worked together very intensely for that year while I was in Groningen, and then he finished his PhD. He went to England for a while. He came back to Groningen and actually ended up working in that auditory group in the medical school for a little bit. Then when I moved to Berkeley to—actually, one of the first things I had to do as a faculty member at Berkeley was I got a telegram from one of the senior professors in Groningen that said that, “Congratulations that you're a professor. Rob’s finishing his PhD, and in order to give him honors at his PhD, we need letters from—we need testimony from faculty members outside the Netherlands. So would you be willing to do this?” So that somehow was—of course, the fact that it’s a telegram [laugh] also by now seems—
[laugh]
Anyhow, so that was kind of charming. And then, I was on the faculty at Berkeley for a while, and then I moved to NEC. So this is jumping ahead a little bit, but just to give you a sense, we then recruited Rob to come to NEC. And so we had another decade-long period of working together very intensely, which then he eventually would move—as NEC was falling apart, he moved to Indiana University where he remains, happily, and doing all sorts of interesting things. But then we've continued to work together since then.
But I would say that for both of us, the things that have their origin in those conversations—for both of us, a lot of our scientific reputations rest on things that had their origins in those conversations, in that year. And of course I also count him as one of my closest friends. And it continues to amaze me—and I do not tire at all of repeating—that he worked in the group that I thought didn't do anything interesting. So if I had taken seriously my initial impressions, I wouldn't have bothered talking to him. But fortunately, he was in the office next door. You should talk to the person in the office next door, for human reasons if not for scientific reasons. And then you realize, “Oh, but wait—[laugh] there’s a difference between what you could do in a certain direction, and what is actually being done.”
And so, it was marvelous. And we were very well encouraged and supported by the senior faculty there at the time. I think they were quite charmed by this collaboration. It also taught me about the interaction between theory and experiment at a different level than what I had gotten when I was a student at Berkeley. I mean, I was the student of an experimentalist, right? So you think, “Oh, I know everything I need to know about how theory and experiment interact.” But, no. With Rob, we really—there was something about it that made it possible to go from, “Here’s a theoretical idea. What’s the thing you should measure? How do you do that experiment? How do you analyze the data? How does that influence our theoretical thinking?” that was really just incredibly productive.
And so that led—we can talk about what came out of that, but integrated over our collaboration, we showed that the precision with which the fly’s visual system estimates motion was indeed within a factor of two of the limits that are set by noise in the photoreceptors. We showed that you could look at the sequence of action potentials coming out of a single neuron. Despite the fact that they were very sparse, you could reconstruct the time-dependent velocity of motion, of something moving in the visual field. So you could literally read the code. We sort of reintroduced ideas from information theory as ways of describing what was going on in the code and showed that the amount of information that neurons conveyed about the sensory input was within a factor of two of the limits that are set by the entropy of the neural responses themselves. We showed that by thinking about how that coding efficiency depends on the time resolution with which you look at the sequence of action potentials, you could get at sort of classical questions about whether the timing of action potentials was important.
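The simplest caricature of "reading the code" in the sense described here is a linear decoder: place a copy of some kernel at every spike time and sum. The sketch below is only schematic; the function name, the kernel shape, and the spike times are all invented for illustration, and choosing the best kernel (for example, by least squares against the known stimulus) is a separate step not shown.

```python
import numpy as np

def linear_decode(spike_times, kernel, dt, duration):
    """Estimate a time-varying stimulus by summing one copy of `kernel`
    (an array sampled at resolution dt) centered on every spike time."""
    t = np.arange(0.0, duration, dt)
    estimate = np.zeros_like(t)
    half = len(kernel) // 2
    for ts in spike_times:
        i = int(round(ts / dt))
        lo, hi = i - half, i - half + len(kernel)
        klo = max(0, -lo)                        # clip the kernel at the edges
        khi = len(kernel) - max(0, hi - len(t))
        lo, hi = max(lo, 0), min(hi, len(t))
        estimate[lo:hi] += kernel[klo:khi]
    return t, estimate

# Toy usage with invented numbers: a smooth bump kernel and a few spike times.
dt = 0.001                                       # 1 ms bins
kernel_t = np.arange(-0.05, 0.05, dt)            # 100 ms of kernel support
kernel = np.exp(-kernel_t**2 / (2 * 0.01**2))    # hypothetical kernel shape
t, velocity_estimate = linear_decode([0.12, 0.135, 0.40], kernel, dt, duration=0.5)
```

The point of the sketch is only that a decoder can be simple enough to make "reading the code" a literal operation; how well such an estimate tracks the true velocity is exactly the kind of quantitative question described above.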
We showed that because—motivated by theoretical ideas about you live in a world where signals are very intermittent, and if you want to represent those signals in a way which is efficient, in an information theoretic sense—you know, you have a physical resource, which is you only want to generate so many action potentials, but you'd still like to convey as much information as possible about the signals that you really care about, then the way to do that is to change your coding strategy as you move through the world, so that your coding strategy always matches the distribution of signals that you're seeing.
And this was an idea that had come up 20 years before, but people were thinking that the matching was occurring on a kind of evolutionary time scale. But if you look at the statistical structure of signals in the real world, that’s not enough, because they're sort of non-stationary. And so we showed that you could see this matching happening in real time. As you sat there through the experiment, if you change the statistical structure of the movies that the fly is watching, the coding strategy that the neuron was using would track that in ways that served to optimize how much information was being transmitted. So it was just an incredibly rich collaboration which was sort of life-changing for both of us.
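For the idealized, noiseless, single-neuron version of this matching argument (a textbook simplification, not the fly analysis itself): if the response $r = g(s)$ is confined to a fixed output range and the inputs $s$ arrive with probability density $p(s)$, the information-maximizing input-output relation is the cumulative distribution,

\[
g(s) \propto \int_{-\infty}^{s} p(s')\,ds',
\]

so that all output levels are used equally often. The immediate corollary is the one described above: if $p(s)$ changes, for example because the variance of the signals in the environment changes, the optimal $g$ has to rescale to track it.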
Bill, was the postdoc open-ended? Could you have stayed longer if you wanted to?
It was a one-year fellowship. I think we went with the idea of just spending a year. So I started applying for other things right away. If you ask, well, suppose I really wanted to stay for another year, could something have been worked out? Maybe? We really went with the idea of we're going for a year.
But it sounds like it may have been a little tough for you to leave. I mean, you were kind of in heaven there.
Um [pause]—well, I went to a different kind of heaven, right? I ended up going to the Institute for Theoretical Physics at Santa Barbara.
[laugh]
Right? So to be fair, Berkeley—I was talking about how I sometimes wonder whether I went to the actual Berkeley or to my romantic vision of the place. One important difference is that the Berkeley that I actually attended was not one of the great centers for theoretical physics at the time, which itself has an interesting and complicated history as to why that is, since, after all, it was at one point where American theoretical physics began, surrounding Oppenheimer. So there was a sense in which I hadn’t had a very complete—I mean, the first course on renormalization group gets taught when Orlando comes, just before I finish my PhD, right? There are a lot of things that I didn't know. And so having the opportunity to go to Santa Barbara was fantastic.
So going to Groningen, as it turned out, I had this experience of a very intimate theory-experiment collaboration, which sort of set a model in my mind for how these things could be done. And also, suddenly instead of talking about—I think about the beginning of my years at Berkeley, and Berg and Purcell write this paper about the physical limits to measuring concentration, and they're thinking about a single-celled organism maneuvering through the world. Here, we're talking about a big, complicated, multicellular organism and what it can do with its brain, and we're seeing that the same notions of physical limits are relevant. And in the year that I was there, did we get to the end of that? No. We got sort of bits and pieces.
But those bits and pieces were on the cusp of—this is really fundamental stuff.
I’d like to think so, yeah. Certainly I think—
There was no real analog anywhere else, with regard to what you were doing.
No. So, look, I think that the collaboration with Rob was the first thing—it would play out over quite some time, but that was the first thing I did that would last.
And it’s probably proof positive that the advice you got from your mentors at Berkeley—“Do the science that you love and the rest will fall into place”—and it did.
Yeah.
You can’t argue with that, because it happened. [laugh]
Right. That’s right. But, you know, N equals one, right?
[laugh]
[laugh] So yeah. And somehow in that period of learning about photon counting and the incredibly small displacements in the ear, and molecule counting and chemotaxis and all this stuff, this idea that the physical limits to performance of biological systems was somehow very important. And actually in some ways even some of the thinking about the early events in photosynthesis. I mean, these incredibly fast reactions, right? Sort of on a time scale which is so short that you even worry about whether your usual notions about when does physics stop and chemistry begin—whether those are correct, or whether you're on the boundary.
So somehow this vision had taken hold that the triumph of let’s say the 19th century into the early 20th century—life obeys the laws of physics. But the laws of physics allow some range of things. And what these examples are showing you is that in many cases, life goes right up to the edge of what’s allowed. And being able to—so I wasn’t the first person to think of that. The idea about photon counting and vision goes back all the way to Lorentz, if you trace the history correctly.
But the idea that we could go into a new place, this part of the fly’s brain that computes motion, and take those ideas, and work out things that—when I went back to be on the faculty at Berkeley, one of my first students—actually, two of the first students [Fred Rieke and David Warland]—joined in this effort with Rob, going back and analyzing his data in a different way and using ideas about how to do the decoding and—anyhow, we could talk more about that. And one of the results of work that we did together was an estimate of coding efficiency. So how much information do you have, versus how much entropy do you have in the ensemble of responses. And you can’t convey more information than you have entropy. And so you can get a notion of efficiency as a function of various things.
And this student, Fred Rieke, who has gone on to be actually a remarkable experimentalist working on the retina, now at the University of Washington—Fred said, “The really important thing about this graph is that if you're standing in the back of the room, you can see the data, and you can see one on the scale.” So the question—this was a sort of, “Are we in the right ballpark?” Are the physical limits to neural coding, to neural computation, to the reliability of inference in the brain—are the physical limits anywhere near being relevant? And the answer was yes. Now, are you so close that that closeness itself is an observation that you can leverage to try and predict things about the way in which the system works? That’s a challenge that we would meet later, right? And I think the answer to that is yes as well.
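Stated in standard information-theoretic terms (the symbols here are illustrative, not the notation of the original papers), the efficiency in question is the information the responses carry about the stimulus divided by the entropy of the responses, and it can never exceed one:

$$ \epsilon \;=\; \frac{I(\text{stimulus};\,\text{responses})}{H(\text{responses})} \;\le\; 1 . $$

The point of Fred's remark was that the measured value of this ratio was visibly of order one, not some tiny number.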
But the first-order thing was, we could go into this much more complicated setting, and lo and behold—a way that I remember thinking about it, as we were working on it, was I could start by thinking about the optics of the compound eye and the rate at which photons were arriving at the retina, and compute something which corresponds to something you could measure in the response of a neuron, which is many layers deep into the brain of the fly, and you'd get it approximately right. With no injection of biological detail. You really were calculating things from physics, going into this neuron however many layers deep, measuring the right thing, and, you know, you were right to within a factor of two. And that for me was [pause]—that said, okay, maybe we're onto something. And so for me, that was the thing that drove things.
I think for other people, there were other aspects of it that would prove to be more compelling. I mean, the reintroduction of ideas from information theory—information theory had sort of—as it rose in the 1950s, people tried to use it to think about biological things and about language, and then it kind of fell out of favor. And we kind of brought it back, in the context of neural coding.
Bill, just to clarify the kind of mental zone you're in here when you say that you're onto something, is this very much basic science? Or are you thinking, even at this point, there might be, for example, biomedical engineering that might come as a result of this, eventually?
As I think I said already, I had a kind of active disinterest in anything applied. Because I had seen the history of the field as being so heavily driven by medical applications, I felt like—I really didn't want to think about that. Now, as it turns out—
Even when it’s staring you in the face, you still want to not deal with it.
Yeah. In fact, it’s only staring you in the face [laugh] if you can see it. [laugh] I really wasn’t thinking about it. Today, there are brain-computer interfaces where the goal is to record from neurons in the brain of someone who’s quadriplegic, and essentially record the signals that their brain is generating that would drive the movements of their limbs, but they can’t. And so maybe you can decode those signals and drive a robot arm. And if you dig into the algorithms that are used for that decoding, it turns out—we weren’t the only ones, we're not the only influence, but we did write a paper called “Reading the Neural Code,” and that work, of showing how you can go from sequences of action potentials to continuous signals—from discrete action potentials in sequence to continuous signals—that has an input into that very applied and potentially very important application. A piece of, as you say, biomedical engineering. That was nowhere near my consciousness.
Another feature of that period in Groningen—so one of the things that our mentors—so there were two professors who did biophysics in the physics department. One was Hendrik (Diek) Duifhuis who worked on hearing, and one was Jan Kuiper who worked on vision, on insect vision. And Jan actually had been a student of de Vries, the fellow who had done all these pioneering things.
Parenthetically, de Vries was—there was a kind of tragedy that loomed over the whole thing, which is that he, at some point in his life, has had an affair with his secretary, and when she finally ended it, he killed her and killed himself. And so although there was this rich history in the department, it was a little weird, because the embodiment of that history was a figure whose end you didn't want to focus on. As I learned from my in-laws, the family of the victim was a very present family in the community. And so you couldn't honor de Vries’s contributions, although he was a great scientific figure, because he was also a murderer. Anyhow, so nothing is simple.
But yeah, so Jan was de Vries’s student. And he listened and listened to what Rob and I were doing. And as I say, the outlines of many of the things that would become papers in the next several years were clear, but we certainly didn't get to the end of anything, in that year, although the directions were clear. And he knew that there was another guy in the Netherlands who had also thought hard about this problem of how continuous signals are represented in the sequences of discrete action potentials, and he had some related papers. And there were some complicated papers and we—and he [Jan] said, “You know, we should get you guys together and talk.” And so, this was arranged.
And I can’t remember whether he brought this fellow to Groningen or whether we went to see him. I think he brought him to Groningen. Because I have this image of having a conversation in Jan’s office. It’s also true that in contrast to what I had experienced in the U.S., physics in the Netherlands had these rather grand offices. If the conversation went to 5:00 in the evening, they would pull out a bottle of jenever, and that seemed perfectly normal. Which, I mean, I did not have a puritanical upbringing, for reasons that I think you understand—
[laugh]
—but that was still a little different than my experience of life in the U.S. Anyhow, and we sat there, and I sort of had known that these guys had done these things but had never quite understood what they were doing. And it was literally at that point that I realized, “Oh, they got this far, but not as far as we have gotten.” And so one of the things I learned from that was that when you have a question, you need to find out whether somebody has answered the question. But if you can’t get an answer that satisfies you, you don’t want to spend too long trying to figure out exactly what everybody has done, because that’s a big effort. But then when you finally answer your own question, you can look back at what other people have done, and say, “Oh, I see. They did this part. And these other people did this part, and did this part. And so we were able to go further.” Or put the pieces together, or something.
So at least for me, I learned that it would be okay, maybe, to waste some time doing things that other people had done. Of course, this is different as a theorist and an experimentalist, right? So as a theorist, you're not wasting time; you're actually working it out for yourself and understanding it, even if somebody has already done it. And if that meant that you went to some extent down a path that was already outlined, if you go further, then from that vantage point, it’s now easy to look back and see what other people did, because you now have a framework that hangs together. So that was another thing I learned. And I'm grateful to Jan for having brought us all together so that I could see that.
And that’s something that again I try to encourage students—“Work things out for yourself. Don’t try to learn everything that has been done. If you ask a question, and somebody says, ‘I know the answer; it’s this,’ then maybe that’s that. [laugh] But if they tell you, ‘Oh, so-and-so did something that’s relevant,’ and you try reading so-and-so’s papers, and you can’t quite understand how this answers the question you had in mind, at some point, you should have the courage to say, ‘I'm just going to work on my question. And if it turns out these guys answered part of the question, well, good. I'll understand that when I get to the end.’” Trying to understand that also if you—I think if I had gone down the path—if I had worked very hard to reproduce the path that they had taken, I might not have gotten as far, because then you start thinking about it the way they were thinking about it, right? Anyhow. So yeah, the work with Rob would unfold over decades. But spending that year in Groningen was life-changing, absolutely.
KITP seems like a very interesting choice for you. How did that come together?
So, my—
I mean, you're not doing any of the things for which KITP is classically known. You're not doing cosmology.
Right.
You're not doing lattice gauge theory. You're not doing condensed matter.
Right.
What was it at KITP that was so attractive to you?
Let’s make a couple of observations. The first observation is that this was, as I like to describe it, the pre-K-ITP.
Right, right. The ITP.
Yes. It was just the ITP. And so there’s two questions. One is, why would I want to go there? Well, I guess three. Why would I want to go there? There’s sort of questions from both sides. What was I doing there, and what were they doing hiring me? So what was I doing there was—this again, I owe to Alan Bearden, my PhD advisor. This was still a relatively new place, right? The ITP opened in ’79, and I arrived in ’84. So there had been five years. And it comes in five-year chunks, so I was in the second cycle. And so it was still a relatively new place, but it had very quickly established itself as a just fantastic place to be a postdoc doing theoretical physics. Incredibly lively, very interactive.
Was there any track record already in the first five years in biophysics?
Zero.
Or were you really coming to create something?
Zero. So Alan thought that this would be good for me, because I would be in that sort of—
Present at the creation kind of thing.
—cosseted environment of high-class theoretical physics, which somehow I had not been in at Berkeley. Some of that is I was doing things that most of the people in the physics department weren’t interested in, when I was a PhD student. Some of it is Berkeley was not a great center for theoretical physics at the time. There were some very good people, but it wasn’t lively in the way that Santa Barbara was.
And the flip side of this is, why did—so Alan had been a faculty member at UCSD at some point, and so he knew Walter Kohn, who was the first director of the ITP. Coincidentally, my stepfather-in-law—so my wife’s stepfather—was Keith Brueckner, who founded the physics department at UCSD and hired Walter, among other people. So actually, Charlotte knew Walter much longer than I did. She grew up with him as a kid. And Alan I guess called up Walter and talked to him about me, and said that he thought this was—I don’t know what he said; it’s not my business. What I do know—and I don’t know if I'm supposed to know it or not—but apparently in the discussions about hiring postdocs that year, Frank Wilczek said, “I’d rather take a chance on doing something new than have one more particle theorist.”
[laugh]
I mean, they hired some number every year, right, and—
What a very “Frank Wilczek” thing to say. [laugh]
Yes. And so, I arrived. To set the scene, it’s the second five years, so Walter had stepped down as director, and he was succeeded by Bob Schrieffer. Jim Langer was there. Frank Wilczek was there. I believe that Tony Zee was visiting in the first year that I was there and moved full-time in the second year. I don’t remember exactly how that worked. But anyhow, that interaction was incredibly important. And Doug Eardley was there doing astrophysics. There was an incredible collection of other postdocs. I remember, many years later, we had these seminars that were run jointly between the NEC Research Institute, Princeton, and Rutgers, and I looked around the room at how many people I knew from my two years in Santa Barbara, and it was quite astonishing. Yeah. So I arrived, and in particular—so Bob was very thoughtful about—
Bob Schrieffer?
—making sure all the postdocs—you know, he talked to all the postdocs.
Bill, this is Bob Schrieffer you're talking about?
Bob Schrieffer, yeah. He talked to all the postdocs with some frequency. Not huge. Actually, one of my favorite interactions with Schrieffer was the year the American Physical Society March meeting was in Las Vegas, when we somehow were on the same plane, one of these small planes. Maybe it was between Santa Barbara and L.A. And then we changed planes to Las Vegas or—anyhow, it was a small plane and we were sitting next to each other. And so we had this very relaxed conversation about physics and his own trajectory and so on. But on arrival, Jim Langer and Frank Wilczek decided that they needed to know what I was thinking about and what was interesting, and said—basically I was supposed to show up in one of their offices every week and stand at the blackboard for an hour and talk about what I thought was interesting about—what physicists should be doing, thinking about biology. And they actually arranged it; I think I gave a couple of these sort of informal blackboard talks, not specifically necessarily about what I was working on, but what was going on in the field that I thought was interesting. So yeah, they somehow decided that it would be good to have somebody around to do something different. And for me, it was just incredible, because it meant that I learned about all sorts of things in theoretical physics that I might not have otherwise known.
Absent the day-to-day collaboration with Rob, in what ways did that open up new doors for you, just in terms of how self-consciously you were looking not to be in that comfort zone?
So I thought about a lot of different things, some of which, again, were flat-out wrong, which is okay, I guess. I did a lot of exploring. I felt like I had a window in which I could just learn about things. I even wrote two papers that were not specifically about biophysics, one sort of more condensed mattery, and one sort of vaguely elementary particle-ish. Yeah. It was just a—there was—it’s interesting. It’s funny, I have this view of that period of just being incredibly stimulating and forming some very important friendships, scientific and otherwise. Our son was born. Spending a lot of time talking to people like Wilczek and Zee and Langer about what’s interesting, what’s a good theoretical problem.
And I remember—so Frank Wilczek, one of the other postdocs, John Moody, who also went on to do things in neural networks, and I—I don’t know, we were talking—I had this long-standing interest in the limits to measurement, and so we started talking about, well, what about various things in physics where there’s something that we know is almost zero, for which there’s no evidence of it not being zero. What’s the most sensitive measurement you could make? So an example is electric dipole moments for elementary particles, which should violate—if the electron has an electric dipole moment, then that violates time reversal invariance. So that would be interesting. And there was some bound that came from atomic physics. And there was all this excitement about doing quantum limited measurements with superconducting devices. And in particular, superconducting quantum interference devices—SQUIDs—are very sensitive magnetic field sensors. And so the question was, could you use SQUID magnetometers to make an essentially macroscopic measurement that would help you to bound the electron electric dipole moment? And so that led us to thinking about noise in measurements, and I don’t know, you'd get some—you'd basically get some macroscopic chunk of something in the right part of the periodic table, and basically apply an electric field, and see if it magnetizes. Because you can measure magnetic fields very sensitively. You can measure magnetization very sensitively. And so we argued that a feasible experiment of that flavor could surpass the existing bounds on the electron electric dipole moment. And so that involved understanding noise in SQUIDs and thermal fluctuations in the material, and all sorts of other—it also involved, and I don’t know, this is one that—I don’t know who the audience for these interviews is, but every physics student knows that there’s this horrible thing about units in electricity and magnetism. And there’s the dielectric permittivity of the vacuum, the permeability mu zero, and things in MKS units. And there are these four pis floating around. Anyhow, there’s all this stuff, right? So there’s the beauty of Maxwell’s equations, and then there’s—you actually want to calculate how strong the electric field is, in some real material. And there’s a bit of messiness, there.
And I remember that Frank had these very elegant ways of cutting through all the messiness, where he’d say, “Look, there’s one thing I know, and that’s that the fine structure constant is one over 137.” So there’s some set of units—so you're doing some problem in which there are charges on the electron that appear in your calculation. But it’s not particularly a quantum mechanical calculation, but you put the h-bar c’s in there anyway, because you know that there’s some system of units in which e squared over h-bar c is one over 137. So, choose that system of units. And then h-bar and c are just what they are, right? Because those are things that don’t involve all this electromagnetic stuff. So spending time at the blackboard with Frank, I learned a lot about abstract theoretical things, but I also learned about how you get—how you get to numbers, in a way that has a certain elegance to it.
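The trick can be written out compactly, with standard symbols and rounded numbers: in Gaussian units the fine structure constant is the dimensionless combination

$$ \alpha \;=\; \frac{e^{2}}{\hbar c} \;\approx\; \frac{1}{137}, \qquad\text{so}\qquad e^{2} \;=\; \alpha\,\hbar c . $$

Any expression with factors of the electron charge can then be traded for powers of h-bar, c, and the dimensionless alpha, and the numerical estimate follows without ever touching the epsilon-zeros and the four pis.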
So I haven't thought about that one in a while, but it’s also interesting—I would say that the things that we did in that paper—which by the way, it’s not a very important paper, and the reason is that very quickly, the atomic physicists got better. So I actually know atomic physicists who have worked on that problem, of getting bounds on the electron EDM [electric dipole moment], and they know that there was this idea of trying to do it another way. That there was some little window in which it looked like it might be better. But they eventually blew it out of the way. Which is great. That’s fine. I learned a lot by doing this.
So yeah, so I interacted with Frank and with Jim. And Tony Zee was special, because he had this ability to become interested in what other people were doing. And of course, he’s a very powerful theoretical physicist. And his idea of having a conversation about what I was thinking about was he wants to go home and calculate something. And I remember actually not from the period when I was in Santa Barbara, but subsequently—I guess I was at Berkeley and he came for a visit, and we spent the afternoon chatting at the blackboard. And like a week later, this envelope appeared in my mailbox. Which, you know, this is pre-email, right? I think there was email, but anyhow, it was an envelope, with this much handwritten notes, of calculating things.
And so the most important thing that—so Tony and I spent some time thinking about how to use these ideas. What are the physical limits to inference of more sophisticated quantities? So it was sort of in the spirit of what Rob and I had done, or were doing, as it played out in thinking about the estimation of visual motion, but we thought about all sorts of other things, like what about the perception of visual symmetry? What about sort of higher-order things? And I would say that in some ways, we were doing things that would have echoes in work done by other people later, but we were not showing—we were not being very strategic, in retrospect, about which things we were choosing as examples. We tended to choose things that were really challenging mathematically, and maybe not—so you could only solve them in regimes that perhaps weren’t so interesting for actual visual perception and things like this. Okay, fine.
But we wrote this one paper together in which, inspired by the things that Rob and I had done, we tried to think about this issue of how continuous signals are represented in sequences of discrete action potentials. And we did that in the context of models for how the coding would work, and asked, what could you say about the problem of decoding in the context of these models? And what we realized was that there was a regime in which decoding became simple, even if encoding was complicated. So the process by which continuous sensory inputs get turned into action potentials is a very non-linear process. But what we showed is that you could try to turn the sequence of action potentials into an estimate of the continuously varying signal.
Oh, and by the way, that non-linearity was something that Rob and I had appreciated and been exploring. And we were seeing that if you asked what—so a crucial thing that Rob and I had done was to really think probabilistically about the problem of neural coding. So not, “Here’s a response,” not “How does this particular thing in the outside world get represented in neural responses?” but turn the problem around, take the point of view of the organism, and say, “I'm sitting there in the brain. I see this neuron do something. What is the set of things in the outside world that this is pointing to as being possible?” And of course that’s a set of things. It’s a distribution of things in the outside world. And so thinking about it probabilistically was quite natural.
And we pointed out that you could think about that—the probability distribution of signals in the world conditional on observing a single action potential. But you could also think about observing patterns of action potentials, and how, as you changed the internal structure of the pattern, you pointed to different signals in the outside world. And our favorite one was, what if you had seen nothing happen? Well, that actually isn’t trivial. So if nothing happens from the neuron—if the neuron is silent for a very long time, that also tells you something. And so we were very proud of measuring the information carried by nothing. [laugh] But it was all very intricate.
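One way to make "the information carried by nothing" precise, stated only schematically here, is that observing any response, including a long silence, reshapes your beliefs about the stimulus, and the information carried by that particular observation is the divergence between the new distribution and the prior:

$$ I(\text{silence}) \;=\; \int ds\; P(s \mid \text{silence}) \, \log_{2} \frac{P(s \mid \text{silence})}{P(s)} , $$

which is nonzero whenever silence points you toward a different set of possible stimuli than you would have assumed a priori.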
And so what Tony and I realized was that there was a regime—precisely the regime in which you might think—which was as far as possible from the kind of classical view of neurons having a rate of generating action potentials that was slowly modulated. So imagine that now the individual action potentials are actually quite far apart, and so the signal could change in between the action potentials. So now you can no longer think about, well, just average over time and forget about the fact that they were discrete action potentials, [that] there’s just a rate at which action potentials are being generated. So what we showed is that there is a regime in which you can sort of do perturbation theory, to do the decoding of signals in this class of models. And in that regime, the decoding algorithm became very simple. Basically, every action potential stood for something, and you added up the contributions of the action potentials. The next term of perturbation theory was to let the action potentials interact in pairs, and so on. It was a kind of more sort of cluster expansion, if you want. So that, and then we did a bunch of other stuff, too. But that was the insight.
And when I finished my time in Santa Barbara, and I moved up to Berkeley as a faculty member, Tony and I finished that work. I started talking with a couple of graduate students, and I realized, oh, wait, you could really just—so this class of models that we were studying were surely wrong, in detail, but the idea that there was this perturbative regime for decoding neurons, that could be right. So you could make a more complicated model, and then that more complicated model would also have a perturbative regime in which the same general structure would appear. And so if you want, it’s sort of the difference between saying, “I'm in a regime where all that matters is a certain correlation function,” and in this particular theory I can calculate the correlation function. But the idea that I'm in the regime where all that matters is this correlation function is more general.
So what we developed was an approach where the theory tells you the form of the general structure, and then you use the data to determine what the correlation function is that matters. And that was how we finally read the neural code and were able to show all these results about the precision of things, and so on.
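In rough form, and with purely illustrative symbols rather than the notation of the papers themselves, the structure being described is an expansion of the stimulus estimate in contributions from individual spikes and then from pairs of spikes,

$$ s_{\rm est}(t) \;=\; \sum_{i} K_{1}(t - t_{i}) \;+\; \sum_{i<j} K_{2}(t - t_{i},\, t - t_{j}) \;+\; \cdots , $$

where the t_i are the spike times. The theory says that there is a regime in which the first term dominates, so that every action potential stands for something; the kernels K_1, K_2, and so on are then determined from the data.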
So again, there were the experiments with Rob. There were the first efforts to think probabilistically. That then led to work with Tony on these models, which then with Fred Rieke and David Warland, we went back to Rob’s original data and showed that, oh, maybe this neuron is actually in this regime. And that’s when it all—so that paper finally came out in 1991. So you get some sense for how long—
It’s a long gestation.
Long gestation, yeah. I sometimes look at the long gestation of these things, and I feel like, “I should have known better.”
[laugh]
I should have had a way of short-circuiting it.
Bill, I'm not sure if it’s possible to ascribe any grand plan to all of these things. And as you describe it in sequence, it does make sense how you can shift from these collaborations with Rob to working with Frank Wilczek and Tony Zee. Only in you describing it does it sort of all make sense. And so I guess my question is, did you know that you would be maintaining this long-term collaboration with Rob? And one of your motivations was—
It was clear in the period when I was in Santa Barbara that we had unfinished business. And we wrote some papers which were sort of first versions of things that would eventually become papers. [laugh] I mean, it took—actually that work finding its way not only to intellectual completion but to publication took quite some time. So the first really big paper that we ended up writing came out in late 1988. And I remember, because Rob had come to Berkeley for a visit with his family, and we had various adventures as families. And one of the things we did was to go to the library and get the copy of the journal.
I guess what I'm asking is, as you're standing at the blackboard with Frank Wilczek, is part of the motivation “I need to be here doing this to take my collaboration with Rob to the next level”? Mentally, are you separating these projects, or are you seeing it as sort of one large sweep of research, and there’s different ebbs and flows that need to happen to get to where you want to go?
[pause] I think that I already had hopes for a more coherent view. A sort of coherent physicists’ view of the phenomena of life. So look, from the very beginning, I was interested in these very molecular things, and then I was interested in these things about sensory systems. And so that was somehow schizophrenic. And when I went to write my PhD thesis, the way I put them together was—I thought that what was really important was that they were bound together by the importance of quantum effects. Which is wrong. But of course, it made it possible to write a coherent thesis.
There’s a very important—the last really important interaction I had with Alan in his role as my thesis advisor—well, actually, it was when he pushed me to go to Santa Barbara, but in the thesis—was I knew that it was getting time to finish, and so I went and talked to him about it. And he said, “Okay, so you set up your qualifying exam, and you do this and that. And why don’t you write an outline for your thesis and the introduction?” And so I did that. And the introduction was very grand and philosophical and about how were we going to think about all these problems, and why was quantum mechanics going to be important and everything.
And as I say, some of that turns out to be wrong, but anyhow, there was this very grand view. And then the outline was much more technical. Let’s understand how to calculate the—under what conditions will you see coherent versus incoherent behavior in an electron transfer reaction, right? In some very specific context, very specific model. And Alan said, “This is very nice. There’s only one problem, which is that the introduction that you wrote is not the introduction to the thesis that you outlined.” And I said, “What do you mean?” And he said, “Well, you've written this very grand introduction, and this very technical outline.”
He knows you better than you know you.
Absolutely! And he said, “I will sign either one—
[laugh]
—but you have to pick one.”
[laugh]
And I said, “Well, then it’s obvious. I have to do the outline. Because I can’t sustain the introduction.” I can’t get to the end of the—and he said something which was incredibly empowering. And so this is an example also of—sometimes you get—anyhow, he was in many ways a fantastic advisor. He said, “Well, this is your thesis. You don’t have to get to the end. You can explain how far you got.” And so that’s what I did.
Now, as I say, some of the places I thought I got, I was wrong about. But on the other hand, trying to articulate a vision of where I was trying to go, even if in the end, some of it was wrong, that was incredibly empowering. And so I think that I maintained this idea that somehow all of these were supposed to connect together in some way. And yeah, did I have a very concrete view of how the things that Tony and I were doing connected to the things that Rob and I had done? Actually, no. At that very moment, no. It was we had done these things looking at real data; let’s go think about models and explore the models. And then once we saw the results from the models, we realized, “Oh, but wait, we could go back and look at the data in a different way.” And that’s a—yeah, there’s some faith that it’ll all connect up. And of course there are dangling things that don’t connect up. That’s okay, too.
How had the ITP changed over your time there? Was it really in growth mode during those years?
Growth in those years—no, it wasn’t growing. It was full. It had grown over the five years before I got there. It filled one floor of Ellison Hall, which was an incredibly ugly building. And when they renovated the floor, they did one marvelous thing, which was that they put this thick blue carpeting down. I don’t know if you've ever talked to anybody else about the early days of the ITP; they had this fantastic blue carpeting. And of course, it was Santa Barbara, so it was very tempting to walk around barefoot on the carpet. There was a bottle of mineral oil that was left in the room with all the mailboxes, so that if you had been walking on the beach, you should wash your feet with the mineral oil, because there was tar on the beach.
Let’s see, what else was there? Oh. A very important part of it was that there was this remarkable woman who was essentially the administrative director of the Institute. And during the period that we were there, so the University of California had some complicated list of job titles. And there was a tendency that men who had senior administrative positions in the departments had managerial titles, and women had secretarial titles.
Of course.
So it was a big breakthrough when Bonnie Sivers became a Management Services Officer, which was what it was called.
Ah. [laugh]
That happened while I was there. Most of the postdocs were terrified of Bonnie, because she was the one who enforced the rules. So the first time I went there—again, I went for a little workshop or something—I mean, I had already been offered the position and had agreed to go, but then had the chance to visit. By the way, I went to the Netherlands—I had decided to go to the Netherlands without even having been in the country, let alone having been at the university. I decided to go to Santa Barbara without having visited there. I mean, before I visited there. Anyhow, it was a different time.
Maybe you were looking to make up for it by going back to Berkeley.
Well, yeah, right. So then I went to Berkeley. Anyhow.
Familiar ground.
So I was on my way to Santa Barbara for a workshop before I moved, and I thought I would use that opportunity also to take care of a few administrative things, which I would do. And coincidentally, before going, for some reason which I now cannot remember, I wanted to look something up in the book on scattering theory by Goldberger and Watson. That’s just sort of a classic of its time. A time now passed, I believe, but still. Goldberger and Watson is this thing that’s about this—you know, 600 pages long or something.
Now it turns out that Goldberger and Watson were both very close friends of my father-in-law, and so I was curious—and I guess the first theoretical paper that Keith [my father-in-law] wrote was jointly with Murph Goldberger. So I was curious whether they had had any conversations that led to what was in the book, and indeed—so I looked in the preface, and indeed after—the preface also was pretty long, as I recall, more or less in proportion to the size of the book. And indeed, some chapter of Goldberger and Watson is said to owe a great deal to the conversations they had with Keith. That’s fine. But then it went on.
And so the acknowledgements go on, and I kept reading. And it said, “And finally, we would like to thank Mrs. Bonnie Sivers for typing the manuscript.” And here I had this letter in my hand from Mrs. Bonnie Sivers, now the administrative director of the ITP. And so I arrived, and I went into her office, and she was a stern woman who established her authority very quickly. You know, none of these bratty kids who were showing up were going to give her any guff, and that was clear. And so she told me all the things I needed to do in order to be sure that I would be gainfully employed and get a paycheck and everything. And at the end, I said, “And by the way, are you the Bonnie Sivers who typed Goldberger and Watson?” And she sort of melted.
[laugh]
“Oh yes. Actually, I am.” And she started to reminisce about her days as the secretary of the theoretical physics group in Berkeley during this time, and all the characters who passed through. And then life goes on, and she moved with her husband to do something else. And just as she was getting to the stage in life where she thought, you know, “I’d really like to go back and do more professionally,” she saw an article that they were opening this Institute for Theoretical Physics in Santa Barbara, and they were looking for staff members.
Perfect. [laugh]
So yeah, it’s just kind of remarkable all around. So it was not only a place where there were people like Frank and Tony who represented a young generation of theoretical physicists, and people like Bob Schrieffer or Walter Kohn who represented a kind of link to generations past, but Bonnie herself was a bridge between these generations, which was quite marvelous.
Bill, I think that’s a great narrative turning point for my last question for today, and that is—to return to this question of career orientation and how you were looking to define yourself, you have done, at this point, a terrific job, between the Netherlands and the ITP of delaying these sort of existential professional questions, right?
Yeah.
And so before the question of how the opportunity to come back to Berkeley materialized, as the postdoc was winding down, maybe—with or without consideration of what open faculty positions were available at the time, after all of this postdoctoral work, what kind of a scientist did you see yourself as becoming, in an academic environment? Whether they were open or not, what were the kinds of faculty positions or departments that you thought at this point you would be most comfortable and effective in?
So as it turns out, I somehow was encouraged to think about this early. So in those days, the standard postdoctoral appointment was two years. So I had done a year in the Netherlands and then I was going to be doing two years in Santa Barbara. But people said, “Well, you've already been a postdoc for a year. You've done stuff. You should at least look around, immediately on arriving, rather than waiting.” And so I applied to UC San Diego because that was one of the few places in which there was a well-established biophysics group in the physics department, some descendants of the people that my father-in-law had hired. In particular, there was a fantastic experimentalist, George Feher [mentioned above], who indeed worked on the primary events of photosynthesis and played a huge role in understanding them. Anyhow, that produced nothing. I don’t even think I got a response from applying. Tony Leggett had been visiting Santa Barbara, and I think was very intrigued about what I was doing, and encouraged me to apply to Illinois, which also had a real biophysics group inside the physics department. But it was very circumscribed. They did a very—
Not like the Netherlands.
Well, they had many faculty members, but they did things in a sort of small circle at one level of biological organization, so it was very molecular. I had overlap with some of those things, but I also was interested in other things. But Tony was of course a great figure already and I was really honored that he took an interest, so I pursued that. And then there were these rumblings about trying to bring me back to Berkeley. And so basically, I applied for three jobs. Nothing happened in San Diego. In the end, I was offered a position in Illinois, and for a variety of reasons decided that that wasn’t quite the right fit. And then the thing at Berkeley worked out. I also knew that I could wait, right? Because this was the first year of my postdoc at Santa Barbara.
Bill, I'm curious if you ever thought about a career at one of the national laboratories.
Mm! When I was a PhD student—we should set a cutoff, because you've been very disciplined about this. So let’s say 4:30, just to—
We'll get you up to Berkeley.
Right. So when I was a PhD student, I had given an invited talk at the March meeting of the APS, and I think I may have mentioned that this was the March meeting during which Ronald Reagan gave his Star Wars speech. So it was a kind of surreal experience in many ways. A consequence of that was I was invited to give a colloquium at Sandia National Laboratory, more or less to give the—so Sandia at the time, as you may know, they were responsible for all the non-nuclear parts of nuclear weapons.
Right. It’s the weapons lab that’s not Los Alamos or Livermore.
Right. That’s right. It did other things, too, but that was the raison d'être. And actually this led to some discussion among myself and my cohort of, “Should one even go and visit such a place?”
[laugh]
And the conclusion was that, well, I was going to give a talk which I had given in public at a meeting, right? An extended version of that. I wasn’t doing anything for them that they couldn't have had access to, right? And the truth is that although all of us had strong feelings about the weapons laboratories, none of us had ever been to one, so we didn't really know very much. And so the conclusion was, well, maybe it would actually be interesting to go. So the first weird thing is that—they tell you this as you're giving a colloquium—they give you an honorarium. They name a number which is not far from your monthly salary as a graduate student, right? So that’s really weird. But I didn't turn it down. So I go. And at that point, there was no white in my beard, but it had more or less the same structure that it does today.
[laugh]
I had much more hair, and it came down to my shoulders. And I think they figured out right away that I wasn’t planning on working there permanently.
[laugh]
But over the course of a two-day visit, I was passed through a series of group leaders and directors. And I don’t remember what the structure was, until somebody was telling me that, you know, “We know that you're planning on going to Europe for a year, but you might want to come back to the States for some portion of the—whatever portion of the summer you’d be interested in, and come spend time here as a consultant.” And he actually said, in so many words, “We can make it worth your while.”
Which meant what, exactly?
I presume it meant that there was a very large salary associated with this. That was at least the impression that I took away from it at the time. His phrasing was such that—I don’t know if you remember the movie The Godfather—I was expecting to find a horse’s head in my bed that night—
[laugh]
—back in my hotel room. “We can make it worth your while”?
[laugh]
“Who talks like this?” Interestingly, not long after that, I was back in Berkeley, and my father-in-law, who spent quite a lot of time—well, he was one of the founding members of JASON, and spent quite a lot of time consulting—I told him this story, and he looked at me, and said, “What are you thinking?” And I said, “I just didn't find it very appealing.” I mean, I could talk about the details of what I learned there that made it not appealing, but doesn't matter. And he said, “Well, that’s good” and proceeded to explain how it was that spending more time consulting was important for him because of his complicated personal situation, but he ended by saying, “You have nothing more valuable than the time to think about the problems that you think are important.” Which in some ways was a sort of sad thing to hear, because clearly he had felt that he had to give up some of that time to earn more money. But he said, “As long as you can manage not to, you shouldn't give that up.”
That’s powerful.
It was powerful. So you ask about national laboratories; Sandia isn’t the one that should be at the top of the list. But somehow I had very romantic ideas—I wanted to be a professor. The image of—again, harkening back to this article by Bernstein characterizing how all these guys conducted themselves—yeah, I thought that was what you were supposed to do. That was where you would go to lead an intellectual life. And so that was the kind of job I was looking for. It didn't occur to me to look in other places. I had various prejudices about institutions on the East Coast, which were perhaps not uncommon among Berkeley alumni. And I don’t know whether—there was nothing that led me to reexamine those at the time.
So somehow, the thought of—I think a lot of students go to Berkeley and come away thinking that the best thing in the world would be to go back and be a faculty member there, because it’s such a wonderful place. So that was certainly how I felt at the time—that, what could be better? And then when there started to be little hints that that might be possible, then, well, that was kind of the dream. And I remember actually Jim Langer was particularly clear about asking, “Have you thought through what you want? I want to be supportive of whatever you want. What is it that you really want?” And somehow I had my heart set on going back to Berkeley, and that proved to be possible.
Did you have a sense who at Berkeley was really driving the interest in you coming back?
Well, my advisor was pushing from the biophysics department, which was part of the biological sciences. And inside the physics department, I think the person who was pushing was a condensed matter theorist named Leo Falicov.
Ah. Yeah!
So the offer came to be joint between the two departments.
Did you have any concerns? The last time you were there, you were a student, right? Did you have any concerns that that was a shell that would be difficult to outgrow, even though you had grown since you had left? In other words, you go to a new university; you're a new entity altogether. No one knows you as a graduate student.
Right. [pause] I'm trying to think. I don’t remember that as being something I worried about. Again, I had a kind of naivete about these things. I sometimes wish that I could transmit some of that naivete to the next generation—
[laugh]
—because it is in a certain way liberating. Of course it is by definition naïve; you're ignoring real things. But many of them are real things, and some of those are real things that would be worth worrying about. Except, it’s not clear that you can do anything about them.
Well, one that would be even more concrete than sort of ephemeral notions of whether you're all grown up or not would be simply your appreciation or lack thereof of your prospects for tenure. What was Berkeley’s track record on hiring from within?
They did quite a bit of it. They continue to do so. That is a conversation for next time. Thereby hangs a tale.
[laugh] There you go.
I don’t know if you knew that, in advance, but—or could just guess that it might.
Well, but that also—that history at Berkeley, which obviously is much different than a Harvard or a Stanford, where it of course is the opposite—I do appreciate that history, but I do know that it has waxed and waned over the years.
Yes.
And your timing is certainly—
Well, so for example, this was also the moment at which the—so the appointment was going to be half and half, half in physics and half in what was then a biophysics department, which would become, as biology was reorganized, a biophysics division of a large molecular and cell biology department. And that reorganization of biology was happening during this period. And I was under the—and it turns out, then, that all—so I—I can’t remember whether I actually had an offer letter, or whether I just had the informal statement from the department chairs, and so I now don’t remember all the timing. I had gone to visit Los Alamos, actually. I don’t remember what brought me there. But I was sitting in George Zweig’s office. And I had gotten to know George because of his interest in hearing. Remarkable character. Many fond interactions over the years. And he had asked—he had taken interest and wanted to know how things were going. And he had his affections for Berkeley as well and so was happy to hear that things seemed to be working out there. And probably had written a letter on my behalf? I don’t remember. I'm told that because it was a very unusual appointment, they actually got more letters than they ordinarily would for an assistant professor.
And while I'm sitting in George’s office, the phone rings. He excuses himself to answer the phone. And he says, “Ah” and he puts down the phone and, you know, “I really need to take this.” So I go out of his office. I come back in, and he said, “So tell me about what the situation is in Berkeley again?” [laugh] And it turned out, he had just been on the phone with somebody, possibly with Dan Koshland, but at least with somebody who was on the Chancellor’s Advisory Committee on the Biological Sciences. It turned out that one of the features of trying to reorganize the biological sciences was that there was this advisory committee to the chancellor, who essentially took autonomy away from the existing biological departments. So mostly, they did their thing, but then anything they really wanted to do had to be passed upon by this committee. And so my case had evidently gotten to this committee [laugh] while I was sitting in George’s office. [laugh] And whoever it was—I think it might have been Koshland—decided that the thing to do was to call George. [laugh] So that was kind of funny.
So in that sense, it never occurred to me, for instance, to ask, “Well, I have a joint appointment within two departments, one of which is going to be reorganized, so can you tell me something about what is going to happen in this reorganization, to assistant professors?” It is a perfectly sensible question which I did not think to ask!
[laugh]
Nor did anybody volunteer!
[laugh]
So yes, I don’t know.
It fits with the theme. The theme of naivete.
Absolutely. Yeah. And, as you'll learn next time, it worked, but there were some bumps.
And on that note, we'll pick up on those bumps for next time.
Okay, thanks.
[End 201016_0344_D]
[Begin 201023_0350_D]
This is David Zierler, oral historian for the American Institute of Physics. It is October 23rd, 2020. I am so happy to be back with Professor William Bialek. Bill, it’s good to see you.
Good to see you too, David.
So, we left off last time where we're going to continue on this theme of naivete as an assistant professor. And you left with the cliffhanger where the naivete, in regards to being committed to the research and the science and not being overly concerned with career considerations served you well generally, but there were some bumps in the road. So let’s develop that theme now of—
[laugh] Okay, yeah.
—when those bumps started. You had obviously a honeymoon period right at the beginning at Berkeley. And then tell me about these bumps. What were some early signs that these bumps were in your near future?
So let’s see. The beginning was a little funny. I arrived—so I went, and it wasn’t entirely clear whether they wanted me to be a faculty member as of the Fall of 1986 or the Fall of 1987. I had a fellowship to be resident in Berkeley, but not officially as a faculty member, for ’86 to ’87.
This was soft money? What was the funding source?
No, this was the Miller Fellowship. So there’s the Miller Institute for Basic Research at Berkeley which provides funds for postdoctoral fellows, visiting professors, sabbaticals, and so on. And this very funny thing happened where the chair of the department, who was then John Reynolds, a quite senior experimentalist, gave the first colloquium of the year in the Fall of 1986, the sort of state-of-the-department address, and he announced that new faculty had been appointed, and he listed my name. And so I went up to him afterwards, and I said, “That’s really interesting. I didn't know that I had actually been officially made a faculty member as of July 1, 1986.” And he said that in 35 years at the University of California, this was the first time he’d come across a case where someone was appointed as a faculty member and they didn't tell them.
[laugh]
Of course, they weren’t paying my salary, right? So maybe—I don’t know. Anyhow. So that was weird. Let’s see. What was the first sign of a glitch? At the time—and I think this might still be true; I haven't looked—assistant professors at the University of California were appointed, actually, to two-year terms. So that meant that as your second year began, they had to discuss your reappointment, which also means that they had to evaluate how you were doing. And apparently, the physics department’s view was, well, I had been on leave for that first year, so they didn't really need to evaluate me.
I see.
Whereas in the biophysics department, I think they didn't see it that way. Anyhow, somehow—I sort of shrugged my shoulders. I don’t know what’s going on, right? And then the deans came back and said, “Well, he was a faculty member. He was productive.” In fact, I guess it was actually a pretty good period. I had gotten a lot done. And certainly, relative to the time in which they had decided on making the appointment, I had done quite a lot. And so the deans came back and said, “Not only do we think he should have been evaluated, we're just going to step in by hand and promote him two steps rather than one, through the ranks, through the salary scale of assistant professors.”
And Bill, if we can just hit “pause” on that for a second, because it’s important, for context, since your productivity was a consideration here, let’s go back just for a few minutes on what you were working on, right at this time, or what multiple things you were working on.
So, [pause] it’s funny, because my view is that things scientifically crystallized in the couple of years just after this. So I had kind of spread out, and I was working on things that were about coding and computation in the brain that derived from my interactions with Rob de Ruyter back starting when we were in Groningen. I was thinking about a variety of more molecular biophysics problems, some inspired by the early events of photosynthesis, some sort of scattered around. I was trying to think a little bit more about neural network models in general. So it was quite broad. And I was also in a period of walking around and talking to experimentalists trying to understand what was going on around me, having learned that talking to the experimentalist next door is a good idea. Even if next door is the next building instead of the next office, that was probably—the principle was probably correct.
Were you working closely with Rob during this time?
Indeed. We were corresponding, as one did in those days. And he would come for a visit, which was quite memorable, in I guess the Fall of 1988, so that’s a little while from then. And as I think I might have mentioned, it’s partly memorable because he and his family decided to take a kind of trip around California while they were with us in Berkeley, and that involved renting some very large recreational vehicle. I guess they thought they wanted to have the full American experience.
[laugh]
And they memorably parked it at the entrance to Chinatown in San Francisco when we met for lunch there one day. Anyhow, this is really not so relevant.
[laugh]
But it does stand out. But one of the reasons that I remember it is that I remember going with him to the library on campus to see the journal in which our first major paper came out. So that’s the Fall of 1988.
Oh, that’s great.
So that must mean we were working together before this, [laugh] quite intensively, because a paper would come out. Anyhow, and I was starting to think about other aspects of signal processing in the nervous system and whether some of these ideas about optimization that we had been beginning to think about in fly vision were more general and so on. So there was a portfolio, and I was exploring.
The important point here is that the people who would be in a position to make decisions about your promotion prospects, they were aware of your productivity. This was obvious to them.
Yes. It’s hard to know whether they were right, at the time. As I say, when I look back, I felt like the most important papers were yet to come. But yeah, I was doing a lot of stuff. Right. So I arrive in the Fall of ’86. So in fact, I am a faculty member starting July 1, 1986, but I'm on leave for the first year, which is fine. I start doing things that would result in graduate students coming to work with me. I wouldn't describe what I was doing as recruiting graduate students. I mean, I would talk to them in various contexts. There was in those days a system of—maybe they still have it—the incoming PhD students in physics took a course called Introduction to Research or something like that, and all the faculty streamed by and talked about what they were doing, which kind of levels the playing field, which I think is actually a very important thing. So in a department with some very distinguished and accomplished senior people and very well known, the new kid on the block still had a crack at all the students, which was great. And I have to say, the truly extraordinary thing from my time at Berkeley—I mean, I've had wonderful students and postdocs ever since, but getting access to really great students when you're a beginning assistant professor is really an enormous gift. So that was incredible.
Even from the beginning, were the kinds of graduate students who were attracted to work with you—was part of that attraction that they also didn't fit in particularly well in any one neat box? Is that sort of a commonality with your graduate students?
Well, I think what is fair to say—so, in the years that—I was officially on the Berkeley faculty I guess—I arrived in the Fall of 1986, and I left at the end of calendar 1990. So let’s say—so I don’t remember, I might have officially been a faculty member but on leave for the following semester. I don’t remember exactly how it worked.
I remember that my—I guess this is at the tail end of the question you began with, about what were the first signs that maybe the road wasn’t going to be so smooth—I remember that my efforts to be a faculty member on leave for a longer period of time as a courtesy so that I could finish up with a couple of graduate students, that was turned down, which was interesting. So yeah, is it the case that students who were interested in working with me somehow didn't fit with other things? I don’t know. One student who came to work with me actually started finding out what I was doing when he was still an undergraduate at Berkeley, and then he stayed as a PhD student and started to work with me. This is Fred Rieke, who is now a professor at the University of Washington. And he’s to the manor born. He is the son of a very prominent astrophysicist, who—expert in detector technology. In fact, I believe it can be said of him that he wrote the book on some class of detectors. I want to say in the infrared, but I would have to check to be sure. And he [Fred] did his senior thesis working in Paul Richards’ group, who at the time was working hard on measurements of the microwave background with his graduate student, Andrew Lange. So Fred was part of that. And somehow his interests in how do you measure very small signals in the laboratory resonated with the things I was talking about, about how does a visual system count photons and so on. And so he shifted. But he certainly wasn’t a misfit. He was recognized as one of the strongest undergraduates. And I think there was a path for him to do something in a very traditional core of the subject. Interestingly, with me, during his PhD years, he did almost entirely theoretical work, although he dabbled a little bit in experiments.
To fast forward, when I left Berkeley and moved to NEC, I can’t remember exactly how the timing worked—whether he finished his PhD while he was at NEC or—anyhow, so he spent a little time there, and there was a little gap where he was kept on at NEC as a postdoc, and the idea was that he was going to work in an experimental group.
Albert Libchaber had come to be a fellow at NEC and to be professor at Princeton. At NEC, he was going to build a lab to do various things in biophysics. He was shifting fields at that moment. And actually moving to NEC was part of the opportunity for him to shift fields. And my interaction with him is something we'll come back to, that was very important for me, and I think helped very much with building the kind of special community we had at NEC. But in any case, the idea was, okay, he’s building a new lab. Fred has some window of time until he goes off to a postdoc, in which he will be doing experiments. He had, of course, as an undergraduate, done experiments at a rather high level. So he’s going to go hang out in Albert’s lab and help him try things out, figure out which direction he wants to go in. It was kind of low-cost for Fred, because he didn't need to get something done for those six months. He just could explore with Albert. And I think after about two weeks, Albert walked into my office, closed the door behind him and said, “You weren’t actually going to try and make him into a theorist, were you?”
[laugh]
And you know, Albert, he’s not always generous in his praise of his fellow experimentalists. [laugh] Anyhow, so I had these extraordinary young people come and work with me. That was fantastic. So you were asking about the bumpiness.
I want to just stay on the graduate students for a second, because I'm curious—your fluency in naivete, self-directed, is obvious. But I wonder if you passed that along to your graduate students from the beginning, or if you were more self-conscious or more careful in the consideration of the future careers of the graduate students under your care.
[pause] That’s a hard question. Let’s see. The results were great.
As were your own. But the question is, was it the same path or not?
Yeah. I think that I probably demanded of my students rather more—so I was young. I had a lot of energy. A lot of students came to work with me. Maybe too many. There were lots of ideas floating around. I had lots of ideas. So there were plenty of things for people to do. There was a real group camaraderie. Not everybody loved each other, but they all got along, and there was a sense of community. Between the group who were working with me and then people who did things in different parts of biophysics on campus that weren’t necessarily explicitly theoretical but they did experiments that had theoretical import, and had a little bit of that style, you could put together an audience for a seminar, which was important. Did I pass—was I more professional—was I less naïve in dealing with my students than I was in shaping my own trajectory? I find that hard to believe.
There’s also the difference between ability and intent. Even if you didn't necessarily have the tools, the career tools, as a young assistant professor, perhaps you sensed a degree of responsibility for others that would have been less cumbersome for yourself.
Yeah, so I took my responsibility as mentor for the students very seriously. For example, I can remember a student who was kind of drifting and hadn’t been super productive, and I was worried. And I remember spending a lot of time thinking about how to—what do you say? Do you say to someone, “I don’t think you're going to make it”? What do you do? I remember what I came up with, which I think was fair, was “I don’t believe that this level of productivity will make it possible for you to continue as an independent researcher, as a theorist.” Now, there’s lots of things—so something has to change. And I don’t know what it should be, right? And it’s also a condition of the world, rather than a condition of you, right?
And I wasn’t being—I'm not a psychologist, right? It caused me to think about that—what is going on here? And not long after that conversation, he formed a plan for what direction he wanted to take, and he has been extraordinarily successful in his academic career, doing things that are very different than what he did for his PhD, which is fine.
So I think an interesting feature—so what is true of the students who worked with me is that the ones who have been—so let’s see, there were nine of them who finished their PhD from those four years. Which is, as I say, maybe too many. But in retrospect, it’s hard to know. So one fellow, sadly, had mental health issues. And he really wanted to be a theorist in the hard-core powerful sense and do hard calculations, so he sought out particularly technically challenging problems in the more molecular parts of biophysics. He thought that my interests in things that were more related to neuroscience and the brain were too soft. He was openly critical of my joy in doing back-of-the-envelope calculations. Which is fine. And interestingly, he ended up doing a postdoctoral fellowship in visual psychophysics. So he went all the way to experimen…not only—he bypassed neuroscience and neural networks, and he went all the way to doing experimental psychology.
[laugh]
And worked with some of the best people in the field, and landed an excellent assistant professorship in a psychology department. And sadly, could not hold it together. So that is a tragedy. And that’s one where I think about whether—should I, or more broadly we as a community, been more sensitive, more aware, had more tools? It could also be that during the period when he was a graduate student, he could hold things together, and things changed. But I certainly never had a conversation with him about mental health issues when he was a student.
Also, 30 years ago is not a trivial amount of time in terms of it being—it’s a different time—
Yes, that’s correct.
—in talking about mental health issues.
And I think both in terms of students’ ability—well, by the way, during that period, there was also a woman who was an undergraduate at Berkeley, had done very well, from very complicated circumstances: of her two separated parents, the one who had the resources to help thought that she should go to Texas A&M, which was not very far from where she grew up. And she was someone who, when she finally ran away from all that and moved to Berkeley, she kept pet snakes, worked in the herpetology store, and would sometimes be seen walking around campus in sort of clownlike whiteface. So she didn't really fit in at Texas A&M.
[laugh] “Come to Berkeley.” [laugh]
But, you know, was very comfortable at Berkeley.
[laugh]
It was remarkable. I mean, she moved to Berkeley with some of her pet snakes. And finding an apartment that you can afford when you're a student and you have pet snakes is pretty difficult. So she actually lived in her car for some period. Anyhow, [laugh] and did fine, anyway. I don’t know how she did it. She was just an extraordinary woman. And so I and a couple of other faculty members encouraged her to stay and get her PhD, which she was on track to do, and she was coming to group meetings, and everything was wonderful. And then there was some small episode which might have been disruptive, but then you got the sense that it was more disruptive than perhaps it should have been. And at some point, she came into my office and sat down, explained that she was bipolar, and that she really didn't like using the drugs to control it, and that she had hoped that as a graduate student, the stresses would be manageable enough that she could deal with them without using the drugs to control her condition, but that was now clearly not going to work.
And an important part of that conversation was also that—so, you know, this is the mid- to late 1980s; she had a very good friend who was dying of AIDS. And she had been basically nursing him during this period, which no doubt—from my point of view, I would have said, “Oh, well, of course you're having difficulty. You're going through this incredible stress.” And she said, “You know, I've discovered something remarkable. I can do this.” She could—
Be a caregiver.
—take care of someone who was terminally ill in a painful and heart-wrenching way, and deal with it. But graduate school, she could not. So sometimes I think about the experience that we offer our graduate students and I think, “Well, let’s see. So somebody decided that—” And she went on to a career in hospice nursing, okay? That was the exit path. That’s what she went off to do. Incredible.
The larger point here of course is that success in graduate school requires a much broader skill set than just being good at the research you're involved in.
Right. And also, what does it say about the nature of the graduate school experience if caring for someone who’s dying of AIDS is less of a challenge to your mental health than—
That’s right.
—than being a PhD student?
That comment will resonate across all disciplines. That’s universally true.
That’s right. So the reason I brought this up—first of all, this is an extraordinary person, and one of those moments where you think, “God, being a professor, part of the incredible privilege is you meet these young people who are remarkable along many axes.” And contrary to the image of the ivory tower, of course you interact with them as human beings, and as a result, you get a view of life in some much broader sense. So interestingly, she could sit in my office and tell me about her struggles with mental health issues, and her contemporary could not. I mean, he certainly had—I mean, I would learn that issues did not suddenly appear when he was an assistant professor, that they had been there all along. But somehow, it was neither something that he felt he could talk to me about, nor was there a structure that catered to that. Or maybe I just wasn’t experienced enough to—or educated enough myself to realize that maybe—just recommending that he talk to somebody would have been useful. Obviously, you can’t solve the problem yourself; you're not a trained professional.
But when similar problems would reoccur when I was on the faculty at Princeton 20-some-odd years later, and a student came to me and told me that they were having difficulties, I felt much more capable of knowing something about the university apparatus for dealing with these issues, for assuring them about how—putting the problem of whether you can turn in your problem set—[laugh] that can go very far down the list, right? There are much bigger issues here.
Which speaks probably to your maturity as a mentor and to the fact that more resources were simply available on campus, and there was a culture of openness.
I think that’s true. So in those 20 years, a lot of things happened. I grew up, I learned more, but also institutions grew up and learned more, and devoted more resources to it. And I think it’s probably also true, one has to admit, that Princeton probably has more resources to devote to this per student than Berkeley does. So anyhow, we could spend 15 minutes on each of the students. You were asking about did I communicate—I certainly think that I maintained this view that, “Do what you think is interesting.” And that that’s your best shot. And I still—I now know, “Oh, if you want to finish your PhD on this date, you need to apply for a postdoc on this date.” “By the time you apply for a postdoc, it would be useful if some of those papers were actual papers, instead of ‘in preparation.’”
[laugh]
And I know that it’s my responsibility to communicate that to the students. Not fanatically, not—don’t write bad papers so that you can say you have some. But I'm a little more aware of the calendar than I think I used to be. Maybe you get more aware of the calendar as you get older, for other reasons, too. [laugh] You sense that time is passing. But I still maintain, and I still communicate to the students, that I think the thing which is under their control that has the biggest impact on whether they will continue to be able to do something interesting is their productivity at doing something interesting. That some larger more complicated calculation about how to optimize their chances—I don’t know how to do that calculation, and if that’s the approach they want to take, then they need to get advice from somebody who does know how to do that calculation. I've never felt I did.
So in that sense, I probably passed along to my students some of my own naivete, and—it’s interesting to ask—I mean, we could ask them how well that served them. I don’t know. But, one of my best friends in science, Steve Kivelson, a condensed matter theorist at Stanford now, when I congratulated him on something, and I don’t remember now exactly what it was, his response was that the greatest honor that we receive as scientists, the only one that really matters, is when bright young people come and decide they want to work with us. Because that—
It’s everything.
It’s everything. Yeah, right. And so by that measure, I have been honored many times over. When I look at that first group of students—okay, so one guy had mental health issues. Two guys moved back and forth between jobs in industry and jobs in academia, and started companies that did interesting things, which they found very rewarding, and then came back to do things as academics, and so on. Another one had a faculty position and decided that the sort of entrepreneurial part of being a faculty member, particularly at things that touched biology a bit more, was not for him, and he now is the director of a kind of computational support facility for a very large biomedical research center. Another one had a fine career as a research staff member at Los Alamos. Another one, he’s Greek, and he wanted to be closer to his family. There were health issues, not for him, but for other members of his family, and so he took a position in Cyprus, where he has done quite well. He has been very active—the level of support for science at the University of Cyprus is not so high, so maintaining his activity has meant very frequent leaves to go to various places all over the world.
And then there are three students of that nine who have had very successful academic careers in the United States. So I've mentioned Fred Rieke who is a professor at the University of Washington. Leonid Kruglyak is professor and chair of Human Genetics at UCLA, investigator at Howard Hughes Medical Institute. And Mike Crair is professor of neuroscience at Yale, and the vice provost for research of Yale University. When you discover that your students are now vice provost, you feel a little bit old. [laugh]
[laugh]
But I guess we were both quite young when he was a student, so that’s not completely implausible. So I'm not quite sure if I'm getting everybody. So these nine people, they did extraordinarily well.
To develop the theme, going back to the bumps in the road, you commented perhaps nine students in four years is a lot. And there’s three ways that we can go with that. It might have been a lot in terms of how well you could have served all of those students. It could be a lot in terms of how much that pulled you away from your own work. And it could be a lot, perhaps more relevant to where we're going in the conversation, that perhaps some of your colleagues, some of your more established colleagues, thought, “He’s getting a little too big a little too quickly.” I wonder if that’s part of the equation as we go into these bumps.
So I learned, somewhat indirectly, that there were criticisms of how many students I had. I cannot remember a single conversation in which a senior colleague directly talked to me about this subject. So my feeling is—
For better or worse, you mean? Allies or enemies.
Nobody. I know that that criticism was leveled at me. I will point out that many of them had fancy fellowships, and so in fact I managed to have all these students without being particularly good at raising money. So did that create jealousies? I will say that it genuinely didn't occur to me to worry about that. This is again speaking to the naivete.
There it is.
I've thought long and hard about, did I somehow deep-down know that that was—? What am I supposed to do?
But Bill, who would have been the mentor figure to you at that point in your career, to put the proverbial hand on your shoulder and say, “Bill, slow down a little here. Maybe don’t take another graduate student this year. It’s not a great look.” Even if items one and two—you're serving them well; they're serving you well—but just on the basis of optics and politics—
Right.
—who would have been that person?
There wasn’t a person. This was a period in which I became an assistant professor without anybody being designated as your mentor.
But ironically, you came up yourself at Berkeley. It would seem like if there was any place where there would be that faculty member to serve in that role for you, it would be the place where you did your own graduate work.
I had a joint appointment in physics and biophysics. I would say in the biophysics department, I felt I could talk to—my PhD advisor was still there. He was careful about not being too hands-on with me as a junior faculty member, which of course is a good thing. Geoff Owen, with whom I had interacted, who sat on my thesis committee, and with whom I would go on to collaborate, in fact, with Fred Rieke as part of the team, he would become chair for some part of that time, and we had a very good relationship. So we could talk. This issue of how many students never came up, and my impression is that that was a resentment that resided in the physics department, and that was never communicated to me until very late.
So as I say, there were plenty of oppor…I would have happily—well, okay, I don’t know, would I have listened if somebody had told me? Maybe not. Who knows. Nonetheless, if you think one of your junior faculty members is doing something that isn’t a good idea for their development as a faculty member, then you need to tell them, while there’s still a chance for them to do something about it, right? Anyhow, it’s more fun to talk about the students and the science, but I made this comment about there being bumps in the road, and it took me some time to decide what level of detail I was going to give you. And I would like to—
Let’s get to it.
So as I say, in ’87/88, the physics department’s view is, “He doesn't need to be evaluated, because he’s on leave.” The deans come back and say, “I don’t know, we think he’s doing a great job. Move him up a couple of steps in the ladder.” Then ’88/89—“Well, you've just received an accelerated promotion, so—and in any case, it’s the off year, so we don’t need to do anything.” And I bought that.
And then I guess we're in the Fall of ’89, and it’s time to evaluate again, and the Department does something, and I don’t remember the exact time at which—probably the evaluation comes in early calendar 1990, and the department chair of physics, who was Buford Price, calls me into his office, and he basically tells me that it is the opinion of the physics department that I have not been particularly productive, that I have not done anything especially interesting, and says in so many words that since I have a joint appointment, it is of course possible that the other department will have a different view, and that before the really hard issue of tenure comes up, if I would prefer to move all of my appointment to the other department, that would probably be possible, and that I should not view that as a failure.
Bill, I want to understand the dynamics of your appointment here. In order to gain tenure with the joint appointment, does that mean that there are essentially two masters that you have to serve and impress?
Yes.
Also probably on the theme of naivete, did you appreciate this upon accepting this arrangement?
No. No, but as you will see, in the way the story turns out, [laugh] it is difficult for me to—in the end, I have a rather odd view of this, because in fact, I am saved by this fact. So Price gives me this evaluation, and he sort of ticks off that, “Well, yeah, you did this, but people weren’t impressed by such and such, and you did that.”
And your sense is that Price is legitimately representing a significant group of people? He’s not just speaking for himself?
I don’t know. I mean, I wrote a report [laugh] like you're supposed to do. I assume the physics department did its job. The department chairman calls me in and he basically tells me—I mean, I don’t think there is another interpretation—which is, “You're not going to get tenure in the physics department. Your option is to move to the other department.” And I go home shattered, right?
Did you have any sense if there was any crosstalk with the other department? Was there any coordination before coming to you? Because that’s a big question, in terms of what he might have thought your prospects for tenure in the other department might have been.
I had no idea whether he had actually talked to them or not. It would seem natural for me to then go talk to senior members of the biophysics department, whom I knew very well. And I honestly can’t remember what I did. The next thing that I remember—I mean, I really was shattered. Especially since it’s the Fall of ’89, and I think that we've done a bunch of things that—it’s going to take another year-plus for everything to come out, but I think we've been doing amazing things. The next thing that I remember is Geoff Chew, one of the great figures of Berkeley physics, complicated figure but a great figure, he is at that point the dean of physical sciences, and he says, “Bill, do you have some time to talk?” And I say, “Sure.” By the way, I don’t remember the timing of this conversation with Geoff relative to my visit to NEC and Princeton. These all come in that same period. So I'll tell you—
Your talk with Geoff is after the devastating talk with Price.
After the devastating talk with Buford Price, yes. And so he comes and he sits in my office and he says, “You know, you have a joint appointment, and what you may not realize is that in the classification at Berkeley, physics is of course a physical science, and so the chair of the physics department reports to the dean of physical sciences”—Geoff—“whereas biophysics is a biological science, and so it reports to the dean of biological sciences”—Professor Beth Burnside, wonderful cell biologist. And it is only at that point that the two reports come together. So he and his counterpart, Dean Burnside, got together. And I remember Geoff’s words, which were “Had it not been for the names on the files, it would have been difficult to recognize that the two reports were about the same person.”
And so I'm just sitting there, right? I don’t know what to think. And he says, “It is our responsibility as deans to—we have to—there’s two of us, but there’s only one of you, so we have to reach a conclusion about what to do. And what we figured out between us is that we can only reach a conclusion if we learn more about what you're doing. And so”—he gestures to the blackboard—“Can you tell me what you're doing?”
So now, when I think back about this—my feeling was, “This is fantastic. One of the most senior and distinguished members of the physics department, who also happens to be the dean, wants to sit in my office with me and talk to me about what I'm doing.” Which, by the way, no other senior faculty member in the physics department had bothered to do, at this point.
But you're beyond saving your job, at this point? This is just an intellectual engagement?
That is my view, right? When I think back, I realize, wait a minute, the dean interviewed me about my scientific work in a way which would have consequence for my employment, sort of without any notice. I mean, he didn't say, “I want to see you on Thursday so that we can resolve—” No, he just walked into my office and—and I don’t think it was—so again, sort of naivete, right? [laugh] Maybe if I were a little less naïve, I would have said, “I think I should go home and prepare this before I have this conversation.” And that didn't occur to me at all. And to be honest, I don’t think it occurred to Geoff, either. I think he actually—I mean, he came in as the dean, but at that moment, I perceived him to be my colleague. And as I say, this might be naïve on my part; he just wanted to know what I was doing. So I proceeded to tell him what I was doing. And I was really excited about all the things [laugh] we were doing. And as I say, it covered a lot of territory, from ideas about physical principles that shape neural coding and computation, to these very molecular things. I want at some point to talk about the fantastic interaction I had with Judith Klinman, who had been doing these amazing—the beginning of these amazing experiments showing that tunneling was important in enzymatic proton transfer. And my first PhD student was working on this problem.
So I was really—I thought we were just going gangbusters, and so I told him what we were up to. And at the end, he asked questions. You know, it was a conversation between colleagues. And he thanked me. And he said, “You've given me a lot to think about.” So yeah, I don’t remember—he should have said something that related to the fact that he was there because of job issues, but I don’t remember what he said about that. What I remember is sometime later, the physics department chair, the same Mr. Price, asks if I have some time to chat with him. And so I go into his office, and he is holding a letter in his hand. And the first words out of his mouth after he closed the door were, “I am obligated to read this to you, but I am not obligated to give you a copy.”
This is like out of a movie.
Yeah. He needs to meet the guy from Sandia who told me he could make it worth my while.
[laugh]
Anyhow. “Okay.” [laugh] “What the hell?” “Sure,” right? And he proceeded to read me a letter from the College of Letters and Science, the Office of the Deans, which explained that my appointment was renewed, it was with some step increase, I don’t remember what. That it explicitly chastised the physics department for having done a sloppy job of evaluating me, and instructed them that next year—since they did such a bad job, next year, they would not be allowed to evaluate me without getting outside letters, which basically said—I mean, the letter, if you read between the lines, which I couldn't do, because I wasn’t actually looking at the letter, but it basically said, “Next year, you should consider him for tenure, and you better get that one right.” That was what the letter said.
This is all Chew. This dawns on you that this is all Chew’s doing.
I've told you everything I really—well—
What else could it be?
Yeah. And obviously I presume—I infer from what Chew told me that the evaluation from the biophysics department was positive, maybe even enthusiastic. And so that was part of what gave rise to the need for that conversation. Yes, I believe that—I would leave Berkeley not long after this, but I believe that the reason I left with my dignity intact was because of Geoff Chew.
Bill, let’s look for the bias in here. Is the allegation that you're not being productive—it’s significant that it’s coming from the physics department, one. And is the interest that Chew shows in you—are both of these stemming from a place where—for example, if you were working on string theory, and you were as productive as with your own research agenda, would Price have ever had that conversation with you? Would Chew have ever requested this level of detailed explanation for exactly what you were working on? In other words, how much of it is simply that they didn't understand what you were working on, and they weren’t accurately gauging how productive you actually were at whatever it was you were doing?
So look, my view of this is—it also comes against a background in which no senior faculty in the physics department took time to talk to me during the time that I was there. The physics department was very much organized into groups, as physics departments tend to be. I was not part of a group, so that—I mean, I did my best to make an intellectual community for my students and myself, but that was entirely on me. There was no support for that. I have often wondered how much do I want to know about the department’s reasoning, and I'm pretty sure I don’t want to know who all the actors were, because I can’t imagine that I would feel better about anything by knowing who said what.
I would eventually learn that there were discussions in the department of whom should we ask, and very quickly anybody with whom I was perceived as having a friendly relationship was crossed off the list on the grounds—“Well, they're just friends.” And the one unassailable source of expertise in the field, namely John Hopfield, somehow could never be reached. So as I often say when we talk about bias in letters of reference, if you cross off—in a small field, if you don’t allow yourself letters from their friends, you only have their enemies. [laugh]
[laugh]
And, you know, I had made a couple. I had scientific—there were people who I thought had done really interesting experiments, but I had different views of what they meant. And we could talk about that, but I don’t find those things very interesting. I think that part of my naivete—I was not naïve in thinking that the place of biological physics in the physics community was precarious. I knew that. I knew that that wasn’t a well-established subfield. I took at face value that by hiring me, the physics department at Berkeley had decided that it was the field of physics that they wanted to have in the department. And evidently, they had not finished that conversation amongst themselves before hiring me. Somehow, they managed to hire me without resolving that issue. And whatever concerns they had would come back and they would come back in the personal form that somehow I was inadequate in doing my job as opposed to—so for example, if the physics department had come to me and said, “On reflection, we don’t want to do biophysics in the physics department, and that is not intended as a judgment of you”—
When they say, “We don’t want to do biophysics in the physics department—"
Well, they didn't say that; they said, “You're not doing a good job.”
No, but my question is, to what extent is that statement or that perceived statement about you—in other words, how much history did they have with biophysics before you?
It was presented to me as a statement about me. It could be, right? It could be that some combination of—that biophysics is fine, but the particular things I was doing was not what they thought was interesting. I don’t know, whatever. Anyhow, I've told you what I know. And what’s important to know is that in parallel with all of this, as this is unfolding in Berkeley, I get a note from somebody who I had known slightly from the sort of neural networks world which intersected mine, a guy named Eric Baum, who had moved to this new thing in Princeton called the NEC Research Institute. And he said, “We're recruiting, and we’d like you to come out and give a seminar.” And I remember telling him that I wasn’t looking for a job, so that must be before—I can’t imagine that I would have been as straightforwardly disinterested in the idea that it was an interview [laugh] if the request had come after my first conversation with Price.
Bill, just at this point for a second, before we're even thinking about interviews and jobs, the big question here is, why not just retreat fully into biophysics at Berkeley? They're happy with you. It might be sort of intellectually your natural environment. Why is that not the immediate response? Why even bother with going into another discussion with Price and whatever letter he has in his band?
Why would I want to be a member of a club that doesn't want me?
[laugh]
Joining a club that doesn't want me as a member? I can’t remember exactly how that goes.
“I’d never be a member of a club that would have me as a member.”
Yes. Anyhow. So yeah, there are variations on this. I remember Price’s phrasing, that were I to retreat—I mean, he didn't say “retreat”—that were I to transfer my appointment fully to the biophysics department, that I should not view that as a failure. And I remember answering that I would view that as a failure, because I thought that what I was doing was physics. And so I think that already—
Although isn’t your entire research agenda built to not put up those walls? Why would you ever say, “I thought what I was doing was physics?” Well, of course it’s physics, but it’s also biophysics. It’s not one or the other.
Right. But I think that I—
You were looking to assert that biophysics is real physics; that’s part of it.
Exactly, exactly. So I felt very strongly that what I and a handful of other people at the time were doing was physics in the strict sense, right? Not the application of physics to problems outside the field, but rather it represented a new direction for physics itself.
The irony here, of course, is that you're on your way to Princeton, which is probably the classic example of a physics program coming to biophysics quite late in the game.
Well—okay.
We'll come back to that. I just wanted to put that in here.
Let me put things in historical order. So I get a note from Eric Baum inviting me to come out to NEC. And I sort of disavow any notion that this is a job interview, but I've never been in Princeton; I would be happy to come for a visit. I go out there for a visit. So this is happening—all these things are happening at the same time, and I'm not exactly sure—as I say, I know that the invitation must have come before the first conversation with Price. And probably the visit came before the first conversation with Price. Maybe. Or maybe it’s around the same time. I'm not sure about that. I'm guessing—there must be a copy of the report that Price was working from somewhere in the files.
So I go out and I think, “Well, look, I'm not going to go all the way across the country for a day.” So I send a note to Bob Austin, who I knew because the community of physicists interested in biology wasn’t that large, and I had been fascinated by some of the experiments that he had done over the years, and was even working on things that might or might not have been related to them. And so we had corresponded about a few things over the years, and so I wrote him a note and I said, “I'm coming out for this thing, and can I stop by and see you?” And he says, “Sure, come by. Spend a day at NEC. Stay for an extra day. Come out to the university.”
How well were you aware of what was going on at NEC before all of this? Was it on your radar at all? Nothing.
Zero. I didn't know it existed. What’s relevant to this discussion is not my experience in visiting NEC, although that would be of course very important for what happens later. But let’s talk about that when we talk about the period of my being at NEC. The important thing that happens is I go to visit Bob Austin, we hang out in the lab for a little while, and then it’s lunchtime. And he says—as was very common among physics faculty in those days; it’s less common now—well, of course, now, nobody’s on campus, but there are more places to eat now—but, “Let’s go eat at the faculty club.” Prospect House.
So we troop up to the Faculty Club, and Bob and I are chatting away, and David Gross walks up to us. Now, I knew David because David had come to visit in Santa Barbara, and [pause]—as will become clear, I am extraordinarily grateful to David, because he was incredibly supportive. But even he would admit that he was not always the most attentive to the social graces. So he more or less dismissed Bob, right? [laugh]
Bill, there’s a theme that’s developing here. From Geoff Chew to David Gross, there’s something about bootstrap and string theory, that these people like you.
It is an interesting fact—
[laugh]
—that three people who played a key role in my development as a scientist are actually in a line. [ed. Gross was Chew’s student and Wilczek was Gross’s student]
Right, right.
So David basically dismisses Bob and sits down with me and says, “So what have you been doing?” And, you know, we eat our lunch, we finish our lunch, and we keep talking. And the Faculty Club empties out, because—we sat and talked for three hours!
Wow.
Now—
For historical context, it’s important to note that David Gross is—he does not suffer fools. So for three hours—
Yes, and I knew that. I knew that. But you know, what really stood out for me was I realized that none of my senior colleagues in the physics department at Berkeley had ever had that long a conversation with me. And so what I realized—and so now I think that this must be after the first conversation with Price—as I say, this is all happening in the same semester—I realize, “You know what? It’s not me. If David has three hours for me, to talk about physics—
That’s right.
—and he takes what I'm telling him seriously as being interesting science”—you know, we didn't have a detailed conversation about where the boundary of physics is; we just talked about the science, and that was that. “If he thinks this is worth his time and my colleagues in the physics department have not even taken the time to find out, then it might be that I'm just in the wrong place.”
And Bill, it’s a purely speculative comment, but I asked before, and you said that you didn't really give much thought to it, but there is that issue, when you join a faculty where you used to be a graduate student, there is always that intangible perception that you never, in the eyes of your colleagues, grew up from your role that they originally knew you as. There’s no way to quantify that, but it could very well simply be part of the equation.
It’s also true that I was probably—there’s not being aware of political issues, and there’s also maybe not having the sensitivity that you're supposed to know your place. So there’s a fantastic interaction I had in a faculty meeting at Berkeley, which is really one of my favorites. I had enormous admiration for Dave Jackson. For example, there was this fantastic thing when they finally got money to name something after Oppenheimer, which was the Oppenheimer Center for Theoretical Physics, which was basically a conference room at that point. Now, it’s a somewhat larger enterprise. And Dave stood up and—and maybe he was chair, then? I can’t remember. Or he was just in charge of shepherding this thing along. No, I think he had been chair long before. He was just shepherding this thing along. And he said, “You know, I've been at Berkeley a long time. My vision of the history of Berkeley physics is there’s the experimentalists led by Lawrence, and the theorists were led by Oppenheimer. It’s so interesting, you know—there are so many things named after Lawrence, and nothing named after Oppenheimer, and I always wondered why.”
[laugh]
It was fantastic. Because of course, Dave knew perfectly well why!
Of course! [laugh]
And so he was just poking. And I loved that about him. And once, in a faculty meeting—and I cannot for the life of me remember what the subject was that we were discussing, but the discussion was threatening to become unbounded, right? You know how things can go. And so somebody made a kind of parliamentary maneuver to move that the discussion be restricted in some particular way. And at that point, the chair, who might well have been Price again at this moment, starts passing out pieces of paper for us to all vote. And I said, in what must have been a very obnoxious and exasperated tone, “I don’t understand why we need secret ballots for a matter of parliamentary procedure.”
[laugh]
And Dave Jackson turns around and looks at me and he says, “Bill, it’s because some junior faculty might be shy to express their opinion in front of their senior colleagues.”
[laugh] Hint, hint. [laugh]
It was great, right? He didn't say, “I think you're wrong.” [laugh] He just—“Take it easy.” Right? That is, by the way, the only hint I ever got, from any senior faculty member in that department, that I should be careful about politics. That’s the sum total. So if I extrapolate wildly [laugh]—and you know, also knowing myself, it is possible that I was not the easiest of junior faculty members, and that I rubbed some people the wrong way. But, I don’t know, I was doing my thing. I had these students who were writing interesting PhD theses. They were winning fellowships. Anyhow.
So I figured out from all of this that—so that conversation with David was just unbelievably important, because that was when the light went on that said, “No.” That basically, as I said to you before, I think I went to my fantasy of the place, and the reality was very different, and I had to figure out what to do about that. So not long after that, the folks at NEC made me an offer. And of course I went to my department chair to tell him that I received this offer. And this is in the intermediate period between the first conversation and the second conversation. And I remember him saying something like, “Wow, that sends a very different message than the one that we delivered to you a month ago” or something like that. “They have a different view of you than we do,” or something like that. So yeah, that was the Spring of ’90.
When did this begin to morph into a formal offer that became attractive to you?
So I would then get a formal offer from NEC, and I had to figure out what to do. And I honestly didn't know. I had grown up in Berkeley. I had grown up intellectually. I mean, I grew up in San Francisco, but obviously feel that I had grown up intellectually in Berkeley. And so the dream was always to go back there as a faculty member. It never occurred to me that I would work for a company. I always thought that I would work for a university. It never occurred to me that I would live on the East Coast instead of the West Coast. Princeton was really a small town, whereas Berkeley was a city.
There’s the two-body problem as well to consider.
Well, and that has interesting structure, because we were starting to worry about the education of our children in Berkeley.
How old were they at that point?
In the Fall of 1990, our son would start kindergarten. Our daughter was younger.
Big year, sure.
So it was all about to hit the fan. And the Berkeley public schools were in decline, and I would say in part because—well, okay, the Berkeley public schools were in decline. Like many places, you have a community of people who profess very progressive values, who have done well for themselves, and who don’t understand that supporting public schools is an essential part of turning those values into reality.
It was also that the aftermath of Proposition 13 was finally being felt. And Proposition 13 in California, which limited property taxes—so I guess for the record, for people who don’t know this, in the United States, I believe fully across the United States, but certainly in California at the time, and in New Jersey, a substantial part of the support of public schools comes from local governments, through the collection of property taxes. And in the late 1970s, there was a major ballot initiative to cap the ability of municipalities to raise property taxes. This was called Proposition 13.
It was in fact an initiative not of the association of poor grandmothers who were on fixed incomes and won’t be able to stay in their houses if taxes go up. It was an initiative of the large real estate concerns, publicly led by somebody named Jarvis. And I know about this because I lived through it, but also, my father was an employee of the city of Oakland, and I would eventually find, among his papers, the letter which explained that he had a choice of retiring or being fired in the wake of Proposition 13. So he retired and was brought back as a consultant, which shows you—how well it all works.
And the immediate aftermath of Proposition 13 wasn’t so bad, right? Because the state was rich, so all the municipalities went to the state, hat in hand, to make up the difference, and that worked. But if you poise yourself at the edge, eventually—then I think there were ideas which eventually—the state was forbidden from running a surplus. So that meant that if you had cyclic—you had economic cycles, there was no way of using the positive parts of the cycles to buffer you against the—anyhow.
So, finally, the bill was coming due, and public schools in California were declining generally, and in Berkeley in particular. So that was an issue. This problem didn't exist in Princeton. So that certainly played a role. Eventually, there would be caps, not on property taxes, but on the budget increase of the schools in New Jersey. But in any case, that would happen later.
So yeah, so I think in terms of family—so look, there’s a scientific—there’s one more issue about scientific life that I have to talk about in the decision to move, but in one stroke, you could change almost everything about our lives. We’d change coasts. We’d go from working at a university to working in an industrial research laboratory. We’d go from living in a city to living in a small town. Everything was going to change.
Did you see NEC as—I don’t know what the right word is—an entrée to a faculty position at Princeton? Did you really look at it as a self-contained offer?
No, so the offer was to—so being at an industrial research laboratory, they actually provide direct support of your research in an amount which is knowable in advance. [laugh] And it was generous. And furthermore, when they drew up the plans for the place and they looked at, well, what are we going to do in the broad category of physics, it is an interesting feature of the wisdom of the people who drew the plan that they said, “Well, we should do some condensed matter physics, and we should do some optoelectronics and we should do some materials science. And, you know, if we only do those things, which everybody else also knows are good things for companies like us to do, then we’d have to be awfully lucky in order to really make a name for ourselves. So we should take one quarter of the physics division”—there was also a computer science division—“and we should take one quarter of the physics division”—and it was more or less in the original organizational chart labeled “something else.” They had some more polite version, but it basically was “something else.”
What were your impressions of NEC, Bill? Having known nothing about it, did this seem like minimally a lateral move as far as the science was concerned?
So it would become clear to me that what they were offering—so to finish the thought that I just was getting at, they also—because they had set aside one quarter of physics to be “something else,” there was space to grow a biophysics group. And they had invited me out, and they made me an offer. And they had been talking to Albert Libchaber, and not only did they make him an offer, but he made it clear that—by the end, the offer for him was, he was being offered a faculty position at Princeton, but he was also being offered this position at NEC. And his idea was that he would use—if you're going to have—particularly if you're an experimentalist, if you have two groups, you have to figure out why. You don’t just take what you're doing and split it in half, right, because that just makes your life harder. So the goal would be to do something at NEC which was different. And at Princeton, he set up—at least at the beginning—to do more things in fluid mechanics and turbulence. And at NEC, he set out to try and explore biophysics. That was his idea. And so what they realized was that not by design, they had ended up making offers to two people who both had interest in biophysics, one a very senior and distinguished experimentalist, one a relatively young theorist, and so maybe the “something else” should be biophysics. And so I realized that there was suddenly the opportunity to build a group, which wasn’t happening at Berkeley.
A group of peers, not just a group of graduate students and you.
Yes, that’s right. And in fact, that’s an important feature of the structure of industrial research laboratories, at least in—so all of the management had come from Bell Labs.
Which means, Bill—if all the management that comes from Bell Labs—that the research culture baked into that is basic science. In other words, you are already primed not to be thinking about corporate bottom lines and applying your research to any kind of—
Absolutely, absolutely.
So that much you knew. That was clear to you, that basic science was the name of the game.
Right. And there was a structure—of course, any industrial research laboratory, you're an at-will employee. There’s no tenure. On the other hand, the structure and what we were told about how things were going to be evaluated and so on, my reading of it was that we had ten years. That obviously if they were just lying to me, then—well, if they were just lying to me, then I have other problems.
Bill, we don’t have to get specific with numbers, but I'm curious if they made an offer that was tantamount, via salary, to a promotion.
Oh, absolutely. And the same chairman, Mr. Price, when he would eventually announce that I would be leaving at the next state-of-the-department colloquium, in the Fall of 1990, his description of the opportunity to which I was moving made explicit reference to the fact that I would be given a much higher salary. Just to give you a sense for his tastefulness. He did not choose to mention the scientific resources that I would be given.
[laugh]
So my feeling was that what I was being offered was a ten-year position. Who knew what was going to happen after that? I mean, we could discuss this. And so my feeling was not that I'm doing this because then maybe someday I'll be a professor at Princeton. My feeling was that given the level of support that I'm going to be given for a decade, that, if at the end of that, I can’t get a job to do what I want to do, that was my fault and not theirs. And I remember thinking that very explicitly. Now the thing that was not clear to me and was the sticking point in my own mind, was I like teaching. I like having students. Now, moving to NEC with more financial resources, it would become possible to have postdocs. So that would be a new chapter, which was great. But I liked interacting with students. And I didn't know what I was going to do about this. Of course, there is a university nearby, namely Princeton, and—
But of course, a place like Bell Labs would have encouraged graduate students and postdocs.
Absolutely. But of course, it’s not—you need the partner. So it turned out that that summer, when we would have to decide what we were going to do, I had already agreed to go spend a month in Aspen. And I talked to everybody, what they thought. Now, Aspen is probably—well certainly was then, perhaps, a bit East Coast heavy, and so it’s likely that the view I got was a little biased. But, the really—so there were a number of interesting conversations.
The really important one was I was wondering whether it would be possible to occasionally teach a course at Princeton, maybe teach a special topics course for the graduate students or something like that. And I thought, “Well, okay, I know what I have to do.” I've already told you about the conversation with David Gross when I first visited. “I should just ask David whether this is possible or not.” So we're in Aspen, and there’s one of those picnics on the lawn, or barbecues or whatever, and David has just arrived, and I see him across the lawn, and I walk over to him. And I'm sort of getting ready to—“Hi, how are you?”—and I'm getting ready to ask him about this. And he puts his arm over my shoulder and he says, “I understand you're going to be moving to Princeton” and basically offers that if I would like to teach a course or have graduate students, that would of course be fine. Now, typically of David, it’s not really up to him; it’s a departmental decision. [laugh] But never mind, it was enough for me. And so that was actually the last piece. There were many other things that had to be true, of course, but—
But it sealed the deal, having this assurance.
Yes, yes. And so I am aware that David is not an easy man, but I have great fondness for him because of these two conversations. I mean, we've had many conversations since then. [laugh] But those were really important. So yeah, so that’s the saga of how I would come to leave Berkeley. And it has, like all moves, there are pushes and pulls.
Besides this conversation with David, how formal an assurance did you have that you would have an academic life on the Princeton campus?
I just told you the conversation. [laugh] What? Did it occur to me that I should get a letter?
[laugh]
No, of course not!
You still haven't learned.
I still haven't learned, no.
[laugh]
Even after the episode—right. Yeah, I don’t know. Also, did I know what it would mean? I mean, how many times—I could look it up—how many times did I actually teach a course? How many PhD students did I actually have? I had a few, over that period. They were marvelous. We can talk about that when we get to the right part of the story. But what would it mean to be a staff member at an industrial research laboratory and have some sort of visiting position at the university that made it formally possible to have PhD students? I didn't know!
Right, and there’s a real range there, between being an adjunct who shows up on Tuesday afternoon for a seminar, and being a member of the faculty, even if not a tenured member.
And so I had no illusions that I would be a member of the faculty, and I wasn’t. But I was made welcome when I was there. I taught a biophysics course a few times. I did have PhD students. I don’t know, get invited to the cocktail parties? I don’t know, what’s the [laugh]—what do you measure? It felt—it was what it needed to be.
I was very happy at NEC. We built a fantastic group. And we're not done with the science at Berkeley, so I want to be sure we get back there. But yeah, so that’s the episode. There’s this shattering evaluation from the physics department. There’s an invitation to go—so the sequence is—an invitation to go to NEC at a moment where I certainly don’t want a job. This shattering evaluation from the physics department. The visit in which I found NEC itself to be quite—the number of people who were actually there fit around the seminar table. It was an empty office building, with some quite interesting folks in it. The vice president for physics, this guy named Joe Giordmaine, who—longtime—spent his career at Bell Labs, where among other things, he invented the optical parametric amplifier, which before there were dye lasers, that was actually the only tunable coherent source. And he had been a student of Charlie Townes back in the early days of masers, and I believe that Joe was the first person to use a maser as an amplifier for making an astrophysical measurement. So he was a scientist of considerable substance himself. And let me give you just those quick impressions. There are deeper things to be said about the personalities. The president, whose brain-child it really was, was a guy named Dawon Kahng who—if I remember correctly, is the person who figured out how to get the oxide layer into MOSFETs. Which made him, as he said, the holder of the patent for the object which has been produced the most times.
[laugh] Wow.
If you think about it, right? [laugh] It’s a little bit of a cheat, right, because you make them a million at a time.
[laugh] Right.
But, I mean, he had a sense of humor. And there were people there doing quantum optics and soft matter. It was an interesting group. And, at the very beginning, there was a physics division and a computer science division, but in order to have a seminar in which you would interview somebody who was coming for a visit, everybody would go. Because if not everybody went, the room would feel empty. [laugh] So I found myself talking to people who worked on things in computer vision and all sorts of stuff. Theoretical computer science. It certainly made an impression on me as a place—that they were serious about trying to do something different. And that was appealing. And I felt like by the time the offer was fully in hand, I was really being told, “We believe in you. Come here. Build something.” And I think I started to say—I said “build a group,” and you made the point “build a group of peers, not just a group of students.”
An important feature of industrial research labs, in the Bell Labs mold at least—I don’t know if this is universal—is that you don’t have big groups. If the thing you want to do requires more resources, you should team up with somebody else. Which, from the point of view of the lab, increases the chances that if this turns out not to be interesting anymore, at least one of you will figure it out. But it also meant that, yeah, instead of—I can remember thinking, when I was an assistant professor at Berkeley, and I got all the students who were working with me and who were curious together, and we had a seminar, and we had a visitor who would give the seminar, or we had journal club, I’d look around, and I thought to myself, “This is lovely. There’s a dozen to 15 people in the room. The only thing wrong with it is that I'm the only grownup.”
[laugh]
And, you know, you can go look at the data; you will find that I became an assistant professor when I was fairly young, so grownup might have been stretching it. I wasn’t that much older than some of the students. And I can remember thinking that it would be wonderful to have this level of community around issues that I think are intellectually interesting but I wasn’t the only grownup in the room. And NEC seemed to offer the prospect of—it was going to take some steps, but I could envision that there. And yeah, I was leaving one of the world’s great universities to go to an empty office building in suburban New Jersey.
[laugh]
I mean, when you get right down to it. Now, of course, you could say, “Oh, well, you knew it was going to be okay, because they had all these resources.” I don’t know! I had a couple of people who were telling me stuff, in this empty office building. So yeah, I don’t know, I guess it took some courage to imagine what it would be like. Anyhow, so that’s the sequence.
Let’s get back to the science at Berkeley, to close that loop.
Yeah. So let’s see, what are major things that happened while I was there?
And this I assume you're talking about your lame duck period at Berkeley.
Well, it seemed like everything came to—was coming—things were coming together and falling apart at the same time, right?
[laugh]
Which is part of why it was so weird. I thought we were doing great. [laugh] So what was growing out of the collaboration that started with Rob in Groningen when I was a postdoc and then had this addition from my interaction with Tony Zee on the theory was we had this new approach to understanding how to—so in most neurons in the brain, information is represented by sequences of discrete action potentials, each of which lasts about a millisecond. And so to not burden—this is the American Institute of Physics, not the American Institute of Neuroscience—so to leave aside a long, complicated history of ideas in neurobiology, what we did was we understood that you could—that if this sequence of discrete action potentials was encoding a continuous dynamic signal at their input, at the input to the system, you could decode and recover an estimate of that continuous signal. And so I think I talked a little bit about this maybe the last time.
And so what happened at Berkeley was, we made that work, with the data that Rob had collected actually some years before. And that was work by two PhD students, Fred Rieke and David Warland. And not only that, but when you actually looked at how precisely—so the system that Rob was studying was one that encoded—that computed motion across the visual field. And so you could ask how precisely could we estimate the trajectory of motion across the visual field by looking at the output of this one neuron. And so we did this decoding strategy. And now, you had an estimate of the actual velocity of movement.
Before this, in some sense, you're stuck with the question of—there’s somebody out there waving his hand moving around, and you see this sequence of action potentials, and you ask yourself, “Well, was that the right answer? How do I know whether it was the right answer?” But now what we could do is we could decode that information and come back and give you an estimate of how fast the hand was moving, as a function of time, and show that that precision was within a factor of two of the limit that’s set by noise in the photoreceptors, which was mostly photon shot noise, plus the blur due to diffraction through the compound eye.
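The decoding idea can be made concrete with a small sketch. The following is a minimal, self-contained illustration in Python of linear spike-train decoding: estimate the stimulus as a sum of a fixed kernel placed at each spike time, with the kernel chosen by least squares. The synthetic stimulus, the toy threshold-crossing encoder, the bin size, and the kernel length are all hypothetical stand-ins for illustration, not the published fly analysis.

```python
# Minimal sketch of linear spike-train decoding on synthetic data.
# All parameters and the toy encoder below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dt, T, lags = 0.001, 20.0, 100            # 1 ms bins, 20 s record, 100-bin kernel
t = np.arange(0, T, dt)

# Toy continuous stimulus, and a toy encoder: spike whenever a filtered,
# noisy version of the stimulus crosses a threshold (stand-in for the neuron).
stim = np.convolve(rng.standard_normal(t.size), np.ones(50) / 50, mode="same")
drive = np.convolve(stim, np.exp(-np.arange(30) / 10.0), mode="full")[: t.size]
spikes = (drive + 0.5 * rng.standard_normal(t.size) > 1.0).astype(float)

# Design matrix: the spike train at a range of lags around each time bin.
X = np.column_stack([np.roll(spikes, k) for k in range(-lags // 2, lags // 2)])

# Least-squares kernel, then the decoded estimate s_est(t) = sum_i K(t - t_i).
K, *_ = np.linalg.lstsq(X, stim, rcond=None)
stim_est = X @ K

unexplained = np.mean((stim - stim_est) ** 2) / np.var(stim)
print(f"fraction of stimulus variance left unexplained: {unexplained:.2f}")
```

In the stationary Gaussian case this least-squares fit is the Wiener filter, and the residual error of the reconstruction is the kind of quantity one would then compare against a physical limit such as photon shot noise plus diffraction blur.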
And so in one stroke, we had given a new approach to thinking about the neural code, so how information is represented in the brain, and demonstrated that the computations that are being done by the brain, in the many layers between the photoreceptors and the neuron that Rob was recording from, in fact are essentially performing the optimal estimate, at least under the conditions that we were looking at. And so that was a big step forward, and was I think the thing that—I don’t know—probably the first really important thing that I did, or thing that I look back on as being very significant. At the same time—
You did so much, Bill—wait, hang on a second—you did so much up to this point. Why does this stand out as this—you give it this special significance as, quote, “the first real important thing I did.” What are the measurements there?
I feel like all the ideas that were there were ideas that had been percolating, and this brought them all together in one place. And the things that grew out of it. This whole idea of being able to decode information in spike trains is something that kind of swept across—it became part of the standard toolkit for doing a certain part of neuroscience. It set an agenda to ask, “Well, what is the structure of the optimal computation?” It showed that these ideas about optimization could be carried—it’s not just when you're counting photons one by one, but even when you're estimat…when there’s lots of photons, signal to noise ratios are actually reasonably high, but you're trying to estimate something that’s deeply embedded in the signals that you're collecting.
So yeah, in some sense, it was so significant because—it finally appears eight years after I first arrived in Groningen. It is finally delivering on those first conversations that Rob and I had. I mean, there was a paper which I'm very fond of, that appeared before this, but certainly didn't have the impact that this did, in part because I think this was simple. We had understood things better, and they became simpler.
Bill, given that this is a coalescing moment for all of your research, I wonder if the significance of this dawning on you was sudden or gradual. Because I can actually see it going both ways. [pause] Obviously now you're speaking retrospectively—
—and it’s very hard for me to tell. It’s also complicated, because one of the things we did was then immediately run around campus and talk to people who were recording from other neurons in other systems and ask, did it work in those systems too? And so both David and Fred got involved in experiments that were happening in the frog auditory system and the frog vibrational sensors and the mechanical sensors in crickets, and a whole literal zoology of different systems. And in fact, the answer was yes, it worked in all these cases.
In particular, we started to think about not just how accurately things were being done, but how much information they contained in bits. And that was the beginnings of showing that the amount of information that was being conveyed was close to the limits that are set by the entropy of the neural responses themselves. That would take a couple years to unfold, but the nucleus of it came from then.
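The "limits set by the entropy" can be stated compactly; this is just the standard information-theoretic bound being invoked, written out for reference:

```latex
% Mutual information between stimulus and spike train: total spike-train
% entropy minus the noise (conditional) entropy, so it can never exceed
% the entropy of the spike train itself.
I(\mathrm{stimulus};\mathrm{spikes})
   \;=\; S[\mathrm{spikes}] \;-\; \big\langle S[\mathrm{spikes}\mid\mathrm{stimulus}]\big\rangle
   \;\le\; S[\mathrm{spikes}] .
```

Saying the measured information rate is close to this limit is saying that the noise entropy is a small fraction of the total entropy of the neural response.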
Fred collaborated with Geoff Owen, and we thought about the problem of, what is the structure of the computation required for optimal estimation in the case of motion? It’s actually a bit complicated. Much simpler was the question of what if you are in the limit where you're counting photons one at a time? In that case, what are the steps you should take to try and insulate yourself from various forms of background noise that you find in the photoreceptors? And essentially, what we argued was that the first thing you wanted to do was a kind of matched filtering. And we were able to show that Geoff’s data on the responses of the second-order neurons in the vertebrate retina—the so-called bipolar cells—that they seemed to implement the matched filter, whose structure we could calculate from the signal-to-noise characteristics of the photoreceptors themselves and the photon counting limit. And this was all one big effort.
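The matched filter referred to here is the textbook construction; as a schematic reminder (the symbols are chosen for illustration, not taken from the paper), if the average single-photon response of the photoreceptor has Fourier transform \tilde r(\omega) and the photoreceptor noise has power spectrum N(\omega), the linear filter that maximizes the signal-to-noise ratio for detecting single-photon events is

```latex
% Matched filter for a known pulse shape r(t) in noise of spectrum N(omega):
% whiten by the noise spectrum, then correlate with the pulse.
\tilde F(\omega) \;\propto\; \frac{\tilde r^{*}(\omega)}{N(\omega)} .
```

The claim in the passage is that the measured responses of the bipolar cells implement a filter of this form, with r and N taken from the photoreceptor data themselves.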
At the same time, I hadn’t given up being interested in the more molecular parts of biophysics. I had one student who was thinking about—well, a couple of students who were thinking about the very early events in photosynthesis. And in particular, there was one paper where we realized that—I feel like I might have told you something about this before—if you think about trying to transfer an electron, or more generally trying to change the electronic state of a molecule, if the relaxation is very fast, then you should think of this in perturbation theory. You're going into a state which is substantially broadened by the relaxation processes, and so you can use Fermi’s Golden Rule to calculate what the transfer rate would be. And a certain body of electron transfer theory, including the pioneering work by Hopfield, is in that limit. And in particular, if you think about it in that limit, if I make the relaxation out of the final state faster, that broadens the state, but there is only one state, so the density of states goes down. And so if you apply Fermi’s Golden Rule, that means that the reaction becomes slower. Now go to the other limit where the relaxation is very slow. Well, if the relaxation is very slow, then you would actually coherently oscillate back and forth many times while you're waiting for the relaxation to happen. And so then the rate-limiting step actually becomes the relaxation itself, and the rate of irreversible transfer becomes proportional to the relaxation rate. And so this tells you that there’s actually an optimum in between where the mixing and the relaxation rates are comparable.
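The two limits in this argument can be summarized in one schematic formula. For a donor and acceptor mixed by a matrix element V, with irreversible relaxation out of the acceptor state at rate \Gamma, a toy interpolation consistent with the reasoning above (not the detailed calculation that Skourtis did) is

```latex
% Fast relaxation (Gamma >> V/hbar): golden-rule regime, k ~ V^2/(hbar^2 Gamma),
%   so the rate falls as the relaxation gets faster.
% Slow relaxation (Gamma << V/hbar): coherent oscillation, transfer limited by
%   the relaxation itself, k ~ Gamma.
k(\Gamma) \;\sim\; \frac{(V/\hbar)^{2}\,\Gamma}{\Gamma^{2} + (V/\hbar)^{2}} ,
\qquad \text{maximized when } \Gamma \sim V/\hbar ,
```

so the transfer rate is largest when the coherent mixing rate and the relaxation rate are comparable, which is the optimum described above.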
And so Spiros Skourtis actually worked out—I had had this idea—well, I had had some version of this idea all the way back in my thesis, but had never really figured out how to connect it to an experiment. And so Spiros actually did an honest calculation of all of this in a regime that has a chance of being relevant to the very first events in photosynthesis. And we were joined in this by my friend, José Onuchic, who I think by that time—I think he was still in Brazil, but he may have moved to UCSD by then. And this was very fun, especially because as we were finishing the paper, experiments started to come out showing that if you perturb the system in various ways—the photosynthetic reaction center—you could actually uncover a kind of incipient coherence. Which would be consistent with the idea that you were sort of almost on the edge of being able to observe it, right? If the relaxation rates and the coherent mixing are on the same time scale, then it should be almost coherent, but not quite. So you should be able to push it over in various ways.
Now, because the experiment was done sort of by the time we published the paper, it doesn't go into the books as a victory for theory, but we loved it. We thought we had really figured something out. But actually the most important molecular part of biophysics thing that we worked on, at the time, was Judith Klinman’s group had done these spectacular experiments showing that if you look at enzymes, so proteins that catalyze chemical reactions that involve the transfer of a hydrogen atom, and you look at the isotope effects when you exchange the hydrogen for deuterium or tritium, in some cases, you saw really big isotope effects, much larger than you would expect if the proton were being transferred classically over a barrier, as a result of thermal activation.
Classically means what, in this context?
As the result of thermal activation. So the ordinary picture of chemical reactions, you have a molecule jiggling around. There’s some local equilibrium. If you go off in one particular direction, you climb an energetic barrier, and then if you go over the barrier, you fall into the product state. And so the rate is exponentially suppressed by a Boltzmann factor related to the height of the energetic barrier relative to kT, relative to the thermal energy, the Arrhenius law. These reactions obey the Arrhenius law. They were strongly temperature-dependent. In contrast to electron transfer reactions, where if you cool the molecule down, you can actually see temperature independent—so in the photosynthetic reaction center, the work from Hopfield and others about so-called electron tunneling has its origins in an experimental observation that if you cool the system down, you end up with rates that are temperature independent. And in fact, some of the rates are temperature independent, almost temperature independent, even near room temperature. That’s a separate problem. But that was all part of the discussion. You had anomalous temperature dependences.
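The Arrhenius law mentioned here is the standard form for a thermally activated rate,

```latex
% Classical over-the-barrier rate: exponentially suppressed by the barrier
% height E_a relative to the thermal energy k_B T.
k \;=\; A \, e^{-E_a / k_B T} ,
```

in contrast to deep tunneling, whose rate becomes temperature independent as the system is cooled, which is the signature seen in the electron-transfer experiments he refers to.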
In the case of Klinman’s enzymes, what she and her students saw was that you had perfectly normal-seeming temperature dependences, but then completely anomalous isotope effects. So the isotope effects were big in ways that you might have expected if the proton were actually tunneling through a barrier, which of course is very sensitive to the mass of the tunneling particle. But then everything was thermally activated. So what was going on? So my first student, Bill Bruno, and I went off to think about this. And the first thing I suggested to him, the details of which now escape me, didn't work at all, which is great. [laugh] But he came back, and we ended up with this picture in which the protein, the enzyme, is essentially fluctuating, and the enzyme is big and soft, so those fluctuations are relatively classical. And as it fluctuates, it modulates the barrier through which the proton is tunneling. And so it is thermally activated, because you have to wait for the protein, the enzyme, to get into the right configuration, but once it’s in the right configuration, the proton tunnels. And we worked out a simple semi-classical theory of this, which I think we might have called vibrationally assisted tunneling or something like this. And if you did it right, you could reduce everything down to a very small number of parameters, and it fit the details of the data that were coming out of Klinman’s group spectacularly.
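The picture described here, a slow classical fluctuation of the protein gating a quantum tunneling event, can be written schematically. The expression below is a generic semiclassical sketch of that logic, with U(q) the energy of a hypothetical gating coordinate q and V(x;q) the proton's barrier as modulated by that coordinate; it is meant to capture the argument, not the specific parameterization of the Bruno and Bialek paper.

```latex
% Thermal (Boltzmann) average over a classical gating coordinate q of a WKB
% tunneling rate through the barrier V(x;q) that q modulates.
k \;\propto\; \int \! dq \; e^{-U(q)/k_B T}\,
    \exp\!\left[ -\frac{2}{\hbar} \int \! dx \,\sqrt{\, 2 m_{\mathrm H}\,\big(V(x;q)-E\big)} \right]
```

The Boltzmann factor in q carries the Arrhenius-like temperature dependence, while the particle mass m_H (versus m_D or m_T) in the tunneling exponent carries the anomalously large isotope effects.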
I had forgotten how controversial this was at the time. Many years later, Judith would come to Princeton and give a colloquium in the Chemistry Department. And I went to see it because I hadn’t seen her in years, and I was curious, and she was going to be talking about proton tunneling. And to my surprise, and also that of all of my colleagues in the Chemistry Department, she said, “So, what I can tell you is that Bill and his student were right.” Which is a fantastic moment. And I went back and I looked, and I realized I think it had taken two years between submitting the paper and actually getting published. So that was a very controversial view at the time—that quantum mechanics was so essential to the function of this molecule.
And so that was another thing we were doing. And that led—that was a different kind of interaction with an experimental group. I always regretted that Judith and I never actually wrote a paper together. I was incredibly inspired by the things going on in the group, and the students went back and forth and talked to each other, but the theory stood kind of on its own, side by side, with the experiments. It was also inspiring to talk to somebody who was—I always get this wrong—I think the correct statement is she was the first tenured woman in physical sciences at Berkeley. And I could never shake my disbelief at this. Because I was a kid, right? I was very young. But I think Judith is 75 now? So she was not an old woman then. You felt like the person who was described in that language should be someone very old, and she wasn’t.
This would be before or after Mary K.?
Well. So, Mary K. [Gaillard] arrived around the same time that I did at Berkeley, and Judith had been there before. There’s a beautiful biographical essay in one of the Annual Reviews volumes, by Judith, which is very much worth reading. So yeah, so that was the spectrum of things. I don’t know if that exhausts everything, but that was the range. I was also trying to work on sort of neural network models and various interesting things. So Mike Crair, who is now the vice provost at Yale, was one of the people who worked on that. And Leonid Kruglyak, who I told you about, who became one of the great modern mathematical geneticists in the genomic era, also worked on those problems. I don’t think the papers that we wrote together in those times were especially important, but it was fun. So yeah, so that was the range of things.
I think that’s a great narrative end point for our talk today, and I want to close with kind of a broad question, and then we'll pick up with NEC next week. In physics, of course there’s this classic binary, really a labor binary, where when something fundamental happens, either the theorists do something that sparks a push forward by the experimentalists, or vice versa. But I wonder, for you, as all of this is coming together for you, as your time at Berkeley is coming to a close, how are you approaching that divide with your own research agenda? Where are the advances both within your own body of work and the fields that are represented more broadly? What are the theoretical advances that allow the experimental side of this research to advance? And what are the experimental advances that really allow the theory to advance? Because I know for you, the boundaries are not nearly as neat as they would be for so many of your colleagues.
Yeah. I think that the interaction between theory and experiment was very intimate. So for example, experiments on the responses of neurons to sensory inputs tended to be done in a very structured way, where you created a background of complete silence, and then you had a brief sensory signal, and then you looked at the transient response of that neuron, or you turned on a sensory stimulus against a background of nothing, and held it on for a very long time and looked at the steady state response of the neuron.
And we had this view that what you really want to do is understand how the brain processes, and how neurons encode, the signals that are in the outside world, which are of course not conveniently broken into discrete trials, and not presented against blank backgrounds, and do not have properties which stay constant for long periods of time, and so on. So in order to make the most out of the theoretical picture that we had developed, you didn't need a new technology for recording from neurons, but you needed a new design for the experiments. And in many cases, that new design leaned on exploiting the available tools for computer-assisted delivery of stimuli and recording of responses in ways that were not so commonplace in neuroscience groups.
And so the fact that this was grounded in my experience with Rob, who was an experimental physicist through and through, that was crucial. It made sense to Rob that we should try doing it in a different way, and so he did. And that’s because he could go in and rebuild things himself, whereas for some groups, that was harder. So that’s an example, I think, where it wasn’t that—in that case, it wasn’t that theory was telling you, “Oh, you need to go out and invent a new kind of microscope to see something nobody has ever seen before.” In some sense, what you need to do is to exploit the tools that are at your disposal in a way that nobody has before.
And of course conversely, as I think I mentioned, I had this longstanding fascination with the role of quantum effects in biology, much of which proved to be a bit misguided. It had never occurred to me to think about enzymes—learn something about biochemistry—until I saw these experiments from Judith. And I remember—I actually went back and read, not that long ago, that original paper that Bill Bruno and I wrote. It actually has a discussion of, if you could measure—if you could make the error bar a factor of two smaller, would you actually distinguish between these different theoretical points of view? And I remember a certain amount of back and forth about that. I don’t remember where it all ended up. But it certainly set a style that you should—that the dialogue between theory and experiment is not theorists propose theories, and experimentalists go up and test them. Or experimentalists discover things, and theorists explain them. That it is a dialogue. And there are moments in that dialogue, as you hinted, that drift toward one extreme or the other. But I think I found the closeness and the interactivity of that dialogue to be especially exciting.
Bill, I think we're good for today!
Okay, thanks!
That’s a good narrative point. We'll pick up next week. Princeton, here you come.
Yes, right. Well, NEC to start.
NEC, yeah. With a little bit of Princeton built in.
A little bit of Princeton mixed in, yes.
We'll see where that heads. [laugh]
All right. Thanks.
Very good, Bill. Have a good one. See you soon.
Great. Take care. Bye-bye.
[End 201023_0350_D]
[Begin 201028_0355_D]
This is David Zierler, oral historian for the American Institute of Physics. It is October 28th, 2020. I am so happy to be back with Professor William Bialek. Bill, it’s good to see you again.
Good to see you too, David.
So let’s just pick up on the theme of naivete, because I want to get your mindset as you accept this offer. And to go back to this amazing conversation that you have with David Gross, even as you're leaving Berkeley and trying to figure out in your mind what this new opportunity is going to look like, to what extent are you defining success as, “NEC, I certainly have in the bag, but being a part of the physics community at Princeton, however that might be defined, is going to be part of the equation for me to determine if this is working out”?
No, I don’t think so. I felt reassured by the idea that I would have some engagement with students. And of course, it’s not the randomly chosen local university, so that mattered. But there was no plan. The plan was to go do something at NEC. To go specifically to a place that was an empty office building, which nonetheless was saying that they had enough confidence in me that they would give me considerable resources. That there was space to build something. I was excited about interacting with Albert Libchaber who was moving at the same time. There again, it was somebody who was a senior figure, a substantial figure, from a generation senior to me. And his excitement about the things that I was interested in meant a lot. So yeah, I think it was very much on its own terms.
So Princeton was gravy. Anything that you had there would be—
Actually, I would be even stronger. I think it was instrumentalized in my mind. That is to say, that was the path to be sure that I could still interact with students. And the truth is there were students finishing up from Berkeley, some of whom would come with me. And so it wasn’t that I had a desperate need. We also—the resources were such that it would be feasible to hire postdocs, which was a new thing, and so that was exciting. So it was incredibly important to know that that was possible, but I don’t think that there was any more grand plan than every once in a while I could teach a course or have a student.
And very quickly, actually, there was added to this—I think the first time was in the first year after we moved—teaching in a summer school. I had taught in one in Santa Fe in ’89, which I enjoyed. It was the second of the Complex Systems summer schools at Santa Fe. And there was a relatively new summer course that had been started at the Marine Biological Laboratory on theory in neuroscience, and I was asked to come and lecture there in ’91, so just after we decided to move. And it would turn out that I would teach in that summer course for 25 years—
[laugh]
—and basically after the second year, we decided we were going to spend a whole month, the whole month of the course. And so that other kind of teaching became important, and easier to do, because I was in an industrial research lab, so I didn't have day-to-day teaching responsibilities, so I could commit to doing that during the summer. We started—part of moving to NEC, I saw that there was a need for a summer school for young physicists who were interested in problems at the border of biology. And so we did something which was small in the sense of not lasting very long, but substantial in the sense of being able to fit lots of students. So we had a one-week series of lectures, which we actually partnered with Princeton, so they provided dormitory space for the students, and we hosted things at NEC.
Some of those were—it was a time when, if you look back, there had maybe been a Les Houches summer school on biological physics once or twice, reaching back to the sixties. There really wasn’t anything. Now, it’s commonplace, but there really wasn’t anything that was aimed at physicists who were starting to think about biological problems. So getting people together for a week and listening to an interesting selection of talks—the first couple of times we did it, we spread over a wide range of topics, and then there were a few follow-up sessions that were more focused on particular things from year to year. So that kind of teaching became important, and as I say, easier by virtue of not having a regular teaching responsibility. So no, there wasn’t a sense of “Oh, if I'm a good boy, maybe my friends at Princeton will think that they should have more of me than they do now.” I was just, I don’t know, I wanted to take the opportunity at NEC on its own terms. It was a great opportunity.
I know that we talked last time—you knew next to zero about NEC before you really got there and understood what was going on. But let’s do a little bit of institutional history as you got there and were putting together the larger picture. How did this new endeavor—this industrial research lab, this empty office building—how did that fit into the overall mission of NEC?
So, Dawon Kahng, who was the founder of the lab, as I mentioned, had spent his career at Bell Labs, very productively. If you go back, I think it is the case that when he was a PhD student, his roommate was a Japanese fellow. They both spoke Japanese. You might not think that would be a bond, since Dawon was Korean. The reason he spoke Japanese was that Korea had been occupied by the Japanese.
[laugh]
But anyhow, they're two foreigners in the U.S. in graduate school and they really were very good friends. I'm blanking on the name of the Japanese fellow. We could look it up. And Dawon went and had this very successful career at Bell Labs, and his friend went back to Japan and went into the establishment of NEC, where he rose quite high. I don’t remember exactly where he was. But they would get together, and Dawon had this dream of running his own research institute. And when the time came, that’s what they did. They established one. And it was at a time—so there are several things going on. One is it’s a time when there’s a great deal of criticism in the West that Japanese industries were somehow profiting from—so I don’t know, I—
They're going to take over New York real estate, is the concern.
Well, they're going to take over the—you know, corporate Japan is going to take over the world, a threat which turned out to be vastly exaggerated. It is, after all, a small country, which was not emphasized at the time. There’s this very large country not very far away—
[laugh] Right.
—turns out they have a much more powerful economy. Which somehow, at the time, seemed not to be on the radar screen of the pundits, which is interesting. So there was criticism of the Japanese somehow sort of profiting from innovation in the West. They were good at making stuff, but they weren’t good at inventing it. And so there was this concept of the cheap Japanese imitation, which we grew up with. And I think they chafed through this and wanted to do something about that for public relations reasons.
I think they genuinely believed that there was something different about the way science was done in the West, and they wanted to have a window with which to observe that. And so building a small institute that was specifically devoted to basic science—basic science in areas that you could imagine having an impact on the company, but not in a goal-direct…but not specific.
That sounds straight out of the Bell Labs playbook.
Exactly. There was a glitch in the plan, and that is that the Bell Labs that we know about that was focused on basic science sat physically in the middle of a much larger Bell Labs, which grew incrementally more and more relevant to what the company actually did. In our case, the part of NEC that did things that were relevant to what the company actually did was in Japan. Mostly. There was a little bit on another floor of our building, but the ties weren’t very strong. So there wasn’t really a plan for making the connections. And I think Dawon actually felt that that was important, because he felt that if that plan was too well specified from the beginning, we would be swallowed. And he might have been right. So the idea was to put something specifically in the U.S., I think not just because of the friendship between Dawon and his Japanese fellow student, but because the West was seen—the U.S. was seen as doing something that they weren’t doing so well in Japan. That was the perception. And we were given a great deal of freedom. And again, Dawon was very good at explaining why that was essential.
Just to give a contrast, as the institute grew some reputation, it was noticed in Japan, and so there was a moment where whatever it’s called—the Ministry of Education and Science, or the Ministry of Science and Research—I can’t remember—I don’t remember exactly how the organization works in Japan—anyhow, some ministry was going to build a very large institute. I will not mention the name of the institute, because eventually it did get built, and there’s all sorts of fine things that happened there. And somebody got it into their head that they should send someone over to our place and talk to us about what we thought of their plans. And so, they did. And many of us were completely shocked by their plans, because it was not only that they should build an institute to do X; they should build an institute to do X that has divisions X1, X2, X3, and X4. And in Division X3, there should be a department of X3A, X3B, X3C. And in Department X3C, there should be a laboratory of X3C1, X3C2, X3C…and so there was an enormous level of detail, completely inconsistent with the idea that you want to build a new institute, you want to do something generally over here, find some really exciting people who were thinking generally over there, and let them do what they want. Not at all. They had it planned several layers deep in organization.
And so I think Dawon understood this about his corporate sponsors, right? That this wasn’t the same people, but that was the mentality, right? You laid out the organization. And so what was crucial was to protect our institute from that kind of thinking. And it worked! We had enormous freedom. We had budgets that were generous but not crazy, I think. If you were an experimentalist, you could afford to hire a technician and have a postdoc work with you. And probably if two of you worked together on something between you, you could have three postdocs. Something like that. That was the scale.
Bill, was the research environment, as you understood it before signing on and then however that actually played out, was the commitment to basic science so complete that the question—“What does theoretical biophysics have to do with this?”—is that obviated in and of itself, just because that’s really what basic science means?
So, as I think I said, my feeling was that unless they were just flat-out lying to us, we had ten years. And I honestly don’t remember—there must have been official reviews of the progress of the Institute after five years, or something like that. How did I get to the estimate of ten years? I don’t remember exactly. That sticks in my mind. It also sticks in my mind that I stayed for 11 years, and trouble only began somewhere around year nine, ten. So I was proud of myself for getting that right. They were very clear that—so first of all, “they” wasn’t very many people. There was Dawon and there was Joe Giordmaine, whom I spoke about last time. And there was also a vice president for computer science, Bill Gear, with whom I didn't interact that much at the beginning. And so, what they presented to me was a vision of a place in which the kind of science they were going to invest in were things that they felt made sense, given what the company did. You know, it’s a company that builds hardware and software for moving information around.
So connect the dots to your area of expertise.
Well, they weren’t particularly interested—well, as it turns out, there were probably more connections between some of the molecular biophysics things I was trying to do, and some of the questions that would eventually be interesting in quantum computing, and things like that, than any of us thought of at the time. But this is 1990. People are fascinated by neural networks as a different model of computation that’s somehow grounded in what we know about brains but could be turned into technology. The idea that maybe you should invest in learning more about brains themselves, but from a point of view that aimed at understanding principles of how information is represented and processed, that maybe were generalizable and applicable elsewhere—I think those things made sense as things that could have an impact on things that are generally relevant to what the company does, but obviously not specifically. They're not going to build a robot fly and fly it around, but if it turns out that there is some interesting strategy that flies and other organisms use for representing information, maybe there’s something generalizable there, and maybe you'll just understand something about how things work.
I've always felt that there was another factor, which is that the people who were running NEC at the time were the people who had built it from being the local telephone equipment manufacturer to being a multinational giant. So that generation was still in power. And they looked at their compatriots. So, who were their compatriots? Their compatriots were the people at AT&T, the people at IBM, the people at Xerox. And AT&T had Bell Labs, and IBM had Zurich and Yorktown Heights. And Xerox had PARC, right? And however tenuous the relationship of those things were to the profitability of the company, you pick up The New Yorker in those days, and you would see an ad from AT&T, listing all of the great—talking about all of the great discoveries that have been made at Bell Labs. Now, this is really interesting. You know, why are they advertising? For what? I'm going to change my telephone for—? Actually, I'm not even sure this was a time where you could change your telephone carrier.
[laugh] Right.
Right? I guess it must have been. But anyhow, yeah, I'm going to switch from Sprint—I'm not going to switch to Sprint because of the cosmic microwave background? I mean, really? I don’t know. But obviously somebody thought so.
[laugh]
And I think that there’s a component of the folks high up at NEC thinking, “In order for us to be taken seriously in this rather small community, we need one of those, too.” And so there was actually a big effort to build a research facility in Japan, in Tsukuba. And then there was this sort of outpost in the U.S., much as IBM has research facilities in the U.S. and outposts in other places. Now, of course, it’s commonplace, but in those days, it was unusual. So I think there was a component which was if you want to be cynical, you would say public relations. And if you want to be sort of dime store psychology, it’s part of the cachet of being a leader in the world of technology is to have a certain amount of visible science attached to your company. And so in that sense, it didn't matter so much exactly what we did. It mattered that we do it well.
And even Bell Labs, which I guess had the distinction of doing great science which was the most distant from their nominal business—remember that’s not how they started that, right? Penzias and Wilson quite specifically did not go out to learn something about the universe. They were trying to find what was the right band for satellite communications. So, yeah. So the vision that was presented was of a lab to do basic science. As I say, we were given generous but not ridiculous resources. The structure was such that those were, barring disaster, quite stable, so you knew, you didn't worry about where the money was coming from.
And there was this idea, as I think I mentioned last time, that we as an Institute should do some of the things that anybody would do if you were going to make up an institute like this, and then we should also do something else. And at least on the physics side, the something else got focused on doing biophysics. So Albert and I came. Over the next few years, we would make two more appointments of experimentalists doing things related to biophysics of neural information. One was a fellow named Hanan Davidowitz who spent some time and then moved to let’s say more applied industrial problems in startups. And the other was Rob.
I'm not sure exactly in what sequence to tell this, but in the summer of ’91, so really the first year I was at NEC, we went to Europe, because I had a conference to go to. And it was exciting because the sort of next major paper in what would become a very long collaboration with Rob was about to come out in Science. This was the one where we did the reading of the neural code. And so it was accepted, and then there was an issue of when it was going to come out. And we suddenly got this panicked fax or something about, you know, we had to hurry up with some copyright transfer agreement or something, because they wanted it to come out sooner rather than later. And as it happens, they had decided that the gorgeous photograph of a fly’s eyes that had been taken by one of Rob’s colleagues in Groningen was going to be on the cover. And so we always thought that that was probably because they needed a cover that week.
[laugh]
So that was what determined the scheduling of the [laugh] publication. I mean, it was going to come out at some point, but—so that was actually—I think that was actually happening while we were in Europe. We went up to see Rob and his family. And you know, they were kind of happy, but his position, if you read the writing on the wall, the balance between doing—he had one of these positions in the medical school under the professor who did things related to hearing, and that was fine. He was happy to think about hearing instead of vision; that wasn’t the issue. But it looked like, if you extrapolated, the fraction of time he spent doing research versus doing things that were immediately useful in the clinic was going to shift significantly. And so he was a little—yeah. I don’t know if I could say unhappy. It seemed to me that it would be sort of a shame if that happened. He was such a good scientist.
Besides this funny image I have in my mind of a nice California boy, born and raised in Berkeley, essentially your entire life, going to this empty office building in suburban New Jersey—beyond that very jarring juxtaposition of environments, to what extent were you looking to recreate an academic culture? Or, was this a place not to do that? Was it an opportunity not to be in an academic culture, which has all kinds of issues that are not germane to just doing good research?
I think many of us felt strongly about the importance of community at NEC. I think many of us felt strongly about the importance of community in the way we did science. And so in that sense, building community at NEC was important. There were also efforts to build community in ways that connected to other local institutions. But it was clear that if we wanted the place to—I mean, all of us, in some sense, were taking a risk. We were all coming from places that were better established. And so I certainly felt that part of my job was to help put the place on the map. And so that’s why, for example, running a summer school seemed like a good idea, because, well, it would be good for the field, it would be good for me and for the people who would be working with me, because we’d have more people around, at least for a week, and those connections last. And, it would be good for the Institute, because it puts it in people’s minds as a place where these kinds of things are happening. So I think we all had that communal sense.
There were obviously different subcommunities in the different fields that were represented at the Institute, but I think we all had a sense that part of our job, especially those of us who came in kind of the first wave—and although I was pretty young, I was given a reasonably senior position, so I felt that responsibility. Academic versus non-academic is interesting. [pause] Obviously we had no responsibility for the canons of our field, which I think is one of the issues in the university environment.
And perhaps also the lack of people who see themselves as the holders of the canon.
Right. Now, I've heard it sometimes said that the strength of hierarchy and everything is less in an industrial research laboratory than in academia, blah blah blah. I don’t know that that’s true. I think it depends more on the individuals involved. I think some people are very concerned about their status in an organization. So one idea is, you could do science in an entirely professional environment. So everybody is a staff member. There’s no notion of some people being permanent or senior, and other people—at the end of the day, somebody’s going to have to be in…you know, the organizations can’t be completely flat, so there’s always something.
I think everybody felt early on that it was important to have postdocs. I think this was just borrowed from Bell Labs, right? You could imagine running an industrial—first of all, [laugh] science ran without postdocs until, roughly speaking, 1950, right? So I don’t know, there’s no uniquely right way to do things. But the idea that we would present ourselves to finishing PhD students as a place to come and be a postdoc for three years as an alternative to being a postdoc at a university, that was an idea from the very beginning, and obviously was—it’s also the alternative to going to Bell Labs or IBM. And so in that sense, there was an emulation of the academic style, but that academic style already existed at the competing industrial research laboratories. Or Exxon, for that matter, which in many ways played a crucial role in forming the American soft matter community. So, I don’t know that it was consciously academic.
And I don’t know that [pause]—I mean, obviously there must be differences. But I think once you have the idea that you are going to run an organization in which there are young people who are only there for fixed and relatively short terms, and part of succeeding means mentoring them so that they succeed in the larger world, you've imported a very big chunk of what it means to be an academic. What you haven't done, as I say, is take responsibility for transmitting a canon to much younger students. You know, many of us had some connection to a university, and therefore could advise PhD students, but nobody was advising more than one of them, and most people were advising zero. On the other hand, there were a lot more postdocs around.
So the shift was very much toward—so in that sense, it wasn’t—it’s not an educational institution, right? So in that way, we distinguished ourselves from academics. But I don’t know, we also did not feel that there was much pressure to show that the things we were doing were immediately related to the bottom line of the company. So therefore, the sense of what was successful science—sorry, what was success—was very similar to, let’s say, at least the research part of what it means to succeed in a university.
Bill, was your sense that at this time—did your research agenda have its own momentum to it so that the things that you would go on to work on at NEC, you would have worked on had you stayed at Berkeley, had you gone to Los Alamos, had you gone to Harvard? The question is basically how much did the environment itself matter? Or, were you self-consciously looking to adapt your research agenda to this new and unique environment at NEC? Obviously it’s a counterfactual question, but I think it’s interesting to ask.
Well, the most important thing—the most important thing—is that very early in my time at NEC, within the first year, the possibility of recruiting Rob to join me was there. And we knew that the place wasn’t filled up, right? There was some notion of how many people should be hired. So there was a chance to make a pitch for doing that.
And obviously at Berkeley, you would not have been able to bring him there. I mean, you couldn't stay yourself; how would you have been able to bring him? [laugh]
In fact, it’s more interesting than that. Something came up while I was there about whether we would search in biophysics, in the physics department. And I don’t know where I got that idea. And so somewhere I have a memo that I wrote to my department chair—I don’t know if it was the same Professor Price who played a role in our story last week, or if it was the one just before him—probably the same one. And somehow, I was involved—well, I was involved in searching—I was on a search committee that eventually was aimed at hiring somebody who did modern condensed matter theory. And we hired Dan Rokhsar, who would eventually change fields and do biophysics. And you should get him to describe the evolution of his relationship with the physics department. But I don’t think he was treated very well by the physics department. And eventually—I mean, is now happily a faculty member at Berkeley, with most of his appointment in one of the biological departments, having spent time also at one of the DOE labs, for a while. But anyhow. So maybe it was in those conversations?
Somehow it occurred to me that I should write a note about whether—if we were going to do something in biophysics, what would we do? And so I wrote that I thought that it probably wasn’t going to happen for a while, because my own appointment seemed like an experiment. I mean, this is before I had any reason to think there was anything wrong. That my own appointment represented the kind of first foray, and I could understand waiting for a while to see how things turned out, before we did anything else. But, you know, people are moving around as we're having these conversations, right? So maybe it would be worth at least somebody knowing who the names are.
And if I am not mistaken, I made a list of five people who were, roughly speaking, my contemporaries. I mean, some a little older chronologically, some a little younger academically, but we're all roughly contemporaries. I think it was four experimentalists and a theorist. All of us—now all of these five people have gone on to positions at fine universities. Three of the five are members of the National Academy of Sciences. It was a good list! I don’t think I even got a thank-you note for sending the memo. Or even an acknowledgement, right? That might have been one of the first signs that there was something wrong. But anyhow.
So, no, it was totally unclear how—I should say that one of these people was at some point offered a job in the biophysics department at Berkeley, so we sometimes joke that had he chosen differently, maybe I would have too. Or, I don’t know. So, no, certainly there didn't seem to be the opportunity to grow a biophysics—so it was becoming clear that part of the problem was that the physics department was organized in groups. There wasn’t a biophysics group, and there was no prospect of growing a biophysics group. On the other hand, in moving to NEC, there was quite explicitly the possibility of doing that, and to some extent, it happened immediately, in that Albert and I went at the same time. And although Albert and I never wrote a paper together, his presence as such a visible person in the physics community generally—so, by coming to NEC, he added a lot of luster to NEC in many people’s minds. And by deciding that what he was going to do while he was at NEC was things that were related to biophysics—for many people, that gave a stamp of legitimacy to the field that it couldn't get by virtue of any of us actually doing anything.
[laugh]
Right? [laugh] But rather having somebody who was famous for having done something else come and say—
[laugh]
—“Aha! Now’s the time for us to all move.” And, you know, that is what it is. Albert does a particularly charming version of that, so I appreciated it. In some objective sense, I would—so hearing it parroted by younger people, I would find it a little annoying. In the version that Albert had, I didn't have a problem. In part I think because he often had the view that the thing he just noticed was the most interesting thing in the world, and then on reflection, well, yeah, it doesn't lead very far, and that’s okay, let’s go on to the next thing. [laugh] And that was great, because he was very clear about that. And then he would be careful, because of course in pied piper-like fashion, he had led some number of young people to be excited about [laugh] this thing, and their continued success depended on him continuing to express some enthusiasm for it, but privately, he could say, “Well, they can go off and do that. It’ll be fine. I'm going to go think about something else now.”
So, right. So not only was there the prospect of building something, but in some way, it was just kind of an instant group upon us arriving. And then there was the possibility of hiring more. As I say, we would recruit two experimentalists, the first being Rob, and that was—did we think in 1984 that we would be reunited six, seven years later, and spend a decade next to each other, which would give us the chance to fulfill most of the dreams that we had in that early year? I don’t think that had occurred to any of us, so that was a—
And it’s worth noting, as a historical marker, this is before Zoom, right? Where physical proximity really mattered. You needed this.
Yes, yes. And Rob’s lab was across the hall from my office. I don’t remember where his office was at NEC.
Which is telling in and of itself, since he was probably—that really wasn’t important, where his office was. It was the lab that counted.
Yes. I think one of the things about life at industrial research laboratories is because you're not teaching, because you're not writing grants, and because groups are small, I think it keeps experimentalists in the laboratory longer.
Ah!
Because if you want to retreat to your office and direct a group, then you become a manager. [laugh] Whereas if you want to remain a research scientist, then you go in the lab and you do things. And so pretty much everybody, all the experimentalists at NEC—I shouldn't overgeneralize, because I don’t know the details, but it was certainly my impression that senior experimentalists held a soldering iron and went to the machine shop. And that’s harder to do in a university environment, I think. Groups are bigger. Raising money is a bigger issue. You have to teach. There’s more sort of community responsibility. Anyhow, that might be it. So if you wanted to investigate the difference in environments, that might be an important one. The problem is that I'm a theorist, and so my impressions of this are a little cloudy, right?
[laugh]
They're anecdotal. As a theorist, either you calculate or you don’t, right? There’s nothing to manage. So yeah, that opportunity, both the vision—so in agreeing to move, there was a vision of building a group, but the idea that Rob would be part of that wasn’t there, because I didn't know what his state in life was at that point or anything. And then that happened, and it felt kind of serendipitous, that it should turn out this way. And he came, and we just had this fantastic run of ten years. Which had consequences after that too, obviously, but every day, we were next to each other, and that was very special.
Another thing that happened, and this is looking ahead to the sort of evolution over that decade, I think—you know, Rob came, and Albert was there from the beginning. Rob came, Hanan came for a while, and there was a kind of liveliness about our community that was noticeable. And so people asked, would we give some lectures or run a journal club or something, about what we were doing, and people would come visit? People who were nominally there doing other things would come in and have a look.
And so during that period, there were two condensed matter theorists, Ned Wingreen and Chao Tang. So Ned had made a name for himself in mesoscopic physics, these sort of small devices where adding one electron produces—things like the Coulomb blockade, and conductance through wires that are thin enough that you basically have one scattering channel, and things like that. So he had been a postdoc with Patrick Lee and had come to NEC. Chao Tang was one of the coauthors of self-organized criticality. And so, they had come to do things in condensed matter physics and statistical physics. And they did. Ned, by the way, I had met when I was a postdoc in Santa Barbara. And the reason is that his thesis advisor was John Wilkins, at Cornell. John would eventually move to Ohio State. And John went on sabbatical for a semester or a year; I don’t remember, to the ITP, and he brought all of his graduate students with him. And so Ned was one of the younger graduate students. So we've known each other—is this 1984 or 1985? Anyhow, it doesn't really matter, I suppose. [laugh] It’s getting to be a while ago, right? Thirty-five years. So yeah, so Ned and Chao started to come to group meetings and figure out what was exciting in biophysics. And I was interested in all sorts of things, but I was very focused on things that connected to the experiments that were going on in Rob’s group.
And Bill, how intensively were you writing during this period? Absent grant writing and teaching and some of the other trappings of academic responsibility, was this an opportunity for you to write a lot and also present at conferences and things like that?
Well, I traveled, both with my family in tow and by myself, and that was wonderful. That freedom, of course, is greater in an industrial—let’s put it this way—the rhythm of teaching does force your travel into fixed blocks. That said, I can’t—well, at the moment, nobody’s traveling. But I can’t complain, right? I don’t think I've ever felt—the fact that there were 24 weeks out of the year in which I must be physically present in order to teach my classes could not be viewed as—I mean, that’s out of 52, right? [laugh] And there are some other weeks where you should be around. But if you actually ask when would you have a problem if you weren’t there, it’s pretty minor, right? And if you know that—everybody knows these tricks, right? If you know that you have something coming up, then maybe you should try and compress your teaching into the first half of the week, and that leaves you a couple days free on the end. So it’s not as if the constraints of teaching are—actually, let me make a positive statement. I really enjoy teaching.
That’s obvious. That’s clear.
Well, thank you. And so I have never thought that the responsibility of teaching was somehow in conflict with—
No, not that it’s a burden. But it is a time suck.
It does take a certain amount of time. So the major writing project of this period—so other than the usual business of writing papers, the major writing project of this period was a book called Spikes: Exploring the Neural Code, which Rob and I coauthored with two of the graduate students from the Berkeley period, Fred Rieke and David Warland. And in fact, we were the coauthors of that Science paper, “Reading the Neural Code.”
And the evolution is interesting. We wrote this paper in Science in ’91. It’s a short paper. And there were other papers using those ideas to look at other systems, and then we moved on, quickly, to do things, right? And we always had the thought that when you write a short paper, you're supposed to write the long paper that goes with it. And there were two PhD students involved, who had also written PhD theses, and they had done marvelous jobs of placing the work we were doing in some larger context of the sort of evolution of our thinking about what the problem of coding is in the nervous system and what that has to do with information theory and what it has to do with questions about the timing of action potentials, and what it has to do with adaptation and evolution, and what it has to do with various physics issues. The whole—and there’s a marvelous history to the field. And we kept trying to write sort of the longer versions of the paper, and we kept putting them aside.
And at some point, we realized that our point of view had sort of—after all, by the time we get to, let’s say, 1994, that’s ten years since Rob—or ’93, ’94—that’s ten years since the year that Rob and I spent together in Groningen. And in this period, through a whole variety of things, some of which were represented in papers along the way, our point of view on the subject had evolved very far from what was the conventional point of view. And so trying to figure out how to write a paper that would explain some of the results that we had that were beyond what fit into the short format of the Science paper was pretty challenging. And so at some point, we got the idea of writing a book. And there was a marvelous editor at MIT Press named Fiona Stevens who would eventually move to Oxford University Press, and then I lost track of her. She was great.
Is she credited with the awesome title Spikes?
No, that was us. There was a book designer. That book—[laugh] so Rob has a marvelous aesthetic sense, and I learned an enormous amount from him about how you present data. And also just remembering that there is a beauty to science, in the sense that there’s beauty in art. And so, for example, Spikes contains a humorous but actually I think quite useful sketch of an example of coding, which is facetiously referred to as bar coding, which is to say, how you order drinks in a bar. And it’s a series of sketches done by Rob of all the different ways of holding up different numbers of fingers. And the point is that you could imagine that which fingers you're holding up actually matter, which is sort of like worrying about the timing of individual action potentials. But in many cases, like in the bar, all that matters is how many fingers you hold up. Of course saying that it doesn't matter which finger you hold up is not exactly true, and so you get to make that observation. Anyhow, so there’s aesthetic sense that runs through all of this. And so it’s used to make a point about the nature of coding information and things like that. But it is fundamentally a piece of art that Rob did, and also humor.
But when it finally came time to worry about the design of the book, he had something which many people have seen in other contexts, but he had a particularly beautiful version of it, which is the slightly old fashioned oscilloscope recording of the electrical signals from neurons. And the convention is, the background is very black, and the signal—because it’s actually being drawn on a cathode ray tube—is green. And you see these pulses, which are the spikes of the title. So the book designer decided that that would form kind of the body of the design for the jacket of the book. But the really brilliant thing was that he noticed that of the four authors, one of us had a very long name, namely Rob, right? He has this compound last name. So if you take our list of four names and you turn it on its side—
Ah! [laugh]
—you get this pulse, which stands up—
That’s great. [laugh]
So no, the folks at MIT Press did not contribute the idea for the title, but their cover designers did quite a bit of clever aesthetics. It’s also true that the logo of MIT Press—I feel like I've told this story to somebody recently, and I'm hoping it’s not you.
No. No. But it has—there’s one line that sort of goes above the others.
Exactly. So the logo of MIT Press is a series of sticks, which stand for the legs of the “M” and the “I” and the “T” and the “P,” right?
Oh, cool. Very cool.
And I think one of them stands up. Yes, one of them stands up to make the “T” and one of them sticks down to make the leg of the “P.” And what the designer did was to color the one that stands up in green, to echo the green of the oscilloscope trace. One imagines that they needed quite a lot of permissions to dither with the logo. Anyhow, so yes, so that became sort of the big project of the period, which stretched out quite a bit, I'm afraid.
To contextualize the themes of the book within the larger research world, who else out there that’s not part of this group is connecting neurobiology and mathematical theory and information theory, statistical analysis, sensory neurons? Where else besides this? Or, is that part of the point, is that all of this very new field is sort of self-contained at NEC, and NEC likes that? It likes the idea of there being this association with “all of this amazing stuff that’s happening is happening here.”
So I think [pause]—so this is a period in which the idea that the exploration of the brain should become a more broadly quantitative field, that mathematical approaches—and mathematical approaches span trying to make realistic models for particular pieces of the brain, to trying to make abstract models, to trying to use more powerful tools for data analysis—it covers a lot of ground—that idea is spreading. That’s partly why there’s a course on what would be called Methods in Computational Neuroscience at the Marine Biological Laboratory. It’s why there’s a conference on neural information processing systems that was meant to be the interface among all these subjects but would eventually evolve into what is today the premier conference in machine learning. Hopfield’s papers on neural networks are ’82, ’84, and so this idea, this sort of nexus between neuroscience, computer science, statistical physics, has been explored. So I think that there’s a kind of growing field which you could see as part of neuroscience.
In fact, one of the sociologically interesting things, I think, is that physicists had a big impact on neuroscience both with theoretical ideas and with experimental methods and their intertwining. And the neuroscience community, for its part, was—there were moments where if you were a young—nothing’s perfect, right? So one could always focus on the challenges of people coming with different points of view being accepted by an established field. But I think the neuroscience community saw itself as constituted from many traditional disciplines.
So if you go back to Steve Kuffler, who founded the neurobiology department at Harvard Medical School, he quite consciously thought, “Well, the exploration of the brain, it’s got anatomy in it, but it’s not all anatomy. It’s got physiology in it, but it’s not all physiology. It’s got pharmacology in it, but it’s not only pharmacology. And what we care about is understanding how the brain works, not whether we're an anatomist or a pharmacologist.” I think that that style of thinking, for all the usual internal problems of turf wars, for all the problems, I think that that style of thinking meant that the neuroscience community was relatively open to the idea that people would come with mathematical and theoretical approaches with quantitative experiments that were grounded in physics technique and so on. After all, some part of this field rests on the work of Hodgkin and Huxley and Katz in the late forties or the late fifties, where we really understand how nerve impulses are propagated and how signals are transmitted across the synapse, by virtue of imaginative mathematical analyses of very quantitative experiments that rested on your ability to measure what were at the time very small electrical signals. And so there was a sort of classical biophysics which had launched some part of modern neuroscience.
And so the idea that you could come with let’s say even if we weren’t directly the intellectual descendants of Hodgkin and Huxley, you could come with that same mathematical style to questions more at the systems level than at the cellular and molecular level, that was appealing to a lot of people. And there were people who came much more from statistical physics. There were people who came much more from information theory. There were people who came much more from dynamical systems. And this all sort of mixed together in interesting ways.
And so the first part of your question was, were we the only people doing this? And I think the broad answer to that question is no. There was a whole world of people who were trying to do a more theoretical or computational neuroscience. When it came to problems of coding, I think we were a bit more out—we had a very definite view, in particular about the problem of how information is represented. But we also had interest in the problem of how precisely it is processed. And I would say that our views about how precisely it’s processed did not sweep far and wide through the more biological community. It had an appeal for some physicists, but I don’t know that it was as successful in changing the minds of biologists.
But the ideas about the representation of information, the ideas about coding, that there was a sort of probabilistic, statistical language that connected to information theory and so on, or that was the right way to think about these problems, I think that did have an impact. And that set of views, there was—so I think the answer to whether we were unique was no, but the answer to was NEC happy because there was an identifiable thing that we as a small community represented is yes. That is to say, within this larger whole, we I think were widely seen as the standard-bearers for a particular point of view.
And especially I think a lot of people found the nature of the collaboration that Rob and I had to be interesting. That I would for the most part—it’s not so easy to find places where you can write reasonably abstract theoretical papers, although I tended not to do that so much, because I would find that having written one, it raised a whole bunch of questions that were immediately connected to experiment. Since my best friend, who was an experimentalist, was right next door, the thing to do was to go find out [laugh] whether we were on the right track. So this combination of abstract theory, new ways of looking at the data, really impressive experiments that Rob was doing, in terms of the quantity and quality of data that he was collecting—I mean, when we started, nobody thought that you would be able to do experiments like that anywhere except in a fly. And so for some people, that was an objection to the whole research program. So quite explicitly, I remember. I remember somebody actually saying, “Will you ever be able to do this in a real animal?”
[laugh]
Meaning presumably one with fur and a backbone.
[laugh]
Or at least a backbone. And it’s not like we were hailed as conquering heroes. There were other points of view that were competing, and so on. But certainly, we represented an approach, rather, and so if you made a list, there weren’t so many groups out there. In particular, the nature of disciplines was that finding a place where somebody who had some credibility as a theorist was interacting directly on an everyday basis with somebody who was doing experiments on a real nervous system—it just wasn’t so easy to find. What department would they be in? Maybe they'd be in two departments.
So another example from this period was Eve Marder and Larry Abbott, at Brandeis. Larry had gotten interested—particle theorist, actually also interested in the interface—an early entrant into ideas about the interface between particle physics and cosmology. He got interested in these things through reading Hopfield’s papers about neural networks and going in that more abstract and formal direction. But then he found out that there was this amazing laboratory across campus, run by Eve Marder, where she was looking at—again, at a small invertebrate nervous system in the crab: the little piece of the nervous system that generates the rhythm by which crabs grind their food. And so, together, they realized that among other things, understanding how that works is really a problem in dynamical systems. And it was also a place where, once upon a time—in fact, I don’t remember the details, but I think one of the proponents of this view had been Eve Marder’s thesis advisor. The view was, “Oh, you'll never understand the brain, because it’s too many neurons. Let’s go to a place where clearly some interesting function is carried out by a number of neurons that we can count, and someday we'll record from all of them, and we'll see how they're connected to each other, and then we'll understand what’s going on.” And the focus was on being sure we understand how they're connected to each other. And then there was a well-known paper where he declared defeat [laugh] by saying, “Ah, we now know how everything is connected to everything else, and we still don’t understand how it works.”
[laugh]
And of course, what was missing was that at each node of this network, there was a small dynamical system. And if you put a different dynamical system at every node, you’d get different behavior out of the network. And really that understanding came from Eve and Larry working together. I mean, lots of things would happen, okay? They had a fantastically productive collaboration. And they've since gone their separate ways to continue to do amazing things. But they again had this very special period.
Bill, obviously both the group and the book are born multidisciplinary. They're born to make connections and to break down barriers. But of course, it’s a book that’s going to be released out into the real world where the trappings of these barriers exist. And so questions like, what kind of department, or what kind of professor would assign this book in their class? So I want to ask, given that reality, did you expect that the book would have a specific impact on any one field? For example, was this a book where you wanted people interested or who knew a lot about neuroscience to think about information theory, or vice versa? How did you imagine the kinds of impact that the book would have?
So let me answer that in part by finishing an earlier thought that I left dangling, which was that lots of physicists had impact by getting excited about problems in neuroscience, and the neuroscience community, I think, was in the great scheme of things, quite open. As I say, there are parentheses and exceptions and complications. But what happened was that success in many ways meant having an impact on the mainstream of neuroscience. And so, neuroscience sort of—having been constructed as a field built out of many classical disciplines, they had the habit of absorbing—I mean, a bit like the nation of immigrants model—they absorbed things, right? And so the fact that many of these things came out of physics would get lost. And they would be lost—that they're lost to the neuroscientists, I don’t know whether that matters. I mean, who invented the tools that we use? Does it matter? That’s a matter of historical interest. Or, where did ideas come from? Well, if you're interested in the history, they have a trajectory. But the field moves on, right? And so maybe for the neuroscience community, it doesn't matter that they forgot some of these things came from physics. More disturbing is the physics community forgot that some of these things came from physics.
[laugh]
So in some way, it was sort of like—there was this period where I felt as though—there at least was a view in the physics community that doing the physics of biological systems, if perchance the biologists actually thought that what you did was interesting, and they started to use it, then that made it not physics anymore. Which is sort of a really weird—sort of outsourcing of the definition of the field. It was as if your net impact was conserved, so that having a positive impact outside your field must be balanced by a loss inside it. And so, if you were having an impact outside, then that must mean that you're not really doing physics.
But you know, most of the people who were doing experiments on these systems were not in physics departments. So if you were having an impact—if you were a theorist and you were having an impact on people doing experiments, then you were almost surely not having it inside the physics community even if those people themselves had started as experimental physicists. So there was something funny about the dynamic, which relates to the question that you're asking.
There are successive waves of physicists deciding that the physics of biological systems is interesting. And in some way, people whose attention was captured by the brain in the eighties up to 1990 and who had decided to make that move by then, that kind of got lost from the next generation of physicists who got interested in the brain. It was kind of funny. I don’t know. So in that sense, did I think—? I think we were self-consciously writing—I think we had in mind this flux of people who were coming from physics, applied math, computer science, electrical engineering, who were excited about the brain, who had the tools to think the way that we were thinking. And we also felt like there were a lot of people who decided that they were more interested in the brain than in their parent discipline and shifted at the graduate level. And so they hadn’t honed those skills quite as much, but they had a good foundation. And so if we could help them, they could move further.
And so we wrote a book that had its presentation of our view of the field, but then also had a long series of appendices in which we tried to develop—which was a little bit more textbook-like. Not to the point of having assigned problems, but to the point of really working out the derivations of things. And so we saw our job, I think, as a recognition—maybe this continues to be true for me—that my nod, our nod, then, to the interdisciplinary nature of what we were doing was an emphasis on pedagogy. That we did not feel that there was a well-defined community out there who knew all the things that we knew going in, to whom we could just speak, as we thought. That it was essential to explain how we got to things. And not because our—not in the style of claiming the things we got to were so startlingly original or exotic, but rather because once you knew a whole bunch of stuff, then this seems like the obvious way of thinking about it. But if you don’t know that stuff, then it’s not. And in fact if you don’t know certain things, then what we were doing might seem arbitrary, and it might seem like, “Well, there’s a gazillion other things you could have been doing. Why is your version particularly interesting?”
And I would claim that one of the things that theory does for you is to make certain ways of thinking natural. The theory could then be wrong, and then you will discover by following this natural way of thinking, that you get stuck. But that’s one of the things—how should you look at your data? Well, as soon as things become complex enough, this is a non-trivial question. We associate this with very modern times, right, when we collect very large quantities of data. But, in particle physics, this was true by the 1970s, and I would argue that in certain parts of neuroscience, this was true already when Rob and I started working together.
If I give you an hour’s recording of a continuously varying sensory input and the sequence of 100,000 action potentials that are generated by one neuron, what are you supposed to plot against what? Right? An hour of wiggles, punctuated every 30 milliseconds or so by a blip—what do I plot that’ll tell you anything about what’s going on? And so I think that we sometimes forget that you need some theoretical framework within which to answer that question. It is true that once you have a particular theoretical framework, there are then statistical questions about what’s the best way of squeezing—estimating things. How do you decide whether the thing that you saw is significant or not, and so on. But the question of what is the mathematical structure that you should be using, that’s a natural science question. That is a theory of the world, not something that is internal to mathematics and statistics.
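A minimal sketch of the kind of analysis being gestured at here, using invented data: given a long stimulus recording and the spike train it evokes, one natural first plot is the spike-triggered average, and the crudest possible decoder simply lays that average down around every spike and sums. Nothing below is the actual analysis from Spikes or from Rob's experiments; the synthetic stimulus, the simple rate model of the neuron, and all the parameter values are assumptions chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                     # 1 ms time bins
T = 3600.0                     # one hour of recording
n = int(T / dt)

# Illustrative stimulus: smoothed Gaussian noise ("an hour of wiggles").
stim = np.convolve(rng.normal(size=n), np.ones(20) / 20.0, mode="same")

# Illustrative encoder: firing rate follows the stimulus, averaging roughly
# 30 spikes per second ("punctuated every 30 milliseconds or so by a blip").
rate = 30.0 * np.clip(1.0 + stim, 0.0, None)    # spikes per second
spikes = rng.random(n) < rate * dt              # Boolean spike train

# Spike-triggered average: the mean stimulus in the 300 ms before each spike.
window = int(0.3 / dt)
spike_times = np.nonzero(spikes[window:])[0] + window
sta = np.zeros(window)
for t in spike_times:
    sta += stim[t - window:t]
sta /= len(spike_times)

# Crudest decoder: around every spike, add in the average stimulus excursion
# that accompanies a spike, then compare the reconstruction to the stimulus.
estimate = np.zeros(n)
for t in spike_times:
    estimate[t - window:t] += sta
estimate *= stim.std() / estimate.std()         # match overall scales

corr = np.corrcoef(stim, estimate)[0, 1]
print(f"{len(spike_times)} spikes; stimulus/reconstruction correlation = {corr:.2f}")
```

With these made-up numbers the reconstruction tracks the slow wiggles of the stimulus reasonably well, which is all the sketch is meant to show: even the choice to compute a spike-triggered average already presupposes a theoretical framework about what the spike train is telling you.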
Bill, your commitment to theory and basic science is so well established. But I want to ask, with regard to this book and your capacity to reach a much broader audience, it’s clear you want to keep things like clinical or therapeutic value or biomedical engineering or all of these things—I know you want to keep these things at arm’s length personally, in terms of how you establish your own research agenda. But if a neurobiologist picks up this book and sees clinical value in it, or therapeutic value in it, what is your reaction to that, and how might that change your own understanding about what you are attempting to do?
Well, it doesn't change my understanding about what I was trying to do. What it does is provide data on the sort of complex interrelationships between basic science and—between understanding the world and changing it.
Good.
To remind ourselves of a well-known political philosopher.
[laugh]
It is worth going back to see—he was actually talking about epistemology, not about politics, when he said that, but anyhow.
[laugh]
So yeah, I know that a certain amount of what’s happening in the world of neural prosthetics is grounded in the things that we were doing. It has gone way further, and what I would say is the community that was influenced by the work, particularly on the idea of decoding, if you ask where is that problem most alive today, it’s in the community of people who are trying to build brain-computer interfaces as tools for assisting the handicapped and disabled. And that never would have occurred to me.
And that’s gratifying to you personally, but it doesn't necessarily make you want to jump into that world yourself.
That’s correct, yeah. And I think in fact it’s probably true that my impulses are all wrong for that. I'm not an expert on how that field has developed, but I think that, if I remember back to early discussions, it was much more about how if we understood more about the nature of the neural code, we would be able to generate—we’d be able to do a better job of reading out the information that was there and using it to—if I could read out from your motor cortex your intention about how to move your arm, then I would move the robot arm in that way. True, but it’s also the case that if everything is intact, I'm also constantly adjusting my internal model of the mechanics of my arm.
And so if I were to substitute—so for instance, what command do I have to send to my arm in order to pick up this coffee cup? Well, if I put a weight on my arm, I have to send a different command. But I can do that. And in fact, if you put a weight on my arm and I don’t know about it, the first time I do it, I get it wrong, but very quickly, I learn to deal with it. So if I now transfer to the problem of brain-computer interfaces, I read out the signals from the brain, I build an algorithm for decoding your intentions about how to move your arm, and I move a robot arm. And of course, my decoding scheme is wrong; I don’t exactly know what your intentions were, and so the arm goes to the wrong place. But you know that it went to the wrong place, and so the brain can learn—the same way that it learned how to do it with your actual arm, within reason, you can learn to do it with a robot arm.
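A toy sketch of the learning argument just described, not anything from the brain-computer-interface literature or from the speaker's own work: even if the decoder that maps intentions into arm movement is miscalibrated, the user can see where the arm actually went and adjust the next command, so the miss shrinks over repeated attempts. The decoder matrix, the target, and the correction rule below are all invented for illustration.

```python
import numpy as np

# The decoder the engineers built: it would be the identity matrix if it read
# out intentions perfectly, but here it is deliberately miscalibrated.
decoder = np.array([[0.8, 0.3],
                    [-0.3, 1.1]])

target = np.array([1.0, -0.5])   # where the user wants the arm to end up
command = target.copy()          # first try: act as if the decoder were perfect

for step in range(15):
    arm = decoder @ command            # where the robot arm actually goes
    error = target - arm               # the user observes the miss...
    command = command + 0.5 * error    # ...and nudges the next intention
    if step % 3 == 0:
        print(f"step {step:2d}: miss = {np.linalg.norm(error):.4f}")
```

For this particular (invented) decoder the correction loop is a contraction, so the miss decays toward zero, which is the point of the anecdote: the box drawn around "coding" for the basic-science problem leaves out exactly the learning that makes the engineering problem tractable.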
And so in some way, I think the impulse many of us have, to treat the problem of neural coding as a problem that is maybe relevant to how real brains work but that has an abstract formulation—it has a box around it, so that we can try and make progress—maybe that box is in the wrong place for the biomedical engineering problem of building prosthetic devices. Because the role of learning is vastly more important. I mean, we all know that these structures, the coding strategies of the brain, are plastic and so on. But for convenience, we try to separate the problem of learning from the problem of coding. At least we did back in the nineties. And maybe that’s not a productive way to think about it, if what you're trying to do is build brain-computer interfaces.
So in that sense, not only am I not attracted to the problem because I have this aversion born of peculiar history for things that are medically relevant, which I don’t hold up as a positive—I mean, it’s just a fact. It’s probably a failure rather than a—I don’t hold it up as ideological purity. It’s just true. I have that—it rubs me the wrong way. It’s not only that I wasn’t interested in those problems for those reasons; it’s even possible that the kind of thinking I was doing would have been badly suited to making progress beyond having said, “Oh, it is possible to look at discrete sequences of action potentials and read out continuously varying signals.” And actually that can be extraordinarily precise. And so that was one of many stimuli to people saying, “Oh, well, maybe we can do this with a real person sitting there, and move a robot arm around.” But after that, probably it’s maybe just—it might be just as well that I wasn’t interested. I don’t know that I would have made progress.
Being given the mandate when you started at NEC—“You have ten years; go accomplish great things”—that is impossibly vague and who knows what the measure of success there is. Except for yourself. To what extent do you feel like your time at NEC—that you fulfilled that mandate?
Um—
And what feedback might you have gotten from NEC along the way? I mean, are there end-of-year reviews? There’s no tenure review, right? There’s no “Rate My Professors.” What feedback are you getting?
They would review you every year. Eventually I think we convinced—did they review the senior people every year? I don’t know. They set our salary every year. At the beginning, [laugh] they had bonuses, which I think I said openly in some meeting makes sense if you're selling cars. I'm not really sure I understand—so much of what we do integrates, right, that every once in a while you can see that somebody did something in this small interval of time that’s gonna—and even that—you think about an experimental discovery; do you really make experimental discoveries in one year? You have to build the experiment, right? [laugh] So I don’t know. What was weird about it was on the one hand, they were saying that one of the things that was really important about the Institute was it was going to provide long-term, stable funding for basic science. And on the other hand, they said, “We're going to give you a bonus every year, depending on your performance.”
[laugh]
Well, I don’t know; these views don’t quite go together. But that’s because we inherited a certain amount of structure from our corporate sponsors. So yeah, they have the usual way of telling you whether they think you're doing well, which is how big your raise was. Promotion was possible, so there were research scientists, and there were senior research scientists, and there were fellows. Research scientists actually had a provisional period. So there wasn’t tenure, but you went from having a finite term to having an indefinite term. I was hired as a senior research scientist, and at some point in there, I was made a fellow. I had no complaints about how I was being treated.
In terms of feedback, I would say the most important source of feedback that I got—let me be careful. I valued—the senior people, people who were senior to me, they were an accomplished bunch, and I respected them a lot. And so the one-page evaluation you'd get at the end of the year or whatever had content, and I usually thought they were pretty much on the mark of, “This was exciting, that we don’t quite see where it’s going yet.” Mostly I think the view was they'd made their bet, nothing dramatic happened to [laugh] convince them that they'd bet incorrectly, and it was—I look back on the period as being, on the one hand, one in which I just had an enormous amount of fun, day to day, doing science, and on the other hand, also thinking that there’s this sort of long uphill battle in the larger community for the kinds of things I was doing to be taken seriously by everybody.
It was enormously important that a small number of people whose opinion I really cared about took things seriously. So locally, the most impactful one was Libchaber. And so he was on the one hand—so this is something that the most articulate version of it, I heard from Larry Abbott when we were talking once, and so I want to give him credit. I don’t think it’s unique to him, but he was the person who crystallized it for me. Which was, when we're growing up, assuming that we're not to the manor born, what matters to us is that the senior people whom we respect take us seriously enough to spend time talking to us, on the hypothesis that their most precious resource is their time. So in some way, if people who have accomplished things that you admire and whose taste in science you respect, independent of their view of you, if they want to spend time talking about what you're thinking about, then you're succeeding.
And so that was always true with Albert. He was always interested in what we were doing. And I say we because almost everything at that point was Rob and I together. There were some things I did independently of Rob, and similarly he independently of me, but the whole program was something we were doing together. That meant an enormous amount, especially by contrast with my experience in Berkeley. I think what I learned was—I think a reaction to the situation at Berkeley was that, “Well, since I'm striking off in a new direction, what do I need a mentor for? I know what I want to do. And I'm grown up enough to do it.” And that could be true, but it’s still true that it’s immensely valuable to have somebody who has seen new ideas emerge, who has shepherded new ideas themselves, talk to you about what you're doing and why it’s interesting.
I would say that my interactions with Albert, we became quite close, I think. Since you weren’t shy about starting with personal things, let’s have a kind of parenthesis here. As is a well-known part of his history, Albert grew up in Paris. He was a kid during the Second World War, was hidden in the south of France, so has the sort of first-hand experience of the Nazi occupation of France as a Jewish kid. As you extracted from me in our first discussion, my father grew up in Paris, survived the war there, had many related experiences. I mean, as an American academic, how many people do you meet who had related experiences? It’s not that large. First of all, not that many people survived.
And in fact, there was an extraordinary dinner—not long after we had moved to Princeton, my father came for a visit. And we invited Albert and Irene Libchaber to join us for dinner. And at some point, I think—I think the stimuli were gentle, to begin with—the conversation turned to the period of the war, and who survived in which family, and who didn't. You've probably seen these conversations. It is of course a very personal and painful set of topics. And then people who share history, even if they don’t know each other, somehow find it very easy. So that’s very interesting. And so, to some extent, I had the privilege of just watching this.
And Irene was—I think she was ready for the conversation to be over. I honestly don’t remember exactly her personal history during the war. And she said something about “speaking of difficult things.” And there was a pause. And then Albert looked at my father and he said, “You know, I grew up—I spent from birth until”—whatever it was—“growing up in Paris, minus the years when I was hidden in the south, with priests.” The family was very embedded in the Jewish community of Paris. “I never met anyone who survived staying in Paris. How is this possible?”
And this led to my father saying something which he would repeat but I had never heard him say that before. And he said, “Not enough trains.” And Albert said, “What?” He said, “Well, there weren’t enough trains. If they had more trains, they would have gotten everybody.” And so this was, for me, one of the—I mean, eventually, my father and I would have many more conversations about this, but this was one of the first—it was like I now had an entrée to conversations with him about his experiences in the war, as an adult, which I hadn’t really had before. Because I don’t think I had—I guess categorizing it as survivor’s guilt is a little trite, but there cannot be a narrative of what happened in which you survive because you did something right, because that would suggest that the people who didn't survive did something wrong. And all of a sudden, that was clear to me, as a result of this interaction between Albert and my father. Now, again, as with my crediting Larry for articulating the importance of having conversations with the people you respect, I'm not saying that this was original on Albert’s part. It is true that Albert has had over the many years that I knew him, a knack for getting people to open up about things, and for crystallizing things. But this is a component of the closeness that we had over that period and the years that followed that I can’t discount. And let’s see, what, he’s born in—Albert is now 85, perhaps, something like that? My father would be 101. So Albert is actually—
Half a generation.
—of a generation that could be my parent, but because of my father’s complicated path, he was half a generation older, still. So there was some interpolation going on here. Anyhow, so Albert and I became very close. And some of it is this shared family history, and then there was the delight in science. So to circle all the way back, Albert was always very supportive. But he was never satisfied. So there was no—what’s the saying?—“Except for being called a mensch, there are no compliments in Yiddish.” Right?
[laugh]
So everything has an edge, right? And sometimes that edge would be articulated very clearly. And so when you ask, “Did we succeed?” I hear some of that, and I'm not able to give an unambiguously positive answer. I would say that in our various enthusiasms, we left some things unfinished. Now, we're all still at an age where we can be productive, so I think that some of the things that were left unfinished, we could still go back and finish, and it would be worth doing, and then I would be happier. On the other hand, we launched lots of things, and that set of ideas, I think, was incredibly exciting.
We've talked about my naivete and lack of careerism, but if you ask did we succeed, one answer I can give is, in 1990, I was an assistant professor in a first-class physics department that wasn’t sure that what I was doing was physics. After ten years at NEC, I was invited to be part of a great physics department, which obviously had decided that what I did was physics, and in fact invited with the expectation that again something would grow around me—
And Bill, just to editorialize a little on that narrative, the department you're talking about, Princeton of course, was perhaps even more famous—or infamous, if you will—than Berkeley, for earlier rejecting the idea that biophysics was real physics. This is not accidental.
Right. So when John Hopfield was at Princeton the first time—he has told this story on many occasions, so I think it’s okay to repeat it—that when the offer came from Caltech, he went to his department chair—if I'm not mistaken, it was Val Fitch at the time—and Val said, “It might just be better if you went. Maybe you'd be better off—would be better off—” I think there was an enormous respect for John as a scientist, but there just wasn’t the vision of how it fit.
So yeah, I can remember during the period at NEC in one of those semesters where I happened to be teaching—I can’t remember now when this conversation happens. Maybe it happens later when I'm actually on the faculty and we see each other on some occasion. I said to David Gross, “If I had thought about going to Princeton when I was an assistant professor, I would have my head examined.” Right? It would have seemed completely implausible.
[laugh]
He said, “Yeah, well, you know, things change.” [laugh] So I don’t know. I feel like part of what was extraordinary about the NEC period was that on the one hand, there were close associations with people who were extraordinarily accomplished and respected as senior figures. On the other hand, it was as an institution, nowhere. And so it was what we made out of it. And that combination was incredibly freeing. Because it sort of got both worlds, right? Like I said, at Berkeley, where I had no mentorship, I convinced myself that I didn't need mentorship. And I think that was wrong. I don’t think I probably—I can look back at things that I did that were a waste of time, and perhaps, maybe, with mentorship, I wouldn't have done that. Things that I thought might turn into interesting collaborations that were sort of more of the same, and didn't pan out to be very much. But it’s too easy, right, to do more of the same.
Again, so somehow at NEC, there was this freedom born of it not being a place with a storied history and, “Are you as good as the previous generation who came through here?” or whatever. You know, you don’t have the portraits on the wall of all the people who have gone before you. On the other hand, you had these people around, either because they were there—Boris Altshuler would come after some time. Peter Wolff was one of the founding figures. So Peter, again, had a long career at Bell Labs before going to MIT.
I remember we were discussing—just to close the loop, we were discussing the process by which we would make decisions about the research scientists who had been hired, and the provisional status—how would we decide whether to make them just regular research scientists? Again, there’s no tenure, but also, people weren’t really being fired, so whatever—as I say, there’s a difference between being told, “You've got three years,” from “Congratulations, you work here now.”
[laugh]
The truth is, in both cases—I think the guy who has the contract that says three years cannot be fired tomorrow, whereas the person who was told, “Congratulations, you work here now” can be fired tomorrow. But, okay, there are community standards, right? So there was some notion of this transition from provisional to permanent member of the research staff. And there was a question about how to do it, and this was one of several occasions on which Peter’s wisdom of having been in fantastic places, and of course having done exciting science, was so important. And I can picture him leaning back in his chair and saying, “I remember when we had to decide at Bell Labs whether to make John Hopfield a permanent member or not.” And there was a long pause, and he said, “It wasn’t completely obvious.” So it was a reminder not—it wasn’t a dig at John; it was a reminder that you look back from—you look back on careers that have changed multiple fields, and you think, “Well, it must have been obvious that this is a person you wanted to hold up.” But at that moment, he was kind of drifting around, and it wasn’t clear what he was going to do.
The other moment that—an interesting thing, by the way, about NEC, I should say—scientifically, there was this combination of freedom from history but access to people who had contributed to history. That was special. It’s hard to do that, right? The people who have shaped the history of science have a tendency to congregate in places that have a deep history themselves. So access to that comes with baggage. I think that was part of what was wonderful about the ITP in Santa Barbara at its beginning. But UC Santa Barbara was not taken very seriously as a place. It was known for being a party school. But then this handful of people—actually it was more than a handful—had congregated there and built this thing which attracted more people, and so it was this very special place, sort of both young and excellent. And that’s a hard combination. So NEC had that scientifically, and I found that incredibly attractive.
As part of the inaugural class present at the creation of NEC, I’d like you to reflect on your legacy as you continue to look at what NEC has gone on to do after you left, both in terms of not only what you did, but how you did it. How was your work and the work of the group, how did that sort of set the tone for what NEC would go on to do as it would continue to reimagine itself as this industrial lab?
So what I was just about to say, actually, was that going to NEC was not only an opportunity to have this interesting combination of factors in my scientific environment, but it was an opportunity to see something about how an institution gets built. To be there when a lot of the decisions get made about structure and organization and so on, and to contribute to some of those. To listen to people who had wisdom about these things, born of experience. It was an interesting thing to work for a company. I think academics have—we have a variety of views about these things, often, at least in my own case, not grounded in any real data. [laugh] We grow up in a certain environment. We have ideas. I would say—and furthermore, they're ideas which are in common circulation.
For instance, there’s the idea that in the real world, in contrast to academia, success is objective, because either companies make money or they don’t. Well, so let us leave aside any questions about whether the market values things correctly or not. And let us assume that by definition the market values things correctly, which I think is the view of certain economists. There’s still a problem, which is the problem of credit assignment, inside the company. So I have to convince you that this particular person, or this particular group did something, which is what led to this thing being profitable. And that problem is the same problem as assessing scientific impact, right? Today a field is transformed from what it was a decade ago. I mean, we were just having this conversation, right? Did we succeed? Well, I don’t know. I think the field looks different than it did when we started. I think we had something to do with that. How much is us and how much—and I contrasted examples, right?
I think in the more applied side, when you think about brain-computer interfaces, I think we provided some inspiration. And maybe the inspiration was to the people rather than to the work. There were some people who were attracted to those problems because of what we did, and that was the important thing, rather than the particular result that we had. I don’t know. In other areas, I think that just showing that you could take these abstract ideas that seemed natural for physicists and pull them all the way down to the point where you use them to describe particular real biological systems, and the results come out, and they look kind of interesting and reproducible and regular and orderly, that in some sense showing that it is possible to do physics in this fully functioning complex organism, that was the important result. But I don’t know. It’s complicated. So I think that the credit assignment problem in corporations is no easier.
And so I saw how stories get told about the utility of various things even on the more applied sides. So I learned something. I got glimpses of how an actual corporation functions. For example, I learned that—we were told that last year, NEC made—and I'm just going to make up a number, because I don’t even know anymore what the numbers are—three billion, 472 million, 393 thousand, 423 dollars and 67 cents. Now, you realize that NEC the corporation does business in 50 different currencies. And there is a strategy—there is a procedure for arriving at this number. But it is clear to anybody who has a physics education that this number has too many significant figures. There’s no way that [laugh] the income of a multibillion-dollar-a-year corporation can be specified to eleven significant figures. That doesn't make any sense. So there’s something fictional about accounting on that scale. Which I got a glimpse of. And lots of other things.
Again, a Peter Wolffism about—the whole issue of intellectual property and how it should be protected. The lawyers were always pushing for more restrictions. And again, Peter Wolff came back with, “Sometimes I don’t know I've had an idea until I talk about it.” So how do I—this of course is a response to, “Well, you shouldn't give a talk about something unless there’s an internal technical report that the lawyers have been able to look at, to figure out whether it has any potential value.” “Well, I don’t know what I'm going to write about until I go and talk to people.” So I got a glimpse of lots of these things, and that was very interesting.
The specific question you asked about—does what we did play a role in shaping what NEC would become? Well, unfortunately, NEC kind of fell victim to the problems that I think many industrial research laboratories fell victim to. And again, I can’t remember whether I've said this in so many words, but if what you're interested in is your quarterly report, then don’t invest in basic research. There is no way that spending money on basic research is good for your quarterly report.
[laugh]
No scenario. I mean, we can discuss whether investing in basic research is good for the long-term health of a corporation. And I'm not even sure I know the answer to that question, but that’s a discussion we can have. We don’t need to have the discussion about the quarterly report; the answer is no. So as more of corporate governance shifted toward being interested in quarterly performance, as the generation of device physicists and engineers who had built NEC from the local telephone equipment manufacturer into the multinational giant started to retire and be replaced by MBAs, it was clear that push was going to come to shove. Something was going to happen. And, I would say that I learned that this can be managed very badly, as it was, at NEC.
It was not—so, in, gosh, must be early 2001—so there were some transitions before this that were a little worrisome, but in 2001, I think—no, it’s earlier than that—anyhow, I don’t remember. Anyhow, let me give you one snapshot, so you get a sense. The president of the Research Institute, who by this point is Dave Waltz, a computer scientist who unfortunately passed away relatively young, as did Dawon, actually, our first president. Bill Gear was in between. He managed some rearrangements, not all of which were very attractive, but okay. So Dave Waltz calls all of the scientific staff into a meeting and gives a speech about how, you know, once upon a time, we were given free rein, and we could do whatever we wanted, and all that mattered was the quality of science. But now, you know, brave new world; we need to explain why what we do is relevant to the company.
And by the time we get back to our offices, there are the red envelopes marked confidential sitting on our chairs. And if you opened one of these, it says that by this date, you should produce a letter or report explaining why your particular research program going forward is relevant to the business of NEC. That it is important that you deliver it by this date, because that is the date on which we will begin making budgetary decisions. And, a copy of this letter has been placed in your personnel file.
So my first reaction was, well, it’s time to leave. And the second reaction is—and that was correct. The second reaction—well, something which would evolve out of that is that—and I think this had become a little clearer with some earlier upheavals—those of us who felt confident about our ability to form a reasonable exit strategy had the responsibility to use some of our reputational capital to push the management to give time to people for whom spending an extra year of being well-supported would make a difference in their ability to find the next job.
And I went home and started writing letters to my colleagues, inquiring about other positions, of course. But my first thought was that actually I should just ignore this, because this is such nonsense. I mean, we're adults, right? If a reasonable manager would have said, “This is what we're being told. We are a group of people who have been very well supported by our corporate sponsors. They have asked very little from us in return. They are now asking for this”—there’s two broad views. One is, this isn’t for you, in which case we have a responsibility to you to make sure that you exit gracefully. And the other is that you see a future for yourself in this plan, in which case we should all work together to make a coherent response to this request from our corporate sponsors. Instead, they threatened to fire all of us if we didn't write something that they found satisfying.
And so because of that bad management, my first reaction was, “Well, I should just ignore this.” Then I thought about it a little more and I said, “Well, wait a minute. For ten years, I've been supported very well by this company. However clumsily they're doing it, they're asking me to explain why what I do—and they've hardly ever—they've almost never asked me for anything that is useful, directly useful, to them. Many academics consult for companies. I work for a company. They could ask me to do something useful. If they asked me to spend five days a week doing things that are useful, then maybe that’s not the job I signed up for. But that’s my choice. It’s not unreasonable for them to ask for something. So, maybe I should take this request seriously.”
So I go to the vice president, who was Jim Chadi at the time, and I asked him. I said, “Well, the truth is, the only thing I know about what NEC does is what I read in the newspaper, because there is no path for informing scientists who work at the NEC Research Institute what NEC does, and more importantly, what it is planning on doing over the next five years, so that we could figure out whether anything that we are doing would be relevant. So it’s a little weird for you to ask us to explain how what we do is relevant when you have no history of having told us what it is that the company does. I mean, I know that they make monitors. I know that they make chips. I know that they make consumer electronics. But, you know, I don’t know.”
Literally he’s got a stack on his desk of papers that’s two feet high. He reaches into the middle of it, pulls out a report that’s got “company confidential” stamped all over it, and says, “Here, this is the vision for the future.” And [laugh] I looked at him, and I'm like, I don’t know, “There’s two feet of paper here. Why should I believe that you pulled out the right thing? But, okay, I will take this exercise seriously.”
And of course what I learned was that in fact through my research experiences over many years, having been interested in many different parts of biophysics which touch different parts of physics, in fact I knew something that was relevant to many parts of what the company was doing. Should they deign to ask me, I don’t—since I didn't know before this all these things the company did. So I wrote this report in all seriousness. I don’t know if I still have a copy of it somewhere. And so I wrote it, and I wrote a cover letter. And the cover letter said, “This is in response to your request. I would like to be sure that a copy of my report goes into my personnel file, and that you should acknowledge its receipt, and that I regret that conversations among colleagues have become so legalistic.”
I never get a response to the report that says, “Oh, item four, please call so-and-so at this division of NEC in Japan and talk to them about this, because in fact, they really do need advice on this.” So I have no idea what they did with all these reports. But Dave Waltz, the president of the Research Institute, calls me up, and he says, “Bill, what do you mean, you regret that conversations among colleagues have become so legalistic?” And I said, “Dave, you threatened to fire me. You said, ‘Write this report in time for us to make budgetary decisions for next year, and a copy is going in your personnel file.’ What did you think that meant?” “Oh, no, no, no. That’s not what we meant at all.” So, if I had any doubts about whether it was time to leave—
[laugh]
—I mean, would they have eventually fired me? I don’t know. I remember from somewhere around this period, and I don’t remember exactly when, we were invited to dinner at John Hopfield’s house. Marvelous evening. And among other guests is my old friend David Tank. And David is at Bell Labs, and has been, this whole long period. David and I almost met when I visited Cornell on my way to Europe after finishing my PhD, but he had left a couple months before. We did eventually meet and have been friends ever since, and now colleagues and collaborators, I'm glad to say.
We traded stories about how things were going in our respective industrial laboratories. And David had a marvelous summary. He said Bell Labs at that moment was a great place to work if you could get a job anywhere in the world. On the other hand, he did not feel that it offered to young people the kind of support that he got when he first went there. And so as a result, despite being a department head and having authorization to search for young people, he wasn’t doing it. And so I think we all knew that meant that he was not going to stay there very long.
And so it was a different view of how things were—I mean, Bell Labs was obviously a much larger enterprise than NEC, and so the unraveling was more complicated. But yeah, and it’s funny, I would say that at Berkeley—so when you ask how did we shape what NEC would become afterwards, the answer is that NEC decided to become something different. And I think that that for instance, there are many foundational—there are many deep and interesting problems in computer science, broadly defined, that are of much more immediate relevance to high-technology companies, and the sort of distance between—in the same way that at some moment in condensed matter physics, the methods for making ultrapure high-mobility heterostructures were useful for building things that went into chips that were actually useful, but also made it possible to discover the fractional quantum Hall effect—the distance between the things that were profoundly interesting for basic science reasons and profoundly useful for the corporation—there was a period in which that distance in condensed matter physics wasn’t very large. And so although people might segregate themselves, in the view of the company, it was much more fluid.
So similarly I think, at the point when we started leaving NEC, there were problems in computer science that had that feel to them. And so I think that the company has continued to invest in those. I think they have some presence in quantum computing. The truth is that, at least for me, I very quickly—there wasn’t much there for me to—it took some time for the group of us who were all there together to exit. But then there wasn’t much left for me to visit. So I don’t really know what happens there. And I think every once in a while, I get a glimpse, but most of the things that attract my attention are on the computer science side. Although I believe there are still some physics things, there isn’t anything immediately connected to biophysics, as far as I know.
Bill, let’s set the stage for what obviously is going to be our next talk, and that is your entrée to physics at Princeton. So, let’s go from this conversation that you have with David Gross at the beginning of this NEC enterprise—how did this story develop between this very non-committal offer that you will be involved in some way with the community at Princeton, to what would become this proper and solid offer ten years later?
Look, I think that in the same way that I didn't have any thought of one being connected to the other, I think that in practice, there wasn’t. That is to say, I think that over the decade that I was at NEC, I want to say three times I was appointed as a visiting lecturer so that I could teach for one semester a biophysics course. I had during that period maybe four PhD students which was great. These are all wonderful things, right? I'm not complaining. That was plenty. Four PhD students over ten years means there was always one around. And in one case, we hired as a postdoc somebody who, when he had been a PhD student, actually sat in my biophysics course that I had given at Princeton, so that was his entrée to the subject, and that’s how he decided to change fields. So it had an enormously positive impact, one person at a time, but there was no notion on my part that this was part of a long-term plan. I would leave it to my senior colleagues, where that—at what point that idea crystallized for them.
What I can say is that on the red envelope day at NEC, I sent notes to colleagues at a number of institutions. Literally that weekend. I went home and we talked about, and—you understand, right, that at the moment, where you start writing to people about looking for a job, that also means kind of throwing your life up in the air about where you might end up. So, that became—obviously, we weren’t actually doing that yet, but it was something that had to be talked about at home. So I don’t know how many people I wrote to.
My basic—I can’t remember the exact words, but I think what I said was, I had always imagined that sooner or later—I mean, I would not spend my entire life as a research scientist at a corporation, and that sooner or later, I would go back to being a professor again, because it feels more natural. And I think what I have just learned is that sooner or later is going to be sooner, rather than later. And so I am curious what’s possible. And I spent some significant component of my time over the next few months talking to people and traveling. And some of the people that I wrote to were friends at Princeton. And things happened.
I think that things happened quickly enough that many people assumed that there had been a plan. Again, I'm blissfully unaware of these things. Nobody had ever said—yeah, nobody had ever said anything. Again, I don’t even remember—I guess the person who played a crucial role was Curt Callan, who was department chair at this point, as well as a friend and collaborator. And his own interests in things related to biophysics kind of grew during a sabbatical where we spent quite a bit of time together, although the thing we actually managed to get done was not especially biological. But so, right. I guess to the extent that there were thoughts of doing anything more in the area, then I might well—the list of plausible people wasn’t that long. Hopfield had moved back to Princeton, but in the molecular biology department, where he had quite some impact in their hiring more sort of biophysics-y people, particularly in neuroscience. More quantitative, systems-oriented neuroscientists. So no, they were—the sort of informal relationship during the decade at NEC I think in my mind is wholly independent. It’s not like they were trying me out, or it was a ten-year-long interview or something. I never had that sense.
Bill, it strengthens the case—and I think this is a great place to pick up for next time—that because this was not a tenure interview, because you were not sort of building a case for yourself at Princeton, the micro-story here is that the powers that be at Princeton were paying attention to what you were doing, and they were becoming in part convinced that in fact, as you said earlier, biophysics belonged in a physics program. And so I think for next week, we can pick up on that micro/macro distinction, the extent to which your hire at Princeton reflected a broader trend in physics to accept biophysics as real physics.
Yeah, we can try it. [laugh]
[laugh]
Yeah, I don’t know. Let’s see. I should say also that an important component of that period at NEC was that—I talked about the way in which neuroscience had embraced ideas from physics, from math, and so on, and there was the emergence of a more quantitative computational and even theoretical neuroscience as part of the mainstream. So in the early nineties, so not long after I had gone to NEC, the Sloan Foundation decided that they would try to stimulate growth in this area. And they did that by establishing six centers around the country, mostly in places that were chosen for the quality of the neuroscience in the hopes that everything else would follow. And so one of those was built at UCSF.
And my colleagues there asked me if I would be willing to be a kind of visiting member of their center. And so this was interesting, because—and so what would be my role? A large part of what they were going to do with the money was to support postdoctoral fellows who came from, let’s say broadly speaking, the more mathematical sciences, who would be dropped into a highly interactive but almost entirely experimental environment of systems-level neuroscience. And part of what I would do is my record of collaborating with Rob showed that I had a view of how this kind of interaction between theory and experiment in neuroscience could work, and also, as we all joked, I could help them read the CVs. Because these people were coming from a different world, right?
So I started to do this, and it had many positive features. One is it was a different collection of experimentalists to interact with, with a center of gravity which was more toward the mainstream of neuroscience. Whereas with Rob, Rob was a physicist. And some of those colleagues were really marvelous and became very good friends. Interestingly, around the time that this was established, UCSF had a tradition of an annual symposium on neuroscience in which they would invite three speakers, and they would usually ask one of the younger faculty members to organize this in the hope of keeping the view fresh. And UCSF had just hired Ken Miller, who had a long meandering path but had really done remarkable things related to the development of circuitry in cerebral cortex from the theoretical side, although he also was running an experimental group at this point. And he put together a program for a day which was me, David Tank, and a fellow named Roger Traub, who was at IBM. Roger actually is an MD, if I remember correctly, but also has a rather strong dynamical systems background. David and I were both physicists. It was amusing that a symposium on neuroscience at a medical school presented three people who were working at industrial research laboratories.
[laugh]
So it was an interesting day, right? And it was marvelous. That was the first time that I met Allison Doupe, who was a remarkable experimentalist who worked on songbirds as a kind of model system for thinking about the processing of complex sounds and learning vocalizations. It was an introduction to a different part of science, and Allison and I would become very good friends. That visit and then this thing with the Sloan Center would lead to me visiting quite regularly. Of course, this was a very attractive thing for me to do, because my parents still lived in San Francisco, and we had many friends in the Bay Area. So not so long after having left Berkeley, there was kind of an excuse to go back with some frequency. So this continued through the NEC years and was an important part—a way of—so sometimes I think the worry of going to NEC, to the empty office building in suburban New Jersey—it really was, okay? It’s not just that it sounds good—
[laugh]
[but it] was a funny provincialism, right? What if your world actually ended at those borders? At the walls of that building. I think that really would not have been good. And so that was part of the lifeline to Princeton—that, well, if you interact with students, that always opens up your world a bit. So I could do that, at least some of the time. I ended up going to teach in the summer school in Woods Hole, which Rob and I would eventually spend five years directing, and it was a fantastic experiment in interdisciplinary education, and also it’s a different place, with a different community. You invite 30 people to pass through to lecture in the course. It was again a broadening experience. And then I also had this thing of going out to UCSF to see this community of neuroscientists, but of course I was also—I was at UCSF, and so I would see people who did other things that I knew for other reasons. I was in the Bay Area, and so sometimes I would see colleagues in Berkeley or Stanford, and of course there were personal connections as well.
The one thing that I would say was systematic in my thinking about how to function at NEC was to make sure that my world didn't end—my world wasn’t confined to that office building. Because I think that would not have been good for me. I don’t know, maybe to play the contrarian, as Albert Libchaber would have—I can imagine him, hearing me saying this, and he might respond, “Yeah, if you had spent more time inside the walls of the building, maybe you would have been more focused and finished more things.” And he might be right. I don’t know.
Also at UCSF certainly, there were things that I tried to do that didn't work. And was that knowable in advance? Maybe. On the other hand, they involved people that I liked very much, and I learned a lot in the process. And maybe not everything that you try works, and that’s just—I mean, I don’t think anybody suffered as a result. That is to say, we all managed. So yeah, so that’s part of the story that I had left out, just because we didn't get to it. I wanted to be sure to mention it.
Also, this community of half a dozen centers around the country that were supported by the Sloan Foundation and later by the Swartz Foundation, that community formed the kind of core for this more theoretically oriented neuroscience. So being a part of that was very valuable. And it was through that that I got to know people like Larry Abbott and Eve Marder better. I don’t remember actually the first time I met Eve, but I remember—one of the first meetings—there would be annual meetings of this group, and there were a series of talks which certainly made use of ideas that I was pretty sure my colleagues and I had introduced, with zero reference. On the other hand, all the talks were given by young people. And so I have strong feelings about how you treat young people doing talks. You do not criticize them. Certainly, you don’t criticize them in front of an audience, and you certainly don’t criticize them for not referencing you.
[laugh]
That’s really in poor taste. But I will admit that that was my thinking at the end of the session. And Eve astonishingly came up to me afterwards and, unbidden, threw her arm on my shoulder and said, “So, what do you think?” And I said, “Well, to be honest [laugh], nice work, but I'm a little surprised by the fact that all the references start a little bit after the thing that we did.” And she said, “Well, you've succeeded.” I said, “What do you mean?” She said, “Well, the people in this generation have internalized those things so much that they can’t imagine that there’s another way to think about it.” And so I don’t know whether she was perhaps being generous, but this is part of this issue of, did you succeed or not—how do you tell? You have to discern for yourself the paths. And at the end of the day, maybe what matters is whether you feel like you can discern those paths. And if other people saw different paths, that’s fine.
And then maybe let’s not overanalyze it and Princeton physics saw a path for you to join them. That’s a pretty clear marker of success.
Yeah. Again, it’s important personally because it reflects the opinions of people that you respect. But the more I think about it, the more I feel like, look, you need people outside to be supportive, because otherwise you don’t have a job. Once you have the luxury of having a good job, I think you have to admit that letting things be defined by other people’s opinions is hazardous for your mental health. Right?
[laugh]
I realize—sorry, I—
No, that’s great.
—that we've gone on for rather longer than—
No, it’s great. The more important thing there is that I think that’s a great narrative—I mean, we did it. We covered NEC, and we can pick up next time on your entrée to Princeton, and that larger question on the state of biophysics more generally during this period.
Okay, good.
Bill, thank you so much again, and I will see you next time.
[End 201028_0355_D]
[Begin 201110_0370_D]
This is David Zierler, oral historian for the American Institute of Physics. It is November 10th, 2020. I'm so happy to be back with Professor William Bialek. Bill, it’s good to see you again.
Good to see you too, David.
I’d like to start just editorially commenting that it’s open for debate whether or not the writing on the wall was clearer when it was time for you to leave Berkeley versus when it was time for you to leave NEC. How might you respond to that? Obviously, these are very different times in your life, different career paths, different career transitions. But just in terms of orienting yourself in those moments, was the urgency to leave NEC similar to that of Berkeley?
You know, it’s funny—I'm not sure I've ever thought of those juxtaposed in that way. I would say that the idea that I needed to think about leaving Berkeley came as a shock. As I think I said, I went with some romantic idealization of the place. I think Berkeley is one of those institutions that, like many places, produces a kind of alma-mater-itis. You have this romantic view of the youth that you spent there, and you want to revisit it. And so the best thing in the world must be to go be a professor at the place that grew you as a student, because obviously it does such a wonderful job. It’s a very I think un-self-aware view, to which I subscribed, for better or worse.
I think that at NEC—so in that sense, things were perfect, but then the events that actually transpired really were surprising. And even then, I think my immediate reaction wasn’t that I needed to leave. I didn't know what to do. And then the offer from NEC came. With NEC, I think the idea that I would eventually leave was always there. There were some precursors that made it a less attractive place to be, not for me, but generally [on reflection I am probably underestimating the precursor problems]. And so I was surprised by many things. I would say the inability of the senior management to behave in an open and collegial way with the scientific staff surprised me. Even to deliver bad news. I mean, that’s part of the job, right? But somehow globally it wasn’t surprising, right? At some point, I was going to go be a professor again, somewhere.
Meaning that at Berkeley, you could have stayed there forever, but at NEC—
So let’s say I went to Berkeley thinking I would stay there forever.
But built into NEC was you were going to be there for a certain amount of time.
Yeah, I think NEC always seemed like a place that wasn’t forever. I mean, the idea—in some way, it seems weird that you're more easily seduced by the idea of going somewhere forever when you're a kid.
[laugh]
Maybe I should have known. I don’t know. Anyhow.
Forever is a lot longer when you're a kid.
That’s right. So what should or shouldn't be true is a different matter, but I think it was true that I went to Berkeley with the thought that I was going home, and would stay there, whereas I did not go to NEC with that sense of permanence. There was a horizon, and as we've discussed before, it turns out I was more or less right about the horizon: a decade. And it was an enormously productive period, and the idea that I would move on from it was not in and of itself shocking. And you know, did it seem urgent? I think it seemed [pause]—I think it seemed like one should get to work right away on figuring out what the exit strategy was [laugh]. Whether it was actually urgent to exit immediately, that was maybe less clear.
Bill, another juxtaposing kind of question—the offer from NEC came around almost serendipitously when you needed to leave Berkeley. What about the offer from Princeton? How well did that timing work out?
Well, when things got substantially worse at NEC, I, as I think I may have mentioned, sent emails to colleagues at a variety of places, right away. Again, thinking that while it wasn’t clear how urgent it was to leave, it also wasn’t clear how long it takes to sort these things out. By this point—moving gets harder as you become more senior. It’s a bigger investment for institutions. I was still reasonably young, but still. And it wasn’t clear what—the moment when you realize you have to look for a job is also the moment at which you start to think more explicitly about, what do other people think of me, what’s my standing in the community, and so on. And I must say, I didn't know. There were plenty of positive signals, but there were also other signals absent that I might have thought should have been there. I was not, for example, at that stage of my life, cited very often. I didn't know whether that was significant or not.
Bill, what about beyond you and more structurally? In other words, did you retain a sinking feeling that—one of the takeaways with the issue at Berkeley was the structural problem of biophysics in the physics department.
Yes.
To what extent had that changed, or not, over the ten years at NEC, so that when you were putting out feelers to a place like Princeton, famous for not taking biophysics seriously for too long, to what extent were you thinking about those structural issues?
Well, a sidelight that illustrates the structural issues in Berkeley, which I may not have mentioned, is that as I was leaving, it was also time for one of the decadal reviews of the department. I remember being interviewed by some of the visiting committee. And I think that the report that time had made some remark about how you'd think that in such a large department, one of the things they would do is to stake claims to new and developing areas, biophysics being an example. “It is our understanding that opportunities in this direction were missed.”
[laugh] Great use of the passive voice. [laugh]
Yes, exactly. I think it was actually constructed that way. This is not the genuinely famous report which would come a decade later, which was leaked to the San Francisco Chronicle and began, “The Physics Department at Berkeley, once the jewel in the crown of the University of California system, is in a state of genteel decline.”
[laugh]
So yeah, had things changed? Well, I didn't know. So for instance, when I was looking, I looked at jobs in—there was interest in physics departments. There was interest from interdisciplinary programs that were really centered in a more biological part of the world. There was even interest—I mean, I think it’s better not to list institutions by name—there was even interest from medical schools. At least in one case, the conservatism of that university’s physics department was in fact an obstacle to doing the right thing, although I don’t know. So I think the answer was that going into this, I didn't know the answer to that question. And all of a sudden, it mattered.
It was nice, actually—one of the things that industrial research laboratories have the power to support are—universities have an organization that is—so I actually don’t think that departments constitute a fundamental barrier to scientific progress, as is sometimes said, despite having some scars to show for this. I think rather, having broadly defined fields which have to wrestle with the location of their borders is a good idea. And some institutions will get it right, and some institutions get it wrong. And the fact that there are institutions that get it wrong isn’t evidence that we shouldn't do it this way. Although Princeton had this conservative reputation, the fact of the matter is that even after Hopfield’s departure, which was 1980, they always had people on the faculty who were doing things in biophysics. Bob Austin was there through and through, right? He was hired while John was there. He stayed. He’s still my colleague. And others came and went, right?
So on the one hand, you could say, well, why would you look to a place like Princeton which was so famously conservative? On the other hand, the fact of the matter is, they did have a persistent presence in the field when many other institutions did not. So, as I say, I knew that it was time to figure out how I was going to leave NEC. I didn't know that I should leave immediately. And so I wrote a lot of letters, and many colleagues were very generous in taking time to invite me out, pass me around to the relevant folks, and so on. There was at some moment the realization—sorry, so the thought, which I should complete, is that—not about me and looking for a job, but about the role of industrial research laboratories, whose structure is not constrained by traditional disciplines. So you could build a biophysics group, as Bell Labs did, and as we did at NEC, whether or not that was officially sanctioned by the top five physics departments in the country, or top ten physics departments in the country. And I think both places had really excellent groups in this field. And so for the ten years that I was at NEC, I think I didn't have to worry so much about whether this was physics or not. It was something that mattered to me deeply, but it wasn’t something I had to think about regularly. Because suppose all the other physicists at NEC decided that this wasn’t physics. Well, but we didn't have a physics department, right?
Right. [laugh]
So it wasn’t—[laugh] I don’t know, we could exist as a group of however many it was, at that point—four or five of us—and do our thing, and if everybody thought that that belonged in the same grouping as what the condensed matter physicists were doing or what the optics people were doing, then that was fine. And if not, well, that’s the way it goes. [laugh] So suddenly that issue—how much it mattered to the world doesn't depend on my employment status, right? And I think it does matter to the world. But suddenly it was an issue that mattered to me, personally, again, in a way that it hadn’t for a decade.
And so the worry—I must say, I felt very—so on the one hand, when you realize—even if you know in advance that you're going to do something that maybe is only going to last a decade, when you realize that you might be at the end of that decade and it’s time to move on, there are the usual anxieties and concerns. In particular, you're sort of throwing your family’s future up in the air, and it’s not clear exactly where it’s going to come down. Are we all going to move? Children are reaching ages that have relationships to the beginnings and ends of high school. Our son and daughter are two years apart, and so what do you—splitting somebody’s high school in half is not such a nice thing to do. So if we moved far away, would one of them go to high school in one place and the other one finish high school in the other place? I don’t know. We thought about all sorts of things. At this point Charlotte was, I think, already president of the Board of Education, and had started the project that would lead to rebuilding of all the public schools in Princeton.
Bill, what were the tenure considerations? In other words, not getting tenure at Berkeley plus ten productive years in industrial research equals what? Full professor? Associate professor? What kind of thresholds were you developing in your mind?
For the record, I didn't actually not get tenure at Berkeley. I left before they decided. [laugh]
[laugh] Nixon’s resignation in 1974. [laugh]
Well, no, it was a little better than that. It’s interesting how in the community, it is widely assumed that I did not get tenure at Berkeley. Which—
No, and that goes back to my initial insisting question about why not just retreat into biophysics at Berkeley.
Right, right. And it’s not clear how that whole process would have played out.
Well, that’s a counterfactual, but I think the question remains. Not having achieved tenure—
Right. So having left Berkeley as an assistant professor—let’s put it this way—four years as an assistant professor at Berkeley plus ten years at an industrial research laboratory, what does that equal? I had no idea. And at some level, it wasn’t up to me. [laugh] As it turns out, I don’t think anybody that I talked to had in mind anything less than being full professor. But I don’t know, could have been otherwise. I'm not sure. For example—I mean, this wasn’t relevant, because we weren’t looking to do this, but you know, there are plenty of European universities where people are associate professors for the rest of their lives, even quite accomplished people. So I don’t know. The thought—four years of being an assistant professor plus ten years of being in an industrial research laboratory and leading a group and helping to build things—it didn't occur to me that I would be offered a position without tenure, and I wasn’t. But exactly what the terms would be—again, that has to do with my standing in the community, which as I say, you don’t know. I mean, how are you supposed to figure that out?
I have a colleague who jokes that economists are very well compensated because they look for jobs a lot and move around a lot, and they don’t feel like—I think in our community, if you don’t really have a reason for moving, then mostly you’re wasting your friends’ time. And in some sense, the people whose time you waste is—the amount of time being wasted is in rough proportion to how close your relationship is, right? [laugh]
[laugh]
So the penalty—the cost falls on the people with whom you'd like to maintain a relationship. Whereas—my friend’s joke was that the economists, they just think they're contributing to the efficiency of the market by writing letters for each other.
[laugh]
So they don’t feel burdened by this.
Bill, was the Lewis-Sigler Institute, was that baked into the overall offer? Was that part of the package?
Yes. It wasn’t entirely clear what the Lewis-Sigler Institute was going to become, and what it would mean to be part of it.
So it was conceptual, at that point.
Correct. They were building a building. There was a notion of faculty being members. And— [break in audio]
There we are.
Welcome back.
Thank you. Sorry. Remind me where we were.
The Lewis-Sigler Institute was conceptual, at that point.
Right. And so it had been founded—well, I think in the beginning, maybe there was an idea of doing something very different, but certainly by the time I got to know about it, the idea was that it would quite explicitly involve faculty from multiple departments, including physics; that physics and computer science would play a substantial role alongside perhaps the more obvious biology and chemistry. Curt Callan, who was the chair of the physics department during this period—his term actually ended as my offer came. So he was chair, more or less, while I was being recruited, although Dan Marlow sort of finished the business. And Curt was, in his role as chair of the physics department, a very enthusiastic member of the sort of steering committee or whatever it was called, for establishing the Lewis-Sigler Institute.
So that must have been just generally beneficial to you, just in terms of getting yourself set up.
Right. The idea that the university wanted to make some initiative at the border between physics and biology—physical sciences and the biological sciences—I don’t need to send you the email saying that we just got disconnected—that was very exciting. I should add that there were a lot of aspects of the recruiting at Princeton that were very interesting. One was that the person who was the director of the somewhat nascent Lewis-Sigler Institute was Shirley Tilghman, at this point professor of molecular biology. And we had a marvelous few hours together.
To put things in order, the department votes but you don’t actually get an official offer until the university central committee has acted. So at Princeton, as at most universities, there are two layers of decision-making about certainly tenured faculty appointments and the promotion to tenure. One is in the department and the other is essentially in the dean’s office. And so this is after the department has acted but before what is called, for historical reasons at Princeton, the Committee of Three, which of course does not have three people on it anymore; I presume that once it did. So in that funny intermediate period, a lot of the negotiation takes place, on both sides. I guess negotiation is always on both sides. What I mean is that it is both a kind of recruiting and a negotiation.
So Shirley took a big chunk of time to chat with me about visions of the Lewis-Sigler Institute. Although I had worked on many different problems before this, for the last decade, a large part of my work had been more focused on things related to neuroscience than anything else. Say from the biological point of view, it was about neuroscience. And she joked that she remembered the discussions about what were we going to do in the Lewis-Sigler Institute, and the one thing we were sure we weren’t going to do was neuroscience. But that’s okay. And she was very open and really I think took seriously this idea that physics wasn’t just—it wasn’t just that we had tools and they had problems; it was that something different was going to happen because people came from a different discipline to look at these phenomena. And that was great to hear.
And the fact that the university wants to put resources into an area that you find interesting—it’s not that they're your resources. In fact, in some sense, you don’t want them to be your resources, because then you have responsibility. The best thing is [laugh] for them to put resources collectively into an area that you find interesting. So my initial appointment was entirely in the physics department, because at that point, in the internal accounting that the university was using, the Lewis-Sigler Institute didn't actually have the resources to make a senior appointment; all of my affiliation was in the physics department. I had a long chat with the chair of molecular biology, and with Shirley in her role as director of the Lewis-Sigler Institute.
A footnote to all this, which was very important, was that one day—I could I guess look up when all these things happened, but we were getting in toward—must be toward May—and it had already been announced that the previous president of Princeton was stepping down, and so a search committee had been formed. Not long after my conversation with Shirley about the Lewis-Sigler Institute, the trustees announced that she was being made the next president of the university. And this happened—because it happened at a trustees meeting, it happened on a Saturday or Sunday—I don’t remember—and of course there was a certain buzz about town, as news started to spread. We had been taking a walk and Shirley and I were devotees of the same coffee shop, same café. And so there was a certain amount of chatter in the café about her ascension to the presidency, which was of course a first, for Princeton, to have a woman president. Also hadn’t had a president who was a natural scientist in quite some time, I believe, if at all, maybe. And actually some people think [laugh]—some people joke that the more radical departure was to make someone president who didn't have a Princeton degree. I hate to think that might actually be true.
Anyhow, so there was a certain amount of chatter in town, and then we walked home, and I found a phone message. And the phone message was from Shirley, who called to say, “Well, you may have heard this news, and obviously this means I am not going to be the director of the Lewis-Sigler Institute, so that will change.” But she wanted to assure me that in her new position, she was even better placed to make sure that everything worked out. And I must say, since my previous encounter with a university was Berkeley [laugh], which is huge, and I don’t know whether—should you expect to talk to anybody other than your department chair, if you're being—? I don’t know. The idea that somebody who was about to become president of the university thought that making a phone call to somebody they were trying to recruit on the day that the news came—not sometime in the next week—I'm guessing she had a lot of people to talk to that day. So the idea that I made it onto her list even for just a few minutes of leaving a message made a difference.
And I think that that—I think I was right in the sense that that flavor, which—there’s many sides to it, but there is a thread that runs through how Princeton conducts itself, which is that it still imagines itself as a kind of small place in which everything can be done informally and in person. And so the idea that you get a phone call from the person who’s about to be president, that sort of was part of that. Now, that notion that all business can be conducted informally also has an old boys’ club-ism to it that is not so good. But obviously that part has evolved substantially. There’s still a distance to go, but yeah. So yes, the Lewis-Sigler Institute was there, but the offer, when it finally came, was to be a professor—I should look up what the relationship was supposed to be, at the beginning. But the offer came to be entirely in the physics department.
Either in ad hoc conversations one-on-one with would-be colleagues, or if you ever had opportunity to make formal presentations, whether you'd call it a job talk or however you understood that, how were you articulating what you envisioned your research agenda to be? In other words, this is not simply a retrospective consideration on what you had done at NEC. There was probably opportunity for you to think about what you would do, as a Princeton professor. [pause] Or not. Is it entirely reputational?
So having now been inside the department for some time—it’s edging toward 20 years—I think that in some way, the question of what you're going to do next is left very informal. My colleagues—in biology departments, it’s conventional for job candidates to give what is affectionately referred to as a chalk talk, I think because it’s supposed to not be with slides. I always chuckle a little bit when I hear that, because my preference would be that all talks were chalk talks. [laugh] In the physics department, we don’t do that formally, and so there’s an informal flavor to it.
I think the—you know, the department knew me. I had been around. I had given seminars in the department with some frequency over the ten years that I was at NEC. So there wasn’t much formality to presenting myself. To judge by what I see when we do it now, there’s considerable formality associated with making decisions. But I didn't see that. The most interesting—so Curt Callan and I talked a lot about—as I mentioned, he had become interested in problems related to biophysics himself, and we spent a lot of time talking about what was interesting, as well as trying to do a few things together. So that was important.
So I think he came to—he also—I think he had the view, quite independently of the issue of my appointment, that if the physics department—what’s the problem with department structures? The problem is, how do you do anything that isn’t already represented in the department? Who is going to—? At the end of the day, when a department makes a decision to appoint someone, there are some number of people in the department who are saying, “I know what’s going on in this field, and I think this is the right thing for us to do.” And there’s no—we don’t do it in Latin. There’s no official thing you're supposed to intone. But roughly speaking, I think what’s going on is that somebody’s saying, “I know this field. I know what’s important. You know me.”
Of course you know the people that we've asked for opinions, but why did we ask these people and not—how do you know that we asked the right people? So even if you say, “Well, the way you sort it out is by asking for letters,” somebody has to decide whom we should ask. And all sorts of problems can be buried in that choice. Or covered up by that choice. So at some level, there’s this trust between the advocates for the case, and the rest of the department. And how do you do that when there isn’t somebody who can play that role because it’s not their field?
So Curt felt that if the department was going to make progress in biophysics, particularly on the theoretical side—and you know, Princeton is a very theory-heavy department—unusually so, right?—we're essentially half theorists and half experimentalists—then he would have to educate himself. Somebody was going to have to take responsibility. It helped that John Hopfield was back at Princeton, even though he was officially outside the department.
So yeah, so the conversation that came closest to what you're talking about was a conversation I had with Paul Steinhardt. Paul himself had moved not that long before from Penn. We talked about physics, and we talked about what was interesting. But the thing that I heard as being very relevant to the job situation was that Paul said, “We've had biophysics in the department for a long time, but the department is organized in groups. And groups need leaders. And so if we want to do biophysics, we need a biophysics group, and we're asking you to come not just to do your thing, but to lead.” And I don’t know whether he was speaking officially on behalf of the department. I think he didn't say, in so many words, “The expectation is that you would play that role.” Something gentler.
And so this is the solution—in some sense, it comes full circle, right? What was wrong in Berkeley—there was no group for me to be a part of, as a young faculty member. So 15 years later, the opportunity at Princeton was to help construct such a group.
How different would this have been from the group that you created at NEC? Just as a research proposition.
Well, an interesting point—so first of all—oh, I see, you're saying if I gave you the same—well, we built a group at NEC. It was much more tightly focused, I would say, at NEC, than what would happen at Princeton. And part of the reason for that is that at Princeton, there’s the rest of the university. So in the years since I've been there, we’ve hired three young experimentalists in the physics department, all by the way jointly with these multi-departmental institutes, two with the Lewis-Sigler Institute and one with the Neuroscience Institute. So in order, Josh Shaevitz, Thomas Gregor, and Andy Leifer. Andy, the youngest, is still an assistant professor. The others are now full professors. I've had the pleasure of collaborating with all of them. Thomas actually had been in part my student, although as an experimentalist he obviously had other advisors, too. That’s for when we talk about the science that got done.
They all do very different things. They work at different levels of biological organization. Josh has done fantastic things looking at how bacteria—the things he did when he first arrived—he came out of the world of single-molecule experiments. He had been a PhD student of Steve Block’s. He [Josh], in his first work at Princeton, was concerned with how bacteria grow, and the mechanics of how they build and rebuild themselves at the level of the membrane and the protein scaffolding and so on. He has gone on to do all sorts of things in collective behavior in large groups of bacteria, and things related to animal behavior in walking flies, and so on, all with the taste of the kinds of experiments that you can learn to do by starting with the most precise things you can measure, which were these—so he was the one who—his PhD thesis was the work that made it possible to see the RNA polymerase walking along DNA, base pair by base pair. Seeing these steps of 3.4 Å. And, by the way, seeing it stop and back up and proofread when it made mistakes, thus closing a circle with the papers that Hopfield wrote at Princeton in the mid-1970s. Beautiful experiments.
Thomas, as we'll discuss when we talk about the science that happened after I moved to Princeton, works on the fruit fly embryo. He has a nice way of saying it—you use the embryo as a physics laboratory. You want to study how cells respond to changing signals that instruct the cell on how much of each gene to read out, how much protein to synthesize, and how to make decisions and so on. And so how do you find a—how do you do those experiments? Well, you could take the cells out, put them in a dish; then you have to figure out what things you're supposed to keep constant, what dynamic range of input signals are you supposed to use, what things are you supposed to measure. In an embryo, you don’t have to do that. The mother has constructed the experimental chamber for you. It’s the egg. Everything that needs to be held constant is being held constant by whatever it is inside the embryo that holds things constant.
And if you want to know about what happens when this signal is big, well, the signal is big for the cells that are up here at the left, and the signal is small for the cells that are over here at the right. And it is precisely that signal that is going to determine that this is left and this is right, or this is head and this is tail. And so you don’t have to worry—it’s all taken care of for you. All you have to do is figure out how to measure things inside the functioning embryo. Which of course is quite a challenge. But it’s a fantastic opportunity.
And then Andy Leifer is busily trying to record from all of the neurons in the little worm C. elegans, all 300 of them, while the animal is crawling around and doing its thing and freely behaving. So these are three experimentalists who are doing very different things. I mean, I guess I could have imagined building a group like that at NEC, but—it’s also true that building groups at universities is very different, because you have this flux of students. The groups tend to be a bit bigger, in part because we have to spend some of our time teaching, so to get the same amount of research done, you need more people helping.
And as things would evolve—so I should say—I don’t know if I said this already—I moved to Princeton at the same time that David Tank did. You may recall we talked about my interactions with David when he was at Bell Labs, and his summary of the situation at Bell Labs—that it was a great place to work if you could get a job anywhere, but he didn't think it was going to provide the kind of support for young people that he had when he went there. So he was also looking around. And as it turned out, there were people in Princeton who were very interested in him as well.
I think that rather in the background but central actor here was John Hopfield. So with both of us, he worked very clo…he and David had worked very closely together in the mid-1980s writing some very memorable papers. John and I never actually worked together. And I think he was the sort of figure behind the scenes who was pushing for both of us. And it was interesting—I knew David, and I knew what was going on. I knew that folks at Harvard were after him to be the director of their neuroscience effort, what would become the Brain Center at Harvard. And yeah, I remember email exchanges and phone calls about what his decision was going to be, what I was going to decide, among my options.
Certainly for me, having David come at the same time that I did, that was just fantastic. So he came half in physics, half in molecular biology. And I think his dream in coming was—and actually, we had an explicit conversation about this—that there were three things going on at Princeton. One was that the physics department seemed to be taking the idea of building a biophysics group in the physics department seriously. Rather than letting Bob Austin just drift along by himself [laugh], we could actually build a group! There had been long tortured discussions about the future of neuroscience at Princeton, and it was clear that something was going to happen. And, there was, as exemplified by the Lewis-Sigler Institute, an effort to do something more broadly in quantitative approaches to biology. So there were three things going on.
And what David and I realized was, I really cared about biophysics in the physics department. He really cared—well, cared about—I mean, let’s say it was the thing that we were most excited about putting our own energy into. It would be nice if all these things worked, and they’d all be good for us. But that was the place where my heart was, for reasons that maybe are not so difficult to understand after these many hours of conversation.
Of course.
David really felt that his heart was in helping the university to build a neuroscience effort, a modern neuroscience effort, which certainly in his vision was very much one that had a lot of contact with physics and math and computer science. And the broader effort in quantitative biology that would be useful, but probably it did not require either of us uniquely in order to push for that. So we kind of made a pact that we were each going to focus, and whenever we really needed the other person, we should let them know. And that has been great. Also, finally at Princeton, after having known each other for many years, we actually collaborated, which we will presumably talk about the science of this last period. But yeah, so that was important.
Let’s see, footnotes to that period of recruiting—I mean, there are funny things that happened. Let me see which things are [pause]—I think it’s okay to tell this story. So one thing which I know is okay to say is that because we lived in Princeton, because I had some relationship with the university, and because our friends in town were also faculty members, this period between the department voting and the university officially deciding to make an offer is slightly awkward, for me, because although I can talk to friends inside the physics department about what was going on, it’s not clear how I could talk to friends who were in other parts of the university. Because I officially don’t have an offer from the university; I have the knowledge that my colleagues in the physics department would like the university to make me an offer. And so my decision was that it was not something I would discuss with friends who were not in the physics department. Just seemed best.
And so it turns out that a family friend was on this central committee that makes these decisions, makes the other layer of decisions. And we were in Woods Hole that summer as we were, every year in those days, for a month. And our friends were passing through and they stopped by to say hi. I think by this point, the Committee of Three had done its thing, and I had the letter, and I had accepted. And she came up to me and she said, “So first of all, congratulations. Second of all, you should know I was on the committee.” This is not a secret. Members of the committee are public. [laugh] She was surprised to see my file in the docket, because she had absolutely no idea that this was happening. So we had succeeded in keeping it confidential from friends in the university who were not in the department, which I was happy about.
But then she said something—she said she presumed that this was not violating any confidences or expectations—[laugh] she said it was the first time that anyone around the table could remember that there were two cases in front of them in which the two candidates had written letters for each other.
[laugh]
So that was David and I. And I smiled. [laugh] I said, “Well, I know what one of them said. I hope it still looked like a good idea when you saw them next to each other.”
[laugh]
It was funny. But no, it was kind of magical, right? Both of us had been in industrial laboratories for a while. I mean, David had gone there right after his PhD, had gone to Bell right after his PhD, and stayed, whereas I had had the excursion to Berkeley. We had known each other for a long time. And the idea that the university would move to try and bring both of us, clearly with the idea that this was going to be more than the sum of its parts, that was very special.
There’s also an interesting thing where you realize—we had a lot of conversations in those days. You also come to realize that you'd like your friends to be colleagues, but you would also like them to be happy. So there isn’t this notion of convincing them to do something. [laugh] You don’t want to convince them. You want them to independently decide that it’s the right thing to do. I don’t know exactly where David’s candidacy was in the discussion process, but it was certainly hastened by things at Bell Labs. So can I tell you something, and then we check later to see if it’s okay that I told it to you?
Absolutely.
Okay. I think this is okay. So [laugh] I mentioned that I had a conversation with the then-chair of the molecular biology department, Tom Shenk. So the day that I was going to go have that conversation, we got a phone call, because Karen Tank, David’s wife, was looking for my colleague Rob de Ruyter’s wife, Tera, because they were going to do something. But somehow they weren’t connecting, and so Karen called us—“Do you know what’s up?” And honestly I don’t remember the resolution of that, which doesn't really matter. What I do remember is that I had known that the folks at Harvard were interested in David, and I didn't know whether he had a formal offer or not yet. I think he did. [laugh] Karen said that they were a little discombobulated because David had just been offered early retirement from Bell Labs. Which is to say, they were about to fire him. [laugh] Just a graceful way out. And so suddenly, all of these things became a little more urgent.
So I remember in my lunch with Tom Shenk, the chair of the molecular biology department, he said, “So, I know we're the biology department and your offer is in physics, but generally, is there anything that we can do that could make a big difference?” I said, “Funny you should mention that, because [laugh] this morning, I had this conversation with Karen Tank.” Anyhow. So I don’t know what role that played in nudging things along, but I think it was very much on the radar screen. So yeah, that was kind of an amazing—I viewed it as a kind of amazing opportunity.
When I went to NEC, there was the fact that at the same time Albert Libchaber was coming, and he was becoming interested in biological things. And so I knew that there would be experiments happening that I would find inspiring, down the hall. As it turns out, Albert and I never wrote a paper together, but his presence mattered a lot, as we talked about. And so, if David was going to come at the same time, I knew that there was a whole lot of I’d say physics-style experiments in the exploration of biological systems that were going to happen because of him, and because of his extraordinary taste in hiring young people. So that was a big factor as well.
Bill, did you take on graduate students right away?
Let’s see. So I had a graduate student who had been working with me at NEC and finished up. And then some people started to come around. It was a little spotty at the beginning. I think in those days—there has been an evolution in the recruiting of graduate students, and I don’t know—life’s not a controlled experiment, right, so it’s hard to know exactly—lots of things have changed. I've changed. I've changed institutions. The society is changing. Whatever. There seems to be much more expectation that students be able to articulate, upon their entrance to the PhD program, what it is they want to do. Now, I don’t think we actually hold them to that. At least, we're not obligated to hold them to that. We don’t have to hold them to that. Because our mechanisms for supporting graduate students do not in the beginning years depend on that choice.
Now, there are departments where, either historically or to this day, an essential part of the way in which beginning graduate students are supported are from research grants to particular faculty members. And so then in that case, their students’ remarks about what they would like to do have more direct impact. Or rather, they kind of have to do—they will be supported to do the thing that they said they wanted to do, and if they change their mind, it can be complicated. Which I think is not really a very good way to run things, but Princeton is a wealthy institution, so we have the option of not running it that way, and we take that option.
So it is certainly true that when I arrived at Princeton, the number of students who declared that what they were interested in was biophysics coming to the physics PhD program was very small. On the other hand, students who came thinking they were going to do something else changed their minds with some frequency. Those were the students whom I saw. I don’t remember exactly what the order was. But, let’s see, not long after I arrived, certainly, two students started to work with me, Thomas Gregor and Gašper Tkačik.
So Thomas—he actually might have had his first conversation with me while I was still at NEC. I should know this, but I'm blanking on it. Thomas had actually come to Princeton to be a chemistry student, because he had been an undergraduate and master’s student at the University of Geneva, where Roberto Car had been professor of physics. And Roberto moved to Princeton officially to be a professor of chemistry, although he is an affiliated member of the physics department, and so Thomas followed him. So officially he was enrolled in the chemistry department. His interests shifted toward doing things in biophysics. Obviously, a theorist to begin with, having worked with Roberto, and so he came to talk to me, and we started out trying to do some theoretical things together. But he would become a spectacular experimentalist in this effort to think about the early events in embryonic development in flies, which we should talk about as a coherent piece.
And Gašper Tkačik had come to do cosmology, and also changed his interest, and ended up working both with me and with Curt Callan. And also Gašper and Thomas ended up working on things together, in part because they realized that some of the theoretical questions that Gašper was interested in were illustrated by the kinds of experiments that Thomas was doing. Sort of the best thing that one can hope for. So yeah, students started to come right away. Curt and John Hopfield and I organized ourselves that we should try recruiting postdocs who would be somehow communally advised. And so there was a little trickle to begin with, but we raised more money. Also, we got on people’s radar screens. Yeah. So I guess the short answer to the question was yes, more or less immediately, students started to show up.
What about undergraduates? What kind of interactions did you have with undergraduates in the early years?
My first teaching assignment was to teach one of the sections of the honors version of freshman physics. This was typically done a little bit by committee, so one of my first partners was Peter Meyers, a particle experimentalist. He was an extraordinary teacher, and so I learned a great deal. I should return to that, but to sort of finish the thought of what was the beginning, students at Princeton are required to write a senior thesis, whereas at Berkeley it was optional, only for students who wanted to graduate with honors. And there’s junior independent work. So in your third year as an undergraduate, you do sort of semester-long projects. I have to say, I enjoy interacting with undergraduates very much, and there are a number of things that have happened which maybe we'll get to at their proper moment, which I value enormously. Actually, that was also true in Berkeley, but at Princeton, it’s steadier because there’s this supply of people who need to do things. On the other hand, I have colleagues who are extraordinarily good at figuring out how to break off pieces of problems for students that they can get done as undergraduates in some reasonable amount of time. I've never felt myself to be very good at this. I sort of always feel like, “Oh, you should go off and think about X,” and X is sort of vague and open-ended.
In terms of giving something to an undergraduate that's intellectually feasible for them to work on?
Yeah. And sort of feels like we would definitely learn something by the end of the semester or the end of the academic year. Given where they're starting, given—I don’t know, it’s not—I enjoy interacting with undergraduates enormously. I enjoy teaching undergraduates enormously. I don’t feel like my advising of undergraduate research has been—is something that I'm particularly proud of. I think some great things have come out. I think some undergraduates with whom I've worked have not only done interesting things, but that it stuck with them, and it made a difference in their intellectual development. But as I say, I see other people doing this, and I think they do it better.
Maybe some impatience? If I know how to do it, the thing I’d most like is the time to sit down and do it myself. Can you give an undergraduate something you don’t know how to do? Hmm. [laugh] Again, I don’t know if I've told you this—there’s this fantastic remark that John Hopfield once made to me, which I really enjoyed. I was commiserating with him because I had recently failed in some money-raising effort. And he shook his head and said, well, he was never any good at raising money, so he could just feel sorry for me, but he didn't really have any advice. I think I was looking for advice. And, you know, you talk to your wise senior colleagues, and you’re hoping they'll tell you something useful. But instead, John said this fantastic thing. He said, “Look, for a theoretical physicist, there’s two kinds of problems. There are the things you know how to do, and there are the things you don’t know how to do. The things you know how to do aren’t interesting.”
[laugh]
“And the things you don’t know how to do—well, how are you supposed to convince somebody to give you money to do something you don’t know how to do?”
[laugh] Right.
I think there’s a more subtle addendum to this, which is that there are so many forces, including the responsibilities that the community places on you as you grow up, that push you toward working on the things that you know how to do. Because in the same way that if an undergraduate needs to finish a project in the course of a semester, you better give them something you know how to do—similarly, if the blocks of time that you have in order to think about science are going down because you have to sit on various committees and so on, then it’s tempting to do something that you know how to do, because then by the end of your limited block of time, you might actually have gotten something done, as opposed to staring at a blank pad and trying all the obvious approximations, none of which work. So yeah, this sense of wanting to work on the things you don’t know how to do—in some sense, maintaining the self-discipline to keep working on the things that you don’t know how to do, that’s a challenge. And in that sense, the challenge of convincing people to give you money to do the things you don’t know how to do is part of it. But even for yourself, there’s a challenge. Will I trust myself to take the limited amount of time I have to think, and think about something that is not guaranteed to produce a result? It’s too tempting to think about the things that are guaranteed to produce a result.
So yeah, I enjoy interacting with undergraduates, I love undergraduate teaching, but I don’t feel like I'm a good research advisor for undergraduates. And I don’t know, maybe other people think better of my efforts in this direction, but I don’t think so highly of them myself.
Bill, to get back to the science, once you sort of got your sea legs at Princeton and you had the intellectual bandwidth to really think about what you wanted to accomplish, in what way were the questions you were asking, the research questions you were asking, a continuation from your time at NEC, and in what way were you departing for new ideas and new endeavors?
So the first major departure happened fairly early, and there’s a version of the narrative which is coherent in retrospect. And by now, I have views about how the different things I've worked on relate to one another. I think on the other hand, seen from certain points of view, there are really discrete breaks. There are genuinely new things that get started. And I think both of those things are true, right? You go in a new direction shaped by the kinds of things that you've been thinking about. But you hope that it’s a new direction and that it will work out that—you're looking for something in that direction. The thing you're looking for is related to things you've thought about before. But first of all, there’s no guarantee you're going to find it. And you're looking in a new place, so there’s a whole set of new things you have to think about.
So the example of this, which worked, was I was chatting with David Tank, and I was talking about how I had spent this time thinking about signal processing and information flow and the physical limits to inference that the brain can make about sensory signals and so on. I had been thinking about all that in the context of the nervous system, context of the brain, but there are very parallel issues at the molecular and cellular level. Cells respond to signals from the outside world. They generate internal signals, which regulate processes inside the cell.
I had this idea that you could think about—the usual view is, a cell wants to control how much of a protein it’s synthesizing, so it has these transcription factor molecules that it uses to regulate the transcription, the reading of DNA into messenger RNA—among many other processes, but in particular to that—and so you raise the concentration of this transcription factor and that turns on or off the expression of this particular gene. And so you usually think of it from that point of view, right? The cell wants to read out—make more copies of this molecule, so it increases the concentration of the transcription factor.
But you could view it in another way, which is that if you sit at the place where the transcription factor binds along the DNA, you could think of that as being a kind of chemical sensor. That it's measuring the concentration of this transcription factor. And the way it writes down its estimate of the concentration is by making more or less of the messenger RNA. And if you take that point of view, then the problem has something in common with the kind of sensory problems that I had thought about before, or the problems that Berg and Purcell had thought about, when they were thinking about how bacteria move through chemical gradients. Molecules arrive at the surface of the bacterium and they have to decide, "Is the concentration going up or down?" Well, you have to count the molecules that arrive. They're arriving by diffusion so there's noise associated with that process and so on.
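As a rough sketch of the counting limit Berg and Purcell arrived at, in the form usually quoted (the symbols here are the conventional ones, introduced for illustration rather than taken from the conversation): a sensor of linear size $a$, averaging for a time $\tau$ over molecules arriving by diffusion with diffusion constant $D$ at mean concentration $\bar c$, cannot determine that concentration to better than roughly

\[ \frac{\delta c}{\bar c} \;\sim\; \frac{1}{\sqrt{D\, a\, \bar c\, \tau}} . \]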
And I had this idea that one should try to take these questions about signals and noise and information flow and the physics that goes into determining those things, down to this level of molecular and cellular biology. Which of course is a big world, right? And all cells are doing this all the time, so it seemed like it might matter, if these ideas were relevant. David, for his part, noted that there was a revolution going on in our ability to see what was happening inside cells. The fluorescent proteins had become more commonplace. You could genetically engineer cells to make fusions of the protein that you cared about, with one of the fluorescent proteins, and thus you could keep track of the concentrations of these molecules in cells while they were alive, doing their thing.
And the tools for making those kinds of measurements, there was the molecular biology part of, “How do you engineer the cells to make these molecules,” but there was also the microscopy part. David was one of the great figures in making the kinds of microscopies that you need in order to make those measurements. And so I think he wasn’t eager to completely change fields from thinking about things in neuroscience to thinking about things in more molecular and cellular biology, but he was I think curious to see how these techniques worked. As with theorists, we want to figure out—go calculate something yourself. So as an experimentalist, he wanted to go make a measurement using these methods.
It is literally the case, while we were having this conversation—which also of course included the dance of we've known each other for a long time but we've never actually worked together; maybe this thing that would be a new direction for both of us would be an opportunity to do that—literally while we're having that conversation, Eric Wieschaus walks into the room. And so Eric, of course, is one of the great heroes of genetics in developmental biology, having shared the Nobel Prize for finding all the genes that are involved in early fruit fly development, and having really, I think, started to have the view that maybe—and this was already true around the time that he got the Nobel Prize—I remember chatting with him, and he said, look, the things that made his reputation were experiments where basically you're looking to knock out one gene and see what the effect is. But what if—that understanding tells you that all these genes are involved, but it doesn't really add up always to explaining how the whole system works. And so he was already, in the mid-nineties, starting to cast about for other ways of approaching these problems.
So he came in, and David and I tried to explain to him what we were talking about, which in retrospect, it must have been a little funny, because I don’t think we knew what we were talking about, right? We were talking about what we might want to do. I had these vague theoretical ideas. He had I think—was a little bit more concrete in the sense that he knew the kind of thing he wanted to go measure just to see how it worked. But he didn't have a particular system in mind. I certainly didn't have a particular system in mind.
And I will never forget Eric's response to all this, which was he smiled, and he nodded his head. I imagined that he was thinking, "These crazy physicists. They don't know anything about biology." And he said, "Well, I'm not sure I know exactly what it is you want to do, but let me explain to you why you should do it in flies." And he proceeded to give his ode to the fruit fly embryo, in which he explained why it was a place where—thanks to the work of people like him, and of course of people going back toward a century before him, right? We know that genes are in a line along chromosomes because of Sturtevant's analysis of what we would now call recombination, in fruit flies. So there's this incredible, century-long span—certainly it was a century before we were having this conversation—of the use of the fruit fly as a model system.
So this was an enormous investment, right? I can name all the molecules. I can manipulate them. I can do all this stuff. And crucially—I remember that line about, "I don't know what you want to do, but let me explain to you why you should do it in flies." That was the beginning of Eric's sort of monologue about this, which was fantastic. And then the last one was, "And if you have a physics student who is interested in trying to do any of these experiments, send them to me, and I'll take them into the lab and show them whatever they need to know."
And even at the time, I was struck by the fact that he did not say, “I'll have one of the students show them everything they need to know,” or “I'll have one of the postdocs show them,” or “My technician will tell them how to do the experiments.” He said, “I'll show them.” To this day, Eric spends—well, this is not a good time to measure, with Covid, although actually the last time I had a Zoom conversation with him, he was in his lab. So he likes to go in the lab and do experiments! Nobel Prize or no Nobel Prize, right? He likes doing experiments. And so, he does.
And so, I don't remember the exact temporal sequence. Partly in an effort to get things started, I had organized little work groups where we would get together and talk about something or have somebody come and tell us about something that might be a path for doing something new and different. And Eric had given one of these and talked about all the things you could do in flies. And this conversation between David and Eric and me really resonated, and I repeated it, and other people talked about it.
And so Thomas Gregor, although nominally he was planning on doing theory, he got wind of this. And I've never quite confronted him about it, but I sort of have a feeling—I don't know whether he was really serious about becoming an experimentalist, but I think he figured that it would be a good story to tell his grandchildren, that Eric Wieschaus took him in the lab and showed him how to do things, and so he gave it a try. And as it would turn out, Roberto Car, who was his advisor, was very generous about letting him follow his interests. Well, generous is maybe not the right word, but gracious. He viewed Thomas going off and doing something very different from what he was doing as kind of fun and interesting. And then Thomas sort of settled into this pattern that Eric, David, and I were three advisors for him—roughly speaking, a biologist, an experimental physicist, and a theoretical physicist. And it would take a few years, and we should talk about the details at some point. I don't know how much detail you think is relevant.
But the short version of the story is that I think that what Thomas did for his thesis really helped to revitalize the exploration of the early fly embryo as a model system. And in a way, the fact that many of the qualitative questions had been answered meant that this was the ideal laboratory in which to ask physics-style questions and search for a physicist’s understanding. And so that had a lot of im…I mean, he had specific results, which we can talk about, but I think that spirit, that now I can do the things that—so an example—I think the first time that I spoke about Thomas’s thesis work was at UCSF. I was passing through San Francisco. I had sent a note to one of my old friends, and they said, “Why don’t you stop by?” And I said, “Well, yeah, actually I've been doing something which is new and different for me, and actually there are probably more people in the audience—there will be people in the audience who know vastly more about this subject than I do, so it would be kind of interesting.”
So I gave this talk. And what were we interested in? We were interested in the fact that there are these very small signals that have their origins in the mother placing messenger RNA for some crucial molecule at what will become the head, and that messenger RNA gets translated. It makes protein which diffuses throughout the embryo and establishes a gradient, so that cells that are near the head see a high concentration. Cells that are near what will become the tail see a low concentration. And this drives a whole series of choices about which genes to express and so on, which eventually lays out the body plan.
There’s a whole other process occurring in the other direction, kind of around the embryo. This is the long axis of the embryo. And part of the point was if you—so in parallel with what Thomas was doing, there was a postdoc around, Sima Setayeshgar, who is now on the faculty at Indiana University, as is my old friend Rob de Ruyter, who was also involved in the beginnings of these experiments. The first paper was with him as well. And so Sima and I tried to understand whether those ideas that Berg and Purcell had about what are the limits to counting molecules, that they first thought about in the context of bacteria swimming in chemical gradients, could you do the thing that I envisioned, which was to take this all the way down to the molecular level and think about those as limits to the regulation of gene expression?
And so we showed that some of the limits were much more general than you might have guessed by reading the arguments of Berg and Purcell, which were marvelously intuitive. And they were basically—I mean they were, in all meaningful senses, correct, but you might be left scratching your head. I never had the chance to talk to Purcell about this, but I certainly talked to Howard Berg about it. And his view was, yeah, these were back-of-the-envelope arguments. So we showed that it was more than back-of-the-envelope. And in fact, this touched off a certain amount of interest of getting all the factors right. Could you really understand fully what the limits were? And if you took those limits and applied them to the textbook picture of how things were supposed to be working in the fly embryo, well, at the very least, the limits that we derived were uncomfortably close to the performance that cells actually exhibited. And it was possible that the numbers didn't make any sense at all. That if you took everything that was known at face value, this just couldn't be.
And Thomas—in order to do that, you have to measure the absolute concentrations of these molecules. So Thomas tried to do that. You have to actually measure, cell by cell, what is the noise level in the response of gene expression to changing concentrations of the input transcription factor molecules. So Thomas showed how to do that. When you realize that the limits depend on the diffusion of these molecules—so you should measure the diffusion constant—so there’s a whole series of quantitative measurements you should make, which Thomas made, and produced—there were two really gorgeous papers that came out of this, particularly beautiful papers that came out of his thesis. There were more, too, but that really sort of set down this idea of using the embryo as a laboratory to make these kinds of physics-style measurements, and probing the limits to the reliability of the system.
And so I went to give a talk at UCSF. And somebody raises his hand at the end of the seminar, and he says, "So, you know, this is a fly, so you can reach in, and there's two copies of the gene that you're talking about. So you can reach in and delete one copy, and that'll change all the concentrations by a factor of two." Well, actually we didn't know that at the time, but most people thought that it was true; since then, Thomas has actually shown that it does work quantitatively. And, his argument went, when the concentrations all change by a factor of two, that'll change the noise by a square root of two. And so the fact that you're telling me that cells can distinguish concentration differences which are only 10%, but all of the concentrations are going to change, the noise level is going to change by a square root of two, so that's going to become 14%, and that's going to make a difference, and you should be able to see that.
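To spell out the arithmetic behind that exchange (under the usual assumption that the relative noise scales as one over the square root of the number of molecules), halving the number of copies multiplies the relative noise by $\sqrt{2}$:

\[ 10\% \times \sqrt{2} \;\approx\; 14\% . \]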
And so I started to think about the answer to that question. In truth, I think the experiment he was proposing, while interesting, wasn’t going to be as simple as he thought. But that’s okay. And I started to answer that, and I said, “Wait a second. You mean to tell me that you believe that a decisive experiment in understanding how the fruit fly embryo forms spatial patterns and how development works would be to make a measurement in which the important difference is between a noise level of 10% and a noise level of 14%?” And he said, “Yeah.” And I said, “Okay, I can go home now. Because you do realize that when we started this, nobody knew any number to within a factor of two. And you now—I have apparently convinced you that the difference between 10% and 14% matters in the life of the embryo.”
So I think that more than the—and this was a biological audience, right? This wasn’t a physics audience. So I think more than the particular things that Thomas showed, this idea that numbers matter. That in some sense, this physics style of taking the quantitative behavior of nature seriously could be applied in this thing that was right there in the heart of modern biology. So that was the first thing that was a departure. So was it discontinuous with the things I was thinking about at NEC? Yes and no. It was discontinuous in the sense that we're working at a completely different level of biological organization. We were working on a very different kind of system.
And it was responsive to the fact, simply, that you had different colleagues you were working with.
Absolutely. Absolutely. And the enormous good fortune to have the colleagues that I did. And David, having done this, I think decided that this wasn't—he also took on the directorship of the Neuroscience Institute, and I think that continuing to go in this direction wasn't his thing, and so he went back to being focused entirely on problems in neuroscience, although he maintained an interest and kind of keeps an eye on it. He's curious.
Thomas would finish his PhD, finish up the work, staying a little extra time as a postdoc. Went off to Japan for a couple of years to do something completely different, and came back to Princeton to build a laboratory to do these kinds of experiments, which have of course evolved enormously in the intervening decade. And so to this day, Thomas and Eric and I still collaborate. We're supposed to talk tomorrow. So this whole line which has had now more than a decade of work in it came out of those early conversations. So among the first of the PhD students, among the first of the conversations.
And I think it was also—it also had some impact because it wasn't clear what the Lewis-Sigler Institute was going to be. And one of the ideas was that it would be kind of a central point for more quantitative approaches to biology. And this was one of the first things to come out of it that was kind of noticeable. Obviously, there are very different things happening under the banner of Lewis-Sigler Institute, but certainly this was a noticeable thing.
Bill, what work were you doing in information theory at this point?
So something that happened just before leaving NEC—my friend and colleague Naftali Tishby had come for a sabbatical. So Tali is a professor at Hebrew University in Jerusalem. Fascinating guy. Started in theoretical physics, spent years at Bell Labs working on speech and language processing. Got interested in things related to neuroscience, neural networks, learning theory and so on. Went back to Jerusalem to be on the faculty. During his years at Bell, he and Fernando Pereira, who is now one of the senior figures in research at Google, had written a fascinating paper about the distributional clustering of words. So you know the idea of clustering: I have points. You draw—we've all been spending too much time looking at maps in the last two weeks.
[laugh]
Think about maps of population; you'll notice that it is clustered, in the sort of simple geometrical sense. That intuition of things clustering can be carried away from the usual “things are close to each other in space” to more abstract settings. And many people had talked about the problem of clustering words. You know, you could think about clustering—that words belong together because they're the same part of speech. You could think that words belong together because they have similar meanings and can be substituted for one another. There’s a whole bunch of ways of thinking about it. The problem is words don’t naturally come in a space. So what’s my notion of things being near each other?
And so what Tali and Fernando and their collaborator, Lillian Lee, had done, was to say, “Ah. If I have a word, I can look at the other words around it, and if I look over many documents…over all the places where this word appears, I get a distribution of the words that go along with it. I don’t just get the one word that’s next to it, or the words that are within the same sentence.” To be perfectly honest, I don’t remember exactly what notion of neighborhood they chose when they started. But let’s say the simplest thing would be to take the word that’s right next to it. So, two words that go together. So these are called co-occurrences of words. And so you can look at the distribution of words that co-occur with the word of your choice. Then you say, “Ah! So every word now is labeled not by what it actually means, but by the distribution of all the other words that go with it.” And I can compare distributions.
And there are ways, for example, of measuring how similar two distributions are. It’s something called the Kullback-Leibler divergence, which one way to think about it is that it measures—if I draw samples, how much evidence do I have that the two distributions are different? So that’s a measure of dissimilarity, which is to say distance. The problem is that it’s not actually a distance in the formal mathematical sense. It’s not symmetric. It’s got all sorts of funny properties. But never mind. It doesn't obey the triangle inequality.
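For reference, the Kullback-Leibler divergence between two distributions $P$ and $Q$ over the same set of words is

\[ D_{\mathrm{KL}}(P\|Q) \;=\; \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)} , \]

and, as noted, it is not symmetric, $D_{\mathrm{KL}}(P\|Q) \neq D_{\mathrm{KL}}(Q\|P)$ in general, and it does not obey the triangle inequality.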
So what they said was, “Suppose we did the usual thing of clustering things, but the way we clustered words was that their positions in space were the distribution of words that co-occur with them, and the measure of distance in this space was the Kullback-Leibler divergence. And then after that, we cluster in whatever way you would have clustered, in some normal problem with points on the plane.”
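A minimal toy sketch of that recipe in Python, assuming the simplest notion of context (the immediately adjacent word) and a made-up corpus; this is an illustration of the idea, not the actual algorithm of Tishby, Pereira, and Lee:

import numpy as np
from collections import Counter
from itertools import combinations

def cooccurrence_distributions(sentences, vocab):
    # For each target word, estimate the distribution of words that
    # appear immediately next to it, pooled over the whole corpus.
    counts = {w: Counter() for w in vocab}
    for sent in sentences:
        for a, b in zip(sent, sent[1:]):
            if a in counts:
                counts[a][b] += 1
            if b in counts:
                counts[b][a] += 1
    return {w: {v: n / sum(c.values()) for v, n in c.items()}
            for w, c in counts.items() if sum(c.values()) > 0}

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q); asymmetric, and it does not obey the triangle inequality.
    support = set(p) | set(q)
    return sum(p.get(x, 0.0) * np.log((p.get(x, 0.0) + eps) / (q.get(x, 0.0) + eps))
               for x in support if p.get(x, 0.0) > 0)

# A made-up corpus, just to exercise the machinery.
sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog across the rug".split(),
]
vocab = {"cat", "dog", "mat", "rug"}
dists = cooccurrence_distributions(sentences, vocab)

# Pairwise divergences between words; a clustering step would then group
# words whose co-occurrence distributions are close in this (asymmetric) sense.
for a, b in combinations(sorted(dists), 2):
    print(a, "->", b, round(kl_divergence(dists[a], dists[b]), 3))

The clustering itself would then proceed by grouping words whose divergences are small, which is exactly the step that the information bottleneck discussed below makes principled.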
And what they found is that by doing this, you could get words in the same cluster that weren’t just the same part of speech; you actually had words that really felt like they meant the same thing. And so in the early to mid-nineties, when they did this, and when I first heard about it, the idea that there’s this tension in the history of thinking about language between statistical and information theoretical approaches on the one hand, and sort of formal grammatical approaches on the other—one of Chomsky’s foundational papers is basically an attack on Shannon, which explains that none of this could possibly be relevant to language. Although Shannon didn't actually say it was relevant to language; he just said it was relevant to communicating. He eschewed any notion that he said anything about meaning or anything like that.
Anyhow, and in the nineties, this was sort of hanging out there as some…I mean, today, when you type things into search engines, there’s enormous statistical stuff grinding away in the background. When you do automatic translations, there are things which are implicitly statistical models. The idea that you could learn from statistical structure enough to do something useful with language is no longer controversial. In fact, you could argue that it has become so uncontroversial that there were some issues of principle that kind of got put by the wayside.
But anyway, that's not the point. So I was really taken with what Tali and Fernando—I just didn't know their third collaborator—what Tali and Fernando had done, just because out comes—I just do this simple statistical computation, and poof, I give you a collection of words. I start grouping words by what feels like their meaning. That seemed just incredibly striking to me. The problem was that from another point of view, the thing they had done was arbitrary. Why cluster in this particular way? Why give words these coordinates? Why measure distance using the Kullback-Leibler divergence? And of course, each of these things could be justified as being sensible, but it didn't seem very principled.
And so Tali was in Princeton for a year. He was spending his time at NEC. He was also visiting friends at other places. And Fernando came down—down?—yes, down, from Bell Labs—to come visit us. And we spent the day at the board. And what I told them was, I wanted to understand how I could state a principle of what you were doing in clustering for which what you had actually done was the solution. In some sense, the problem with this sort of algorithmic view of clustering is that—so there are certain versions of clustering where I can tell you what it is I want the clusters to be. For example, I want it to be the case that things that have the same cluster label have as small distances as possible from one another, while holding the other distances fixed, or something like that. Or with a fixed number of clusters, or something. So I could give you a principle that says, “Divide these points into clusters that have the following properties.” And then there would be an algorithm for doing it.
What is actually not so uncommon in computer science and what Tali and Fernando had done was essentially to propose the algorithm. But it wasn’t clear what problem this algorithm solved. So what we realized was that what they were doing was they were putting words into clusters. So let’s say that you have—the words that go together are x and y. So what you're doing is you're putting the word “x” into a cluster such that knowing what cluster it’s in gives you as much information as possible about the other word “y.” More information is Shannon information in bits. And the constraint is that when I compress my description of the word into just telling you what cluster it is, I count how many bits I'm keeping in that compression. So the idea is that I'm taking my—I mean, maybe to say more abstractly, I'm taking my description of something in the world, in this case a word, and I'm compressing my description. Which means that instead of telling you every detail, I'm only telling you a little bit about it. And I'm counting how much I squeeze down by counting how many bits I keep.
But then I say, “Ah, I don’t care about that only. I also care whether the bits I keep inform me about something else.” In this case, the other words that I see. And I can measure that information also in bits. So the idea is that you have a principle where you're trading bits against bits. And there’s a factor, which tells you how much—if I only keep three bits of information about the identity of the word, then I'm only going t—the best I can do is to keep, I don’t know, one bit about the other word. Maybe. But my choice to keep three bits was arbitrary. So, given that I keep three bits, there’s a best set of bits to keep.
So another way of saying it is if I'm looking at your face and trying to figure out your name, and I wanted to somehow compress the description of your face to do a line drawing or something like that, there are certain things I should capture and other things I can skip. And there’s a best—if I give you a budget for how many bits you have to give the description there’s a best way to use that budget to be sure that I specify your identity as well as I possibly can. So that was the problem that we formulated. It’s a very general abstract problem that turns out to be more connected to other problems even than I think we understood when we first did it. And it’s fun, because it has a certain—it tells you—it gives us a sense of relevance to the bits that you're keeping, but that relevance is itself measured in bits. The bits about something else. So there’s the bits you keep about one word, and the bits that you gain about the other words.
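Written in the form that appears in the later literature (the notation here is the conventional one, not necessarily the original), the tradeoff can be stated as a variational problem: choose a stochastic compression $p(t|x)$ of the word (or face) $X$ into a summary $T$ so as to minimize

\[ \mathcal{L}\big[p(t|x)\big] \;=\; I(T;X) \;-\; \beta\, I(T;Y) , \]

where $Y$ is the relevant variable (the co-occurring word, the name), $I(\cdot;\cdot)$ is Shannon's mutual information in bits, and $\beta$ sets the exchange rate between the bits you keep and the bits of relevance you gain.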
We called this the information bottleneck, because you're squeezing information through a thin—through a bottleneck. That phrasing was used in information theory before. But in particular, this fact that you're measuring how well you do by measuring bits about something else, it showed you that there was a purely information theoretic way of talking about relevance. So if you go back to Shannon, and you go back to the attempts to use information theory in a biological context, one of the classic objections is that there’s no notion of relevance. Animals don’t care about bits, right? In fact, they can’t care about bits. Evolution doesn't select for how many bits you have, right? They select for something else, for performance at something. So here what we showed was that you could measure how relevant bits were, and thus decide that some bits were more useful than others, but you didn't need to leave the information theoretic framework in order to do it. You didn't need somebody to come and tell you, “Oh, it matters that you're wearing glasses.” No. That’s a property, if it matters that you're wearing glasses, that’s a property of the joint probability distribution of images and names. Somewhere in there. Somewhere in that distribution is buried the fact that whether you're wearing glasses is important. And you can measure it in bits.
And so that idea, it certainly had an impact on me in thinking about things. It seems to have had an impact on the field. It’s actually—in Tali’s hands, it has had a renaissance in the last several years, because there has been a discussion about using these ideas to analyze what’s going on in deep learning. That’s actually quite controversial, and I don’t know where it’s going to end up. But it provided—it’s a framework, right? It’s not of itself an answer to anything. So that framework turned out to be useful. And it would eventually have—it would come around—it took a decade or more, at least for me to understand how to do this—to think about the idea that when information is being processed in the brain, or even being encoded in neurons early on, obviously not all bits are relevant. And so that’s one of the objections, again, to the use of information theoretic ideas to characterize the neural code. Bits about what? Why do you care? And so we had this idea that what you care about is your ability to make predictions.
And so in fact another thing that Tali and I had done together with a student, Ilya Nemenman, was to try and take an information theoretic point of view to the problem of prediction more generally. So to look at time series and ask, "How much information does the past provide about the future?" And to show that you could classify time series depending on the behavior of this predictive information. It turns out that this is connected—it is a kind of classical version of the problem that our more quantum physicist friends know as entanglement. So quantum entanglement is that you have a system in a pure quantum state, so there's no entropy. But if I don't show you—if I hide half of the system, then because the variables in one half and the variables in the other half are correlated with each other, or in the quantum sense entangled, then when I average over what I'm not looking at, this system appears not to be in a pure state anymore, and so it has an entropy. If you work through the algebra, you realize that what that entropy is, is the mutual information between the two halves of the system.
And so the problem we were doing was the mutual information between two halves of a system not in space but in time. So imagine that—and of course we were doing a classical problem rather than a quantum problem, but the spirit is the same. And as in the discussion of entanglement, the behavior of this mutual information as a function of the size of the region that you're looking at is a signature of the underlying states of the system.
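In the language of that work with Ilya Nemenman, the quantity in question is the predictive information: the mutual information between a window of the past of duration $T$ and the future,

\[ I_{\mathrm{pred}}(T) \;=\; I\big(x_{\mathrm{past}}(T);\, x_{\mathrm{future}}\big) \;=\; S_{\mathrm{past}} + S_{\mathrm{future}} - S_{\mathrm{past,\,future}} , \]

and, as with entanglement entropy, the way $I_{\mathrm{pred}}(T)$ grows with the size of the window (staying finite, growing logarithmically, or growing as a power law) is a signature of the underlying process.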
So what we would eventually be able to do is to show—and this is something that we should do more of, but kind of ran out of steam. I think other people, some of the young people who were involved in doing the work, are now trying to do it themselves. So in particular, there was a postdoc, Stephanie Palmer, who’s now on the faculty at the University of Chicago, who worked with another experimental collaborator, Michael Berry, not related to the Berry’s phase Berry, who records from large numbers of neurons in the retina. And actually, conversations with Michael produced another new direction that we should talk about.
But to close this idea about prediction, so what we were able to do was to ask, if I look at the stream of action potentials coming out of a small number of neurons, there’s two things I could do. One is I could try and reconstruct what just happened in the past, which is in the spirit of the things that Rob de Ruyter and I had done in the fly back several conversations ago. But then another thing I could do was to ask, well, as an organism, I don’t actually care what just happened; I care what’s about to happen. So could I design an experiment in which I measure not how much information neurons have about the thing that just happened, but in this sort of statistical world that we've created in the experiment, how much information do they have about what’s going to happen next?
And so what Stephanie and I did, working with Michael and a wonderful postdoc in his group, Olivier Marre, who's now in Paris at the Vision Institute, was to design an experiment in which in a fairly model independent way you could measure the amount of information that neurons had about the future of their sensory inputs, which is still one of the things that I find quite amusing. That we were able to figure out how to do this. How do you measure how much information something has about the future? Well, the way you have to do it is to imagine that you see—it's like the counterfactuals, right? Imagine you stream many pasts, and they diverge into the future in a way which is always consistent with the same underlying statistics. But of course, every time you roll the dice, you get something different, right? So you create a world that has that flavor, and in that world, you can then measure how much information neurons have about the past, how much information they have about the future.
And so now comes back the idea of the information bottleneck. If I show you the past, of all the bits that are required to describe the past, actually most of them are totally useless for predicting the future, so you should get rid of those. If I give you a budget, then there’s the best bits you could keep that would give you as much information as possible about the future. And so what we showed is that neurons vary in how much information they keep about the past and how much information they have about the future, but they track the optimum tradeoff. So the bits that they keep really are the bits that are most useful for predicting the future. I think we should do more of this.
We did one thing. Stephanie and her colleague—actually, Stephanie continues to work with Olivier on trying to do this more generally. And it’s challenging, as you might imagine. But yeah, so that. And actually, right now these ideas about the information bottleneck, I've thought about how to insert them into our discussions about the fly embryo. That again, if I think about the molecular mechanisms that are working to actually read out the concentrations of these different molecules, well, there are some places where if I measure the concentration very accurately, that’s going to tell me a lot about the thing I care about, which is where am I in the embryo. And there are other places where I can measure as much as I want, but basically the concentration is flat, as a function of position. So measuring that very accurately doesn't help me know where I am. So if my mechanisms for reading out the information, reading out the concentration of a molecule, are limited in their information capacity, how do I make sure that those bits are deployed—that limited budget of bits is deployed as effectively as possible for extracting the information that the cell actually cares about? So these are ideas that go back to things that were purely sort of information theoretic arguments that we were making sort of ’99, 2000, around the transition between NEC and Princeton.
Bill, where is evolution in all of this? How are you thinking about evolution as a conceptual framework for understanding biological function?
Well, I think the simplest answer is I'm not. The slightly more nuanced answer is that I've always been interested in situations where it seems to me that the biology is getting close to the limits of what the laws of physics allow. So there’s some notion of optimization. Optimization as some sort of evolutionary endpoint. And if I'm anywhere near that optimum, then there’s a well-defined theoretical question of what that optimum looks like, how would the system behave, and so on. Not just is it there, but what does it need to do in order to get there. And I can use that, even if I don’t understand the dynamics by which I get there.
It’s a little bit like sort of equilibrium statistical mechanics versus dynamics. The whole point of equilibrium statistical mechanics is that I forget where I started. The things that are interesting are path independent. So if evolutionary pressure is strong enough that you get close to an optimum, then I don’t need to think about the dynamics. Or rather, to be more precise, the dynamics becomes a separate question. The fact that this glass of water is essentially at equilibrium, although its temperature is probably slowly changing, but internally it’s at equilibrium, that doesn't mean that the question of how it gets to equilibrium isn’t interesting. It just means that if I want to understand what’s going to happen when I pour it out, I don’t have to think about how all the molecules come to equilibrium. So the questions separate. So in my mind—and some of my best friends spend their time thinking about evolution—
[laugh]
—but I've sort of felt like—actually—so my view is somewhere in between, “I don’t need to think about it, because I have an alternative”—at least I have an alternative possibility that I can explore without thinking about it—and that’s proved to be productive. I don’t know whether it’s ultimately correct, but it’s proved to be productive. There’s a stronger version of the view, which is that the constant reference to evolution actually in most cases doesn't predict very much. Sorry. Ask an evolutionary biologist, “Should there exist organisms that are capable of counting single photons?” They have no way of figuring that out. There’s obviously a benefit to being able to count single photons. You can hunt until it’s darker at night. There’s a cost of counting single photons. Your photoreceptors are among the most metabolically active cells in your body. It costs an enormous amount. Nobody knows how to compute that tradeoff. So that means that evolution is completely silent on this question.
On the other hand, empirically, there are organisms that count single photons. And so since you can't count half a photon, that's as well as you can do. And in fact, we know even more: in some cases, the reliability with which you count single photons is limited by the dark noise in the photoreceptors, which you can measure. And so I can get started asking, what is it that the retina and the rest of your visual brain have to do in order to be able to count photons with the reliability that you actually observe? And that's a perfectly sensible set of theoretical questions which make predictions about experiments, which in some cases turn out to be right.
Obviously, evolution plays a role here because that’s why there are—I mean, it is evolutionary pressure that drove you in this direction. But to say, “Oh, you should worry about evolution,” well, what’s evolution going to tell you about this? We just don’t know how to do the calculation. That’s not to say that evolution isn’t interesting. As I said, the fact that the system you're studying is in equilibrium doesn't make how it got to equilibrium uninteresting. It just means that you can separate the questions.
Evolution is almost a foil for you.
Maybe, yeah. At some point, it becomes interesting to ask—one of the things that I think about now is, are there situations in which we could say, effectively, what the landscape really looks like? How hard is it—so the photon-counting example looks sort of hopeless, because the things that control the reliability with which you count photons are sort of completely distant from the costs. Maybe I could figure out how much it costs to be able—and measure some signal-to-noise ratio—the single photon signal-to-noise ratio or something. I could compute an energetic cost associated with that. Maybe. But then, how much does that buy me in terms of my ability to hunt and therefore reproduce? I don’t know. I mean, I just don’t know how to do that calculation. Do I count how many calories I collect by catching things later at night versus how many calories I spent [laugh] powering the photoreceptors? I don’t even know if that’s the right calculation.
So in the context of the genetic networks that are controlling what’s going on in those first two hours of the fruit fly, that Eric Wieschaus and David and Thomas and I have been thinking about for all these years, there, maybe we have a chance of asking, what does—if we think what’s important is the ability of the system to transmit and capture information about where cells are in the embryo, because ultimately that’s what allows a system to build a complex and reproducible pattern, then maybe that provides a surrogate for what evolution is selecting for.
And I can ask, what does the landscape look like? See, I'm close enough to the molecules that I could ask, if I change molecular properties by this much, how does that affect the amount of information that’s being transmitted? And that’s something we can do. And to some extent, we even know—well, we don’t really know yet, but we can imagine understanding how those molecular properties which matter for how information is transmitted are actually represented by DNA sequence.
There are a lot of gaps here. There are simple versions of that idea that actually Paul François and Eric Siggia and Vincent Hakim followed, in thinking about using something like information as a metric for how well certain simple genetic or biochemical networks are performing, and sort of developing a sort of simplified evolutionary dynamics to see, can you find your way to various optima, and what happens along the way, and so on.
So I think that that's a—these views—I think part of the problem is that, in order for this to work, you have to find a place where, for the kinds of things where we're making progress thinking about optimization principles, the landscape for the optimization itself can be specified well enough that you can ask whether plausible evolutionary dynamics would find the optimum—that's the question. Or, identifying places where you clearly aren't finding your way to the optimum because evolutionary dynamics can't get you there in any reasonable amount of time. And I'm sure there are many examples of that. And then you really do need to think about the dynamics itself.
Bill, when you think about developing unifying theories for biology, to what extent, if at all, are you taking intellectual cues from the pursuit of a grand unified theory for physics? You know, bringing gravity into the standard model.
We're a few centuries behind, right?
[laugh] Wait, who’s behind who?
Well, I think our understanding of the living world is a few centuries behind our understanding of the inanimate world.
You know of course that astrophysicists say we don’t understand what 94% of the universe is made of.
Yes, yes. But if you think about what we do understand in order that we can say that, right? When the Higgs was found at CERN—you know, there’s this iconic picture with some cross-section or other, right, and there’s a little bump. I would point out to people, “Look at the background. Look at how accurately we know what would happen if there weren’t a Higgs.” So in some way, every one of these steps, you say, “Oh, there’s this thing, that we just found.” But in this case it literally rests—it’s a bump, and it’s on a background—so it literally rests on this entire generation of understanding of the standard model, which is now incredibly precise.
So I think that sure, we're always focused on the things we don’t understand. And in some ways—so particle physics and cosmology have this mix of having some things we understand just unbelievably well, and some things which are kind of zeroth order questions which I can easily explain to a high school student, to which we don’t have the slightest answer at all. But when you start talking about the living world, I think in the sense that physicists like to say they understand, we understand vastly less. So, what do I do to organize my thinking about any collection of processes? I don’t know. There has been progress, but yeah, having some theoretical understanding that has quantitative predictive power, we can only do that in very small corners. Whereas in the inanimate world, we have a lot of reach.
Is this to say that, since we're centuries behind in the biological realm, there is a certain catching up to do? Or are there limitations built in so that there will never be that equal level of understanding?
So, I think if you go back to the discussions about can we really do physics in the context of living systems, there’s a view that there’s something irreducibly messy and unreliable and so on, about biology, which you could read that as physicists’ skepticism that the biologists are really doing sufficiently quantitative experiments to make it worth thinking about. So if I go back to what my professors sneered at when I said that I wanted to go off and think about these things—they said, “Eh, are you sure you're not wasting your time?”—you could read that as a concern about the quality of the experiments, mixed in with a little physicist’s arrogance, that if these guys were any good, they’d do a better experiment.
But actually, I think the more substantive question, which emerges—there’s a period where you hear about, well, the mathematics we have to describe the world was built for physics, and we need a different mathematics for biology, because biology is different. The important facts about biology are not quantitative facts. They’re qualitative things. And all the numbers don’t mean anything. In fact, if you read Ernst Mayr, which is another good reason for my not thinking about evolution, he says—he, actually—this is a guy who’s held up as one of the great figures of the modern evolutionary synthesis. His response to Galileo saying, “The book of nature is written in the language of mathematics” is, “He just said that because he didn't know any biology.”
So there is a thread which runs through this interaction between, let’s say, the mathematical and physical sciences on the one hand, and the biological sciences on the other—there’s a thread that runs through that, which is, no, the kind of understanding that we have of the inanimate world will never be achieved in the context of biology, because biology doesn't have—living organisms don’t have quantitative reproducible properties that can be the target for theories in the tradition of physics. Here, I'm willing to tread and be a little bit impolitic. You know, biologists say that we physicists are arrogant. So, what? You have some private line to God that tells you that organisms don’t have—that none of the properties of organisms that are summarizable numerically turn out to matter?
You've gotta go—first of all, I guarantee you that if you don’t look for them, you won’t find them. I gave you the example of the fly embryo, right? Nobody had measured anything to within a factor of two. Do you know how hard Thomas had to work in order to make these measurements to convince himself that cells could tell the difference between two concentrations that differed by 10%, and move the conversation? I mean, even if everything we said was wrong, he moved the conversation to the point where we were asking about those things. And that was an enormous effort. But if you don’t make that effort, you will never find those quantitative reproducible behaviors that are characteristic of what’s going on.
Going back to when I was a student, I was so fascinated by those early experiments seeing the single photon responses from rod cells in the retina that came out of Denis Baylor’s group. And I remember some years later, I visited Denis, and I was in the lab with them, and there they were, on the oscilloscope, those little single-photon responses from the rod cells. And I said, “It’s not that I ever doubted you, but it is sort of beautiful to just walk in the lab, and there they are. They look just like in the papers.” Which by then was a decade later. So there are these crisp, beautiful, quantitative, reproducible properties of these cells. And he said, “Yeah, it’s not that way when you start.” So you have to uncover these things. In his case, he was driven by earlier experiments that told you that the visual system as a whole seemed to be capable of counting single photons, so you want to drill down and find those elementary responses.
So I think that a certain amount of the skepticism—so look, until you succeed, people can be skeptical, and they're right to be skeptical. And you should be skeptical yourself about your research program. But I think that a certain amount of the skepticism is rooted in the absence of examples not of successful theories but of experiments that produce the kind of data that theories in the physicist’s sense would explain. So if all of your data are sort of soft and squishy, and the best you can say is that when you do something, 75% of the time something happens, well, yeah, what’s the point of writing down some sophisticated mathematical description of what the answer is supposed to be 75% of the time something went on?
And so in that sense, I think that it depends a little bit on where you started. If the first experiments that ever fascinated me were, I don’t know, something about changes in gene expression of some human cell line in culture, which is awfully hard to control and looks extremely variable, if that was the first example I ever saw—you know, you dump hormones into the system and some gene expression increases, but then you look at every single cell and it’s completely different—if that was the first thing I saw, maybe I wouldn't have the view that I do now. Whereas the first things that I saw were you put photons in and little electrical pulses come out. And they're beautiful and reproducible and they're the result of the actions of single molecules. One out of a billion in the cell. And so you see that that’s how cells can behave, and you realize, “Okay, that’s the thing I want to think about. I don’t know why these other cells don’t behave that way. My guess is, they're monitoring”—so first of all, a lot of biology experiments are done on cells under conditions that are really pretty weird for the cell. And there’s good reasons for that. But you should be careful in drawing conclusions about what life is like, as opposed to what your experiment is like. But the other thing is, I don’t know, a human cell in culture is measuring all sorts of things about its environment, some of which we don’t even know about. Can you give me a complete list of all the receptors on its surface so that I know which molecules I should worry about? I don’t know. I can’t do that. I don’t know whether anybody can. So what does it mean?
This goes back to Thomas’s excitement about the embryo as a physics laboratory. It’s a place where all the things that need to be held constant are being held constant, for you. You don’t have to do it. You think, “Okay, screw this, I'm not going to worry about big complicated organisms. I'll think about bacteria.” So you watch people doing experiments on E. coli, right, and the E. coli are growing, and there are all these beautiful things they're trying to measure—noise, and gene expression, and everything else. But then you look at it and you realize that by the time you've got 100 cells in your field of view, the cell that’s in the middle sees a very different world than the cells that are on the edges, because they have very different access to the nutrients that are coming in from the outside, and so on. And that’s not necessarily a problem, but it does mean that if you notice that different cells are doing different things, how do you separate out the fact that they're in different places and thus different chemical environments—even a bacterium—from the idea that there’s some intrinsic noisiness or variability? It’s hard work, right? People have done that hard work in some cases, but it’s easy to average over and confuse sort of responses to meaningful but obscure signals with noise.
In the fly embryo, one of the things that Thomas has been working very hard to do is, some of the experiments, you can do where you genetically engineer the organism so that things light up. The proteins that you want to look at are fluorescent, and so you get to see them. Or the RNA molecules attract—bind fluorescent molecules, so you can see spots that glow and so on. So you can watch dynamically. But other experiments that you do, you stop the action, and then go in and probe in various ways. Well, when you stop the action in an embryo, how do you know when you stop the action? Where is the time marker? Well, if you did it one embryo at a time, you could watch, right? But of course, you don’t want to do that; you want to do hundreds of embryos at once, or even thousands. So if I have a test tube full of embryos, how do I know where each of them is in its developmental cycle? Well, I can tell the difference between things where the number of cells is different by a factor of two. That’s easy. But if I've got the same number of cells, where am I in the cell cycle? How do I tell? Well, it turns out that in the fly embryo, there are ways of telling to within a minute accuracy. But you have to know what those are, and you have to use them. And if you don’t use them, then you make a mess out of what is in fact a very precise process. So there’s all sorts of ways.
And I think as the community of physicists who are exploring the living world moves on, we discover more and more places where we could have messed up, and where things would look noisy and variable, and in some sense uninterestingly sloppy, just because we didn't know to control for something. We didn't know even to just look at things as a function of some variable which is natural to the organism but wasn’t natural to us. And my guess is that that process hasn’t finished by any means, and that part of the difference between the places where we're doing physics experiments on biological systems and the places where we're not yet doing physics experiments on biological systems is coming to grips with all of those variables. And I have great admiration for the people who have managed to tame their particular example to the point where they can squeeze out all of these things which are systematic from things which are noisy.
So I feel like part of the problem—part of what made it possible to make progress in the inanimate world was the possibility of isolating effects. And a lot of what goes on in living systems is you've got everything happening at once, and so it’s hard to isolate things. And you can isolate them by taking them apart but then some of the things just don’t happen in the parts; they only happen when you put the parts together. And so then what do you do? There has been an enormously successful enterprise of searching for the fundamental building blocks of life. That’s what gave us molecular biology. It’s not coincidental that there were a lot of physicists involved at the beginning of that who were emulating—who were more explicitly emulating the reductionist program that brought us from atomic physics to nuclear physics to particle physics. “So let’s find the stuff out of which living things are made.” And indeed, that stuff consists of parts that are rather more universal and interchangeable than you might have expected. They're not exactly—of course, [but] our DNA is all made out of the same stuff, and proteins are all made out of the same amino acids.
But it’s more impressive, perhaps, that the proteins that we use to do a job and the proteins that a worm uses to do a related job or a bacterium uses to do a related job—that you can see the relationships among those molecules, and you can under certain conditions substitute one for the other. That’s an extraordinary discovery in the physics spirit, but it’s in the reductionist mold. So the challenge is to create a physics of biological systems which is in the synthetic or emergent phenomena mold. So we need the condensed matter physics as opposed to the particle physics.
Bill, do you pay attention to astrobiology at all, and are you open to the idea that advances in that field might well upend certain bedrock understandings of Earthly biology?
Actually, I'll give you a stronger answer than that. In these famous lectures during the Second World War, Schrödinger asked—the title of the lectures was, “What is Life?” I think if you ask most biologists—different people have different attitudes about those lectures. Some people said, “Well, everything that was right wasn’t new, and everything that was new wasn’t right.” Somebody once said Schrödinger’s problem was, he was trying to figure out how physics relates to biology, but he didn't know any chemistry. There are all sorts of views. It’s clear that at a certain moment in history, it was incredibly inspiring for a lot of people. It certainly was for me.
I think that if you ask a biologist—“Okay, so in 1940-something-or-other, this great physicist got up and gave a series of lectures entitled, ‘What is Life?’ So how are we doing on answering this question?”—I think a lot of people would say, “Well, we know the answer.” You've got DNA. You've got proteins. You've got duh-duh-duh-duh-duh. Except the problem is that I recognize what’s alive and what’s not alive without knowing that they have DNA in them. Right? So if I watch something under the microscope, I can tell the difference between a particle of dust undergoing Brownian motion and a bacterium that’s swimming, trying to get somewhere. And more generally, we see behaviors of living systems that are recognizable.
I think that part of the challenge—I think that the notion that there could be life on other planets reminds us that our ideas about what life is clearly have a lot in them that are the historical accidents of our particular planet. It would be sort of like saying, if I ask you what a computer is, and you start telling me about the properties of silicon. Well, yeah, that is what—our mastery over the properties of silicon is what made all this possible, but that’s not what a computer is. So I haven't found in what I know about the astrobiology literature, which is perhaps less than I should, something that really gets at that in a way that I find compelling and useful for me, in my thinking about life on Earth.
But I think the spirit that we should be asking—I think trying to formulate questions in ways that aren’t too grounded in how things are implemented, I think that’s an important challenge. This is a well-established tradition in thinking about the brain. So David Marr talking about different levels of explanation, right? But then you also find—I remember hearing a lecture once where they were talking about high-level vision and low-level vision. So, roughly speaking, high-level vision is I can recognize objects and I can attach names to things. Low-level vision is I can see the edge of something, and I can sort of make out the pieces. Roughly. But the more I listened to the talk, the more I was wondering, so are you sure that the things that you hold up as examples of low-level vision aren’t the things that we know are done by neurons early on in the visual pathway? And more to the point, high-level vision are things—is the set of things that you can’t see how they're done by neurons early on in the visual pathway, so you presume they must be done by things further on in the visual pathway? In which case the question “Is primary visual cortex capable of computing quantities that are relevant for high-level vision?” is a non-sequitur. That question doesn't make any sense.
It might be that the answer is—what you mean by high-level and low-level vision is so closely intertwined with how the visual system is actually laid out anatomically that these neurons accomplish certain computational goals and these neurons build on those goals, and so that’s all there is to it. Except of course there’s feedback, right? So maybe there’s a reason for thinking that that’s not all there is to it. And when you start talking about context-dependence of computation in early sensory areas, you're sort of mixing high-level and low-level.
So I remember at that time, and this is now many years ago, so I don’t know exactly where this all stands, but you could object—it was coherent to object that you had mixed your levels of explanation. The things that the brain is doing should have an abstract—they're solving problems which can be formulated—the brain is solving problems that can be formulated in the abstract. And then, the neurons along the way actually do particular things. They take this abstract problem and they break it up in a certain way. And you could imagine a brain that accomplished the same abstract task but broken up in a different way. And so, there are questions to be thought about here, and separating the abstract from the concrete is useful.
So you could ask the same question about life itself. We only have—in the limit, we so far only have one example. And some of the choices, some of the features, are probably essential to what we mean by being alive. And some of them are—not. And I don’t think we have a very strong theoretical framework for sorting those out. And this is exactly the kind of thing that in going back to the question about having theories for biology in the sense that we have in physics, and how does that relate to the objections that other people would have—this is just not a question that you can ask unless you have the physicist’s point of view. Which features of life as we know it are actually essential to being alive? How do I ask that if biology is defined as the study of the things that I actually find in front of me? Doesn't mean anything.
The geologists understand that there are minerals that don’t actually occur naturally. You can make them; it just turns out that they aren’t made [laugh] in the plates slamming into each other and all that stuff. And that’s okay, and that’s actually useful to think about. Because, for example, if you think about conditions on some other planet, you might find yourself in a different regime where those things now become relevant. So I don’t know, maybe—I think the possibility of life on other planets is a useful foil, as you put it, for a reminder—“Wait a minute. Which of the things that I'm doing are too much tied to the details of what I'm seeing?” But absent a result, I don’t know that it’s anything more than food for thought. But I also don’t follow the field that closely, so I don’t know. And also, if you decide that what you're going to do is go look, then I think there’s a lot of thinking that goes into what you're looking for, which is of the flavor that I've been suggesting is important. And I just don’t know that literature as well as I should to comment on it.
Bill, let’s bring the narrative all the way up to the present. What are some of the major things you're working on currently?
Actually, we've been going for a while, and we've skipped over a few things. So I wouldn't mind answering the question. Maybe I can give a short answer, and we can agree that we need—
We need one more.
Yeah.
Let’s actually hold off on that, because we can come back to that in its proper narrative context.
Okay, all right.
[End 201110_0370_D]
[Begin 201202_0385_D]
This is David Zierler, oral historian for the American Institute of Physics. It is December 2nd, 2020. I'm so happy to be back with Professor William Bialek. Bill, it’s good to see you again.
Good to see you too, David.
This is quite appropriately our seventh session and grand finale to our wonderful series of discussions. First, I just want to say at the outset how special it has been to spend all of this time with you. It’s one of the reasons why I tell people, friends and family, that I have one of the best jobs in the world. It’s because I get to spend so much time with people like you. So I'm really looking forward to this.
Thank you. That’s very gracious.
And I'm going to miss, in advance, having ongoing conversations with you, so I'm just going to really enjoy this in the moment. I want to start, Bill, today—there’s one science topic that I don’t think we covered well in our previous conversations. I want to ask you about the origins of your interest in building statistical physics models for real biological networks. So to start, can you sort of untangle and define that sentence, as you understand it? What are statistical physics models for real biological networks?
So an enormous fraction of the phenomena in the living world that attract our attention are very clearly things that emerge from interactions of large numbers of components. So at some kind of vague level, this is obvious because there’s all these things happening at the molecular level that become macroscopic behaviors. But that might be so vague as to not be useful. More concretely, if I think at the molecular level, then even the emergence of protein structure from the interactions—proteins are big, complicated molecules—the structures that emerge are emerging from interactions among, on the order of 100 up to 1,000 amino acids, and they're not all identical. So there’s something going on there. So that’s collective.
We now understand that many of the things—at an intermediate level of organization, there are many things that are sort of on the micron scale in cells that we used to think were organelles that were bounded by membrane, which we now understand to be sort of condensed droplets of proteins and nucleic acids that self-organize. In many organisms, when an embryo is developing, you see cells moving around to rearrange and build structures, and there’s a sense of the tissue flowing like a fluid, with the role of the particles being played by the cells.
In the brain, your ability to think and to remember and to perceive depends on the activity of individual neurons, but in many cases really is something that emerges from the coordinated activity of thousands or even more neurons. Hundreds of thousands of neurons, perhaps. And on the largest scale, we've all watched a swarm of insects or a flock of birds or a school of fish. And so I think physicists have been attracted by these phenomena, because we think, “Oh, all the birds deciding to fly in the same direction, that’s like all the spins in a magnet deciding to point in the same direction.” It’s a collective phenomenon that we can relate to and we know that the things that are anchor points are things that we really understand deeply, and they have a richness and a beauty as statistical mechanics problems that emerged over a century.
And there’s a long history of people trying to use statistical physics ideas to think about these systems. I guess neural networks is the classic example. So this goes back into the forties, but there were milestone papers in the seventies from Leon Cooper and others, and then in the early eighties from Hopfield, that really touched off a revolution, which has echoes in the modern use of neural networks in artificial intelligence. There was the work by Vicsek and colleagues and by Toner and Tu in the nineties, trying to write down models for a fluid of particles, where the particles were active—self-propelled—and the way in which they moved depended on what their neighbors were doing. And Toner and Tu in particular tried to understand, “Well, how do I go from what you might call the molecular dynamics of those particles—the particle dynamics—to a kind of macroscopic fluid mechanics?”
And the goal really was to understand flocks and swarms. So they unambiguously had in mind the biological example, and realized that this led them into effective field theories which were not the ones you would have gotten at or close to equilibrium. That actually became the field of active matter, which is today a quite large field. And it is unambiguously biologically inspired. The question is, what’s the relationship between that world of statistical physics ideas and the real systems? And are we supposed to think of this as—one way I think about it is, is the statement that the birds in a flock agreeing to fly in the same direction is like the spins in a magnet agreeing to point in the same direction—which, by the way, is not doing justice to what happens in active matter, but just let me leave it at that level for the moment—I mean, is that a metaphor, or is it a theory?
[laugh]
So if it’s a theory, I should be able to calculate things and get them right. And for a long time, it didn't matter, because the measurements weren’t good enough. So the relationship of let’s say statistical physics models of neural networks to what’s going on in the real brain could remain slightly metaphorical, because it wasn’t clear what you would measure. But soon enough, it became possible to monitor simultaneously the electrical activity of ten neurons, 100 neurons, 1,000 neurons. And now this is like sort of peering in and getting a movie of the molecular dynamics of some little droplet of fluid, and now asking yourself, “Well, do I understand—now I can no longer—do all those molecules bumping into each other produce the Navier-Stokes equations, or do they produce something else?” That’s a real question. Whereas before, if occasionally I could watch one molecule moving around, I really can’t tell.
So, in sort of circa 2000, around the time that I was moving to Princeton, these measurements of more and more neurons were becoming—they weren’t quite common yet, but in particular, in the retina, you can record from many neurons simultaneously. And this is an idea that goes back—well, I think it’s really Markus Meister who gets credit for this—Markus realized that the general problem of recording from many neurons is interesting, but the retina is special because it is approximately flat, and so that means that I can build an array of electrodes. And there are people who were building arrays of electrodes and then trying to take neurons out of the brain and get them to grow on the array. So this is a kind of—this is a complicated and messy situation.
But what Markus realized is, “Well, why don’t I just take the retina out, and put it down on the array, and then I'll have access to all of the neurons that are coming out of the—the output end of the retina?” And so I had interacted a bit with Markus and one of the postdocs in his group, Michael Berry, moved to Princeton, and we continued to interact when I was still at NEC. And then when I moved to Princeton, conversations became a bit more frequent. And there was some moment where he asked the question, “Suppose I record from three neurons simultaneously and I tell you that I see some fraction of the time A and B are active at the same time, some fraction of the time B and C are active at the same time, some fraction of the time A and C are active at the same time; should I be surprised when I see all three of them active at the same time? Suppose I only tell you about the pairs, and then I see the triplet. What’s my expectation?”
The answer, of course, is that if the only things you measure are the pairs, then the triplet just could be anything. But is there a natural guess which, when violated, would tell you something interesting? And so there were a number of discussions. And this is edging into the—it takes some time for these things to happen, so I don’t remember the exact years. What a couple of the postdocs who were around and I realized is, wait, there is a natural answer, which is the maximum entropy principle. So if I want to describe a probability distribution, and I can’t measure everything that I need to know in order to define the entire probability distribution, which is the normal case, because probability distributions over—if I have many variables, then the probability distribution is one number for every possible state of the system. The numbers are all positive and they add up to one, but otherwise they could be anything. And already, if I have ten variables, each of which can be in one of two states, zero or one, I've got about a thousand states for the system as a whole, and it grows exponentially. So by the time I'm talking about a small group of neurons, there’s no way I'm going to do an experiment to measure the probability that the network is in all possible states. So I can measure some things, but I can’t measure other things.
So the simplest version of that is this problem with three neurons, where of course you could measure everything, but you could at least imagine not doing it. So what you could say is, “Well, every time I give you a constraint, that tells you something about the probability distribution, but it doesn't tell you the whole distribution.” But then there’s an idea that goes back—it was really articulated by Jaynes in sort of the immediate aftermath of information theory—Shannon information theory—where he said, “Ah, I now understand, thanks to Shannon, that the entropy that we all know and love in statistical mechanics and thermodynamics also has this informational interpretation. The entropy of a probability distribution is the amount of information you would gain, on average, if I told you what state the system was in.”
So this is something that you learn in your statistical mechanics course in some vague way—that the entropy has something to do with our uncertainty about the state of all the molecules. But that sounds too vague. And in fact what Shannon proved is that it’s not vague. It is literally the amount that you don’t know that you would learn if I told you what all the molecules were doing. And it is the only quantity which gives you a consistent definition, a consistent mathematical definition, of that information that’s available.
So what Jaynes said was—and this produced all sorts of arguments—is this a physical principle? Is it a statistical principle? Is it a principle of inference? Is it a philosophical idea? I don’t know. I think we can use the idea without entering into a lot of those debates. What he said was, if I want to write down a probability distribution and I only have a certain set of constraints, then what I should do is to maximize the entropy that’s consistent with the constraints, on the grounds that that leaves the largest possible room for variation in the system. I'm putting into the distribution only the minimum amount of information, the minimum amount of structure, that I need in order to satisfy the constraints, and no more.
I often think of this as being the opposite of what theorists usually do. Usually, we have a very definite principle in mind, and we're trying to sort of bend nature to our will. We're trying to make the data look like what we think it should look like. We have a prescriptive relationship to nature. This is descriptive. So, what you could say is, if I go back to Michael’s question about the three neurons, well, you've measured the pairs. There is a maximum entropy distribution that’s consistent with the correlation in the pairs and that makes a prediction for what the triplet—how often you should see the three of them be active at the same time. And if you see the three of them active more than that, then that’s something interesting. And so he said, “Oh, huh, that’s cool.”
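As a minimal sketch of that construction (the numbers below are made up for illustration, not taken from the retina experiments), one can fit the maximum entropy distribution over three binary neurons to given single-cell and pairwise co-activation probabilities, and then read off its prediction for the triplet:

```python
# Sketch only: maximum entropy over three binary neurons, constrained by
# hypothetical single-cell and pairwise co-activation probabilities.
import itertools
import numpy as np

states = np.array(list(itertools.product([0, 1], repeat=3)))  # all 8 states

p_single = np.array([0.10, 0.12, 0.08])                  # P(neuron i active)
p_pair = {(0, 1): 0.030, (0, 2): 0.020, (1, 2): 0.025}   # P(i and j both active)

def model_probs(h, J):
    """P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)."""
    logw = states @ h
    for (i, j), Jij in J.items():
        logw = logw + Jij * states[:, i] * states[:, j]
    w = np.exp(logw)
    return w / w.sum()

# Adjust the fields h and couplings J until the model reproduces the
# measured marginals; this is the (tiny) inverse problem.
h = np.zeros(3)
J = {pair: 0.0 for pair in p_pair}
for _ in range(20000):
    p = model_probs(h, J)
    h += 0.5 * (p_single - p @ states)
    for (i, j) in J:
        J[(i, j)] += 0.5 * (p_pair[(i, j)] - p @ (states[:, i] * states[:, j]))

p = model_probs(h, J)
triplet = p @ (states[:, 0] * states[:, 1] * states[:, 2])
print("predicted P(all three active):", triplet)
# Seeing the triplet much more (or less) often than this in the data would
# be the interesting, "surprising" signal.
```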
And also, this has a deep connection to statistical mechanics, because if the system you were talking about was a mechanical system, so the energy was a well-defined quantity, and the only constraint that you gave was on the average energy, the maximum entropy distribution consistent with the average energy is the Boltzmann distribution. So you sort of build this bridge between all of your statistical mechanics formalism and a collection of things you can measure on real complicated systems.
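In symbols, that standard connection (restating the textbook result, nothing specific to these experiments): maximizing the entropy subject only to normalization and a fixed average energy yields the Boltzmann form.

```latex
% Maximize  S[p] = -\sum_s p(s)\ln p(s)
% subject to  \sum_s p(s) = 1  and  \sum_s p(s)\,E(s) = \langle E\rangle.
% Lagrange multipliers (one for normalization, \beta for the energy) give
p(s) \;=\; \frac{1}{Z}\,e^{-\beta E(s)}, \qquad Z = \sum_s e^{-\beta E(s)},
% with \beta fixed by the measured value of \langle E\rangle.
```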
So we started down this path, and to be perfectly honest, our expectation was that there were all sorts of interesting combinatorial things happening. That you'd find three neurons active simultaneously with a probability that was very different from what you expected from the pairs. And not just three, but four, and five, and ten. Somehow, what we would be doing by constructing the maximum entropy distribution was constructing something which was almost a null model in the statistics sense. That it’s the uninteresting part.
The first thing that happened was we built maximum entropy models for groups of roughly ten neurons in the retina while the retina was watching a movie of some reasonably natural scene. So kind of rich stimulation. And lo and behold, they just worked. They explained—you could compute things in the model that are not just the pairs. You could compute the triplets, and you could compute all sorts of other things. What’s the probability that k out of n neurons are all on at the same time? It all worked.
This paper finally came out in 2006 and touched off quite a lot of discussion. So we started to realize, well, okay, so first of all, you shouldn't overinterpret the fact that this particular set of constraints—I would say looking back—I think we understood this then, and I certainly understand it now—the fact that choosing this particular set of measurements as the constraint on the overall probability distribution, the fact that that was successful by itself maybe doesn't mean so much.
But what it does tell you is that you immediately go from a set of measurements on the system to—you have a strategy for using the things that you measure that you think are important, to build a model which has the form of a statistical mechanics problem that you understand. So in particular, if you think about the example of neurons and pairwise correlations and all that stuff, what you get is a model for the joint distribution of the activity of the neurons simultaneously, which is a distribution that looks like the Boltzmann distribution, with an energy function which is the energy that came out of the Hopfield model for neural networks. But the way Hopfield got to his model was he wrote down a model for the dynamics. He waved his hands and threw lots of things away. He simplified things and made sort of clever abstractions to the point where maybe at the end you weren’t even sure whether—does spin-up mean that the neuron is on because it generated an action potential, or does it mean that it’s on because it was on for some sustained period? You lost a lot of microscopic detail. Which was fine; that was the goal of the model-building. Here, the variables are literally the variables that you're measuring. In the particular case we started with, it’s you take a very small window of time, and spin up means there was an action potential and spin down means there wasn’t. Totally unambiguous.
Furthermore, once you match the pairwise correlations, the model that you write down does not have any free parameters left. So you have this clean separation—I measure these things, and they determine what the model is. Which is some particular statistical mechanics model, which is in a class that we know about, but it’s a particular one, that describes this network. And then, from that, you can compute anything you want, and either you get it right or you get it wrong. So that’s the spirit. So can we do that?
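Written out, the model being described has the following form, with σ_i = 1 if neuron i spikes in the small time window and 0 otherwise:

```latex
P(\sigma_1,\dots,\sigma_N) \;=\; \frac{1}{Z}\,
\exp\!\Big(\sum_i h_i\,\sigma_i \;+\; \sum_{i<j} J_{ij}\,\sigma_i\sigma_j\Big),
% with the fields h_i and couplings J_{ij} fixed, leaving no free parameters,
% by the requirement that the model reproduce the measured
% \langle\sigma_i\rangle and \langle\sigma_i\sigma_j\rangle.
```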
And then of course you have to remember—so there’s two problems here. One is, do we believe that these models are accurate models of the real system? And the truth is that most of our work has been on that problem. But then, it’s sort of like in statistical mechanics itself—Feynman has this comment, he said, “The Boltzmann distribution, it’s sitting at the top of the mountain, and either you can try and climb up, or you can take the long slide down.” It’s not the end of the subject. But the fact that I wrote down this model—what is this telling us about the real system?
For example, one of the things that I find completely fascinating is, if you really take this seriously as a statistical mechanics problem, and you start thinking about bigger and bigger networks, well, in the limit that the network becomes infinite, you know there’s a phase diagram. And what phase is the real system in? What does the phase diagram in this model look like? Where is the real system in the phase diagram? Is it deep inside one phase? Is it near a phase transition? And so on. So by 2005, 2006, we had done the sort of trial run of all this on small groups of neurons, and then we started working in many different directions. So that was a very long answer to the “What do these words mean?” question.
And Bill, where does renormalization come in from this? And establishing the intellectual heritage, are these ideas—how far afield would they have been to somebody like Ken Wilson, for example? Would he have recognized the connections you're making? Or is it only as a result of what he did that you were able to build on something that may or may not have ever occurred to him?
This question is really interesting. When you're talking about N equals ten neurons, you probably shouldn't make these connections. As you start making the number larger and larger, you might ask yourself—so one question you might ask is, why should simple models work in the first place? When I write down the Ising model for a real magnet, there is no real magnet that microscopically is well described by the Ising model. Certainly not by the nearest neighbor Ising model. And yet we know that you can use the nearest neighbor Ising model and compute properties of macroscopic behaviors of magnets. You can compute macroscopic properties. And in particular, as you get closer and closer to the critical point, those predictions become quantitatively accurate. It’s not just that you qualitatively get the phase diagram right. It’s not a metaphor, right? You actually get critical exponents, right? You get the existence of scaling. You get all the phenomenology right. So why is that?
We now know—the renormalization group is the answer to that question. It tells you that as you go from the microscopic description to macroscopic behaviors, models have a tendency to get simpler. And there are terms that you could have included in the microscopic model which are relevant, in the sense that those survive and in fact become more important on the larger scale. And there are ones which are irrelevant. And in the very technical sense, not just the colloquial sense, but the technical sense, that they become smaller and make smaller and smaller contributions to more macroscopic behavior.
So part of the resonance with renormalization group is in some sense the license to consider models which are not microscopically realistic. That as long as the terms that you include in your model encompass all of the relevant operators, you will get the macroscopic behavior right. You might have to adjust your parameters. So the coefficient that you put in front of some interaction cannot be interpreted as the actual microscopic interaction, the strength of that actual microscopic interaction, because you left a bunch of stuff out. And you make up for it by shifting the value of this coefficient. But that’s okay because all the macroscopic quantities come out right.
So part of the import—so I should say, I think that many of us in let’s say in my generation, we internalized this as license to make simplified models for complex systems, not just in biological systems. But it’s also true that in many important cases, we don’t know how to do an honest calculation that would implement these ideas. It’s sort of a blanket license. You say, “Well, appealing to the renormalization group, I can write down something simple.” And that was where I think things were in this enterprise for a long time.
I would say, for instance, the ideas about flocking were not of this flavor because Toner and Tu had actually done renormalization group calculations, and others following them. That was the beginning. It would take us 15 years until we started to think about, “Oh, wait, now that our experimental colleagues are recording from a thousand plus neurons”—you know, a thousand is a lot of factors of ten, right? It’s ten to the third—"So maybe I can think about really implementing the ideas of coarse-graining and flow from microscopic description to macroscopic description in the real data. Can I use things that are—can I turn some of the language of the renormalization group into tools for analyzing the experimental data?”
But that’s something we've only done in the last couple of years. We certainly weren’t there circa 2005. But what I would say is—you asked me what would be the reaction. What I would say is that this stream of work—it had an immediate resonance in the statistical physics community. Maybe even more than in the neuroscience community, where a few people were scratching their heads. And they were scratching their heads not only about, oh, well, these ideas from physics, but also about real questions about what is this telling us about real networks of neurons and so on.
In particular, if you think about what we were doing, what we were doing in order to get started, we had to do the inverse of the usual statistical mechanics problem. So the usual statistical mechanics problem is I give you the Hamiltonian, and you compute the correlations. Here, what we're saying is, experimentalists give us the correlations, which don’t have any symmetries to them and things like that, so I can’t re-express them in momentum space or anything like that. You give me the Hamiltonian. That’s the construction. And we're trying to build the probability distribution by having observed the correlations. And that inverse statistical mechanics problem also sort of caught on as a—can I solve that in certain approximations and so on. We tended to be a bit brute force about it, because we wanted to be sure that we weren’t tangling up our notions about approximation schemes with our interpretation of the real data. But other people have been doing really interesting things in developing approximations.
I think the other thing that happened was we very quickly started to see connections between our use of these ideas in the context of neurons and in other systems. So one line of work became, “Well, let’s sort of march forward. We could do ten neurons. Could we do 40 neurons?” Our experimentalist friends—Michael Berry and his colleagues, with the next generation of students and postdocs—rebuilt their electrode arrays so that they could record simultaneously from somewhere between one hundred and two hundred neurons in a small patch of the retina. Okay, can we push these things to describe 100-plus neurons?
We would eventually start collaborating with my colleague David Tank and Carlos Brody, looking at data from eventually up to, I don’t know, a couple of thousand neurons in the hippocampus, using very different techniques, using these optical recording techniques, which sort of trade time resolution for spatial scale. So there was one line, which was let’s keep thinking about more and more neurons, and at each stopping point, you ask yourself, if what I am seeing is a sample out of a really large network, and it’s like this, what would that large network be like?
And one of the things that was enticing was that more and more it seemed like that network would be sitting somewhere near a critical point. And that’s a very controversial idea. I still don’t know whether that’s the right interpretation of the data or not. There are people who are absolutely convinced it can’t be right. There are people who view it as completely obvious that it must be right. I view it as a question of experiments, so we should keep going. But the other thing that happened almost immediately was we started to see connections to other systems. So let me pause. So I tried to give you a quick version?
Mmhmm.
So shortly after we got our first results with small groups of neurons, Rama Ranganathan, who at the time was in Dallas at the University of Texas Southwestern, came and gave a talk. And he talked about experiments in which he took a family of proteins—so you see proteins that are related to each other that occur in many different organisms, or playing slightly different roles in the cell, in the same organism. And if you line up their amino acid sequences, well, a traditional thing to do would be to look—let’s look at amino acid number 17, and it’s always a histidine, so I guess it’s important that there’s a histidine there. But amino acid number 23, sometimes it’s an alanine and sometimes it’s a glycine. And so I guess I can flip those, and it’s okay. So there was this tradition of characterizing the probability of amino acid usage or conservation at single sites and using that as a description of the whole family of proteins and trying to say something about evolution and functionality and everything else.
But what he did was, he said, okay, let me imagine that I choose—and I have a family for you, which is an example—I have a thousand proteins that all come out of the same family—let me go to site number 17, look down the thousand of them, and say, okay, there’s some probability there’s an alanine there. I'm going to make a protein by choosing the amino acid number 17 at random out of that distribution. And now I go to site 18, I'm going to choose independently. If you do that, what you make is glop. You don’t make functional proteins.
So then he said, “Aha, maybe I need to remember that sometimes the substitution over the course of evolutionary time, if the amino acid here changes, the amino acid over here also changes, in a correlated way.” And this is something that people had commented on in various ways, but the intuition was, “Oh”—for example, if they're touching each other, in the three-dimensional structure—well, if you change the charge of one of them, you better compensate by changing the charge in the other. More subtly, if you make one of them a little bit bigger, you better make the other one a little bit smaller, or you're going to disrupt the packing. So that was the idea behind looking for these correlations.
So Rama and his colleagues developed a strategy for generating a whole new family of sequences in which the probability of using amino acids at each site was the same as in the original family, and the pairwise correlations were the same. And then when he synthesized proteins that were drawn at random from this new ensemble, with some reasonable probability, they actually folded and worked. And so his argument was, “See? You only need to keep track of the pairwise correlations.”
So he came and gave a talk about this, and I heard that and I said, “It’s gotta be that what he’s doing is the same thing that we're doing with our neurons.” Where the identity of the amino acid plays the role of the state of an individual neuron. And he’s building a model for the distribution which is consistent with the pairwise correlations, but anything higher-order emerges without any additional constraint. The problem was that his strategy was a very seat-of-the-pants strategy which ended in really making molecules, which is great, but he never actually wrote down the probability—his way of doing it avoided writing down the actual probabilities.
So what happened is that my sister-in-law was getting married, and she lived in Texas. So I wrote Rama, and I said, “I have to be in Texas for personal reasons. Can I come visit you?” And he said, “Sure.” And I sat down in his office and said, “Tell me exactly what you do.” And we went through all the details. And I went back to the hotel room, and I convinced myself that there was some limit in which the sequences that he was synthesizing are indeed drawn from the probability distribution that has maximum entropy consistent with the pairwise correlations. But I know how to write that down, because I know something about statistical mechanics. I still have to solve the inverse problem, but I know the form of the distribution.
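That form is the same maximum entropy structure as before, now over amino acid identities rather than binary spins; roughly, writing a_i for the amino acid at position i of the sequence:

```latex
P(a_1,\dots,a_L) \;=\; \frac{1}{Z}\,
\exp\!\Big(\sum_i h_i(a_i) \;+\; \sum_{i<j} J_{ij}(a_i,a_j)\Big),
% with the functions h_i and J_{ij} determined by matching the observed
% single-site amino acid frequencies and pairwise correlations in the family.
```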
So we wrote that up and stuck it on the arXiv. And the truth is, we've never gone back to publish it. Rama recently wrote to me and said, “You know, this has become a big enough deal that maybe we should go back and do this.” It’s not such a big deal in the sense that lots of people cite that paper. But a handful of physicists got interested in this and realized—this connection, and then it was those papers that discovered that if you construct this statistical mechanics model, then the things that look like interaction energies are much more short-range than the correlations that result from them. Which we're used to in statistical mechanics.
But in particular, they're sufficiently short-range that if you pick the biggest one, you tend to pick amino acids that are in contact. And that means that by looking at the statistical structure of a family of proteins, you can infer something about what their three-dimensional structure is. That is remarkable. We didn't do that. However, what we did do was to say, “Oh, the right language for describing these models is this maximum entropy thing.” And then that, once you have that form, then you can—Rama, as I said, hadn’t actually written down the distribution. So the notion that there are effective interaction energies in those coefficients, that was something that came out of our work. I couldn't predict that that was going to have all of these consequences, but that was fun, and has been fun to watch that field develop.
Not long after this, I had a sabbatical, and we went to go spend a semester in Rome. So this is all happening—so we put the first pre-print about the neurons on the arXiv in 2005. The preprint with Rama is 2007. And immediately after that, in Spring of 2008, we go to Rome for a semester. Parenthetically, one of the things that we had been looking forward to in Rome was to spend time with Daniel Amit. I don’t know if you know—knew Dani. Was a remarkable character. And I had had many wonderful interactions with him. He had been my host in Rome many times. Fascinating guy, both for his physics and for his politics. And I had had the experience once of being in Jerusalem for a month, and I was given his office, because he was in Rome, which was fascinating, because I was just surrounded by this person that—the things of this person that you know. And unfortunately, he had had rather serious health problems, never completely recovered, and passed away shortly before we were—in late 2007. So anyway there was this kind of gap, a loss.
And so my hosts were then from the older generation. It was Giorgio Parisi; Giorgio was always very gracious, and was on this occasion as well. And I didn't really have a definite plan for what I was going to do while I was there, and talked to various people. And then one day, over lunch, somebody said—they said to each other, “Did you hear the radio this morning?” There was a story on the radio about the work that was done by some of the people in the Rome physics community, one of the groups in the Rome physics community, about flocking in birds. And so there was this whole discussion. This is how it started, right? There was a call for—is this relevant or not? This is too much detail. Anyhow. The important thing is there was all this excitement about basically one of the groups in Rome, and Giorgio had been involved in this, as had Nicola Cabibbo—they had decided that since there were all these models out there, but no data, they should go look. So they set out to actually measure three-dimensional positions and velocities of birds in a flock, and the example they chose was the starlings that flock over Rome in the evenings in the spring and late winter.
And we end up having this long conversation about this, and so it’s arranged that I should go meet these people. And then I go to meet Charlotte, my wife, that evening. We had agreed to meet in front of a particular museum which is very close to the Termini station in Rome, which turns out to be the building on top of which they set up the apparatus for observing the birds.
Oh, wow. [laugh]
Which involves another story. How do you get—it’s an old building. How do you get permission to set something up on the roof of it? There is a ministry that worries about science and education. That’s a different ministry than the one that worries about—
Preservation.
—preservation of buildings, which might be the same one as the one that runs the museum, but I'm not sure. Anyhow, it’s Italy; it’s complicated. I think this one would be complicated here, too. And as I'm approaching, I'm realizing that this museum actually has two branches. And so my main concern is whether in fact we've agreed to meet at this particular one or if it’s the other one. But then, we find each other, and Charlotte explains to me that she has just seen this extraordinary thing, which is the display of the starlings in the evening over the square of the train station. And I explain that of course the most interesting thing that happened to me that day was I had this conversation with my physics colleagues about the same phenomenon!
So very quickly, I got to meet Andrea Cavagna and Irene Giardina, who were leading this effort, and they both came from the statistical mechanics tradition. In fact, they had been Parisi’s students, then gone off and come back to Rome. And for one thing, we just hit it off personally. They were quite delightful. As things would develop, they became close friends. Their daughter was a bit older, their son had just been born. In fact, the first day that Irene came back to the department was when I was teaching a course, and so that’s when I met her. And we realized, oh, this is perfect with the maximum entropy idea. So instead of doing let’s say the sort of path of Vicsek, Toner, and Tu, of saying, “Let me imagine what the dynamics is like and then coarse-grain that theory”—look, I measured the correlation; build me the maximum entropy model that’s consistent with those correlations.
And so the first thing took time, and there’s a lot of—there are a lot of things to get right. And one of the things that I've really come to appreciate, which echoes my interactions over the years with other people—I probably talked about this already—this drive for the measurements to be very precise. The care in the reduction from raw data to things that we actually want to talk about. You don’t actually measure three-dimensional positions, right? You take movies from multiple synchronized cameras, and infer their position and velocities. There are a lot of steps there, and there are a lot of places you can go wrong. But the data are really beautiful, and so the first thing was to think about a flock in which we don’t worry about the fact that the birds don’t all fly at the same speed. So birds are just unit vectors. And then the maximum entropy model that’s consistent with pairwise correlations is some sort of Heisenberg model, on some funny lattice.
And again, we decided that for the sake of discussion, we would think about just snapshots. So again, you look at where they're all going; you don’t worry about how they're moving, which is of course at most half the problem. And it was extraordinarily successful. You could predict—you could basically say, “Well, I think that what’s important is my correlation to my neighbors, because I don’t believe that this bird is looking at the bird on the other side of the flock.” So if that’s true, then I can write down something where I take an average over my neighbors—I look at what the birds in my neighborhood are doing, and I look at the average correlation that I have with them, and that’s characteristic of—this one number that’s characteristic of the ordering of the flock. And we build the maximum entropy model that’s consistent with that one number.
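Schematically (simplifying the actual construction), with each bird's flight direction treated as a unit vector s_i and N_i the set of its near neighbors, the model takes the form

```latex
P(\{\vec{s}_i\}) \;=\; \frac{1}{Z}\,
\exp\!\Big(\frac{J}{2}\sum_i \sum_{j \in N_i} \vec{s}_i\cdot\vec{s}_j\Big),
% a Heisenberg-like model on the flock's neighbor graph, with the single
% coupling J fixed by matching the measured average correlation between
% a bird and its neighbors.
```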
And then from that, you predict everything else. You predict the correlations as a function of distance. You predict the sort of degree of polarization of the entire flock. The degree of polarization of birds that are a certain distance from the boundary. You predict correlations among groups of four birds that are in excess of what you would have expected from the pairwise. It all works. And then we realized, “Oh, but wait you can do more.” You could use the same basic principle, but now include the fact that the birds have fluctuating speeds.
And something deep happens here, because if you think about the case without the fluctuating speeds, you notice that the fluctuations in direction in which the birds are flying—on average, they're all going in the same direction, but they fluctuate, and the correlations of those fluctuations extend over very long distances. And then you realize, oh, actually, I shouldn't have been surprised by this. Because of this mapping into the world of Heisenberg models, if the correlations are strong enough, then you're in the magnetized phase. And so the fact that they all point in the same direction really is a broken symmetry. I've broken a continuous symmetry, and that means there should be Goldstone modes. And so these long-range correlations that you see are the Goldstone modes. And so, that was beautiful, right? But then you also see long-range correlations in the fluctuations in speed. And there’s no continuous symmetry that’s being broken there. Because it is not true that flying at all possible speeds is the same thing, right? There’s friction. And you might think, oh, wait, Galilean relativity; I can boost—no! [laugh] You're moving relative to the air.
So what that means is that if you were going to explain—try and explain the long-range correlations and speed fluctuations in this family of models, you would have to tune them to a critical point in order to—and so the long-range correlations would be critical fluctuations. But the way we construct the model is not by fitting the long-range correlations. The way we construct the model is by matching the short-range correlations, the correlations with your neighbors. And it turns out that the strength of those correlations is just right to poise the entire model at its critical point. And then you get the long-range correlations for free, quantitatively.
So this really—for me, this was quite extraordinary, because we're basically—we're matching two numbers—the correlation to your neighbors, and the variance of the speed. Right? And then again, everything follows from that. The whole shape of the correlation function comes out of that. And then you get this idea that something more general is going on. And so, here we have these very different examples—networks of neurons, sequence variation in families of proteins, and flocks of birds, where this same set of statistical physics ideas were being used to describe the structure of correlations and predict new things. As I said, we weren’t so involved in the protein families work, but then for the neurons and the flocks, it was even weirder that not only could you use the same statistical physics language, but you even found that in both cases, you were near to some kind of critical point or surface. And that that seemed to be the natural thing that was going on.
And again, it’s not clear that that’s the right interpretation of that. People are still—but I think the idea that you can use statistical physics ideas to describe the real thing—it’s not a physicist’s idealization; it’s the real data. And what comes out of it is the most accurate predictions that anybody has had for these kinds of complex systems. So for example, we used these ideas in the hippocampus, where my colleagues David Tank and Carlos Brody were doing experiments recording from roughly 100 and eventually thousands of neurons at once while the mouse runs along a track in virtual reality. The data were collected by Jeff Gauthier; it’s an amazing set of experiments. The hero of the story is Leenoy Meshulam, who decided that she would like to do her PhD thesis with David, Carlos, and me together. She saw an opportunity and went for it. She built the maximum entropy models for the joint distribution of activity in all these neurons, matching the pairwise correlations; then you predict the correlations among triplets of neurons, and you get them all right. There are 100,000 of them, and you get them right within error bars. So you basically get as accurate a model of this network as you possibly can.
You can predict, if I tell you the state of N minus one neurons in the network, what’s the probability that the last neuron would be on or off, and you could show that the structure of that is exactly what the theory predicts, neuron by neuron, moment by moment in time. And this doesn't follow in any simple way from any of the other things that we knew about the network. That’s a long discussion to be had, but it doesn't—in particular, if you make parameter-rich but conceptually simple, biologically-motivated models of the network, then you don’t get the triplet correlations right at all. It’s a disaster. So qualitative ideas that encapsulate what we knew about the biology just don’t hold up quantitatively, whereas these statistical physics models, they just get everything right.
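In the pairwise model written above, that conditional probability has a simple closed form: the probability that neuron i is active, given the state of all the others, is a logistic function of its "local field,"

```latex
P(\sigma_i = 1 \mid \{\sigma_j\}_{j\neq i})
\;=\; \frac{1}{1 + \exp\!\big[-\big(h_i + \sum_{j\neq i} J_{ij}\,\sigma_j\big)\big]},
% so the prediction can be checked neuron by neuron, moment by moment,
% against the recorded activity.
```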
Bill, given the fact that this body of research really elucidated some connections between theory and experiment, I want to zoom out a little bit and ask you what that might mean about biophysics more broadly. So for example, just to set the stage, if we look at accelerator physics or particle physics, it’s a pretty stove-piped operation in terms of the theorists and what they do, and the experimentalists and what they do. Is this work suggestive that those bright lines are not necessarily as bright in the world of biophysics generally?
So I think there are two things to say. One is that when I look at the work that we did—we talked about these ideas about decoding signals in the fly embryo, and telling where a cell is to within 1% accuracy, and then using that to build maps of what would happen in mutant embryos. Then I look at the precision of these predictions in the hippocampal network or in the flock of birds. And by the way, to close the circle, in the hippocampal network, when experiments got to the point of looking at thousands of neurons simultaneously, we tried to look at renormalization group motivated coarse-graining ideas. And we used those to look at the data, and we found scaling behaviors and we found distributions of coarse-grained variables approaching fixed forms that are not trivial. So that connected back to that whole set of ideas, and that’s very much something that we're still thinking about. But there again, the quantities that came out—there were exponents that described the scaling that were reproducible in the second decimal place from one animal to another.
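One simple way such a correlation-based coarse-graining can be set up, shown here as a toy sketch on surrogate data (the numbers and the surrogate signal are illustrative, and the real analysis is considerably more careful): repeatedly pair each variable with its most strongly correlated partner, sum the pairs, and watch how statistics of the coarse-grained variables change with cluster size.

```python
# Sketch only: greedy correlation-based coarse-graining of binary "activity".
import numpy as np

rng = np.random.default_rng(1)
T, N = 5000, 256                                   # time bins, variables
common = rng.random(T) < 0.05                      # shared input -> weak correlations
x = ((rng.random((T, N)) < 0.02) | common[:, None]).astype(float)

cluster_size = 1
while x.shape[1] >= 2:
    print(f"cluster size {cluster_size:4d}: "
          f"mean variance of coarse-grained activity = {np.var(x, axis=0).mean():.4f}")
    c = np.corrcoef(x, rowvar=False)
    np.fill_diagonal(c, -np.inf)
    # Greedy matching: take the most correlated unused pair, sum it, repeat.
    order = np.argsort(-c, axis=None)
    used, new_cols = set(), []
    for flat in order:
        i, j = np.unravel_index(flat, c.shape)
        if i == j or i in used or j in used:
            continue
        used.update((i, j))
        new_cols.append(x[:, i] + x[:, j])
    x = np.column_stack(new_cols)
    cluster_size *= 2
# One then asks whether such statistics scale with cluster size, and whether
# the distributions of the coarse-grained variables approach fixed,
# non-trivial forms.
```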
So, what I would say is that this body of work or let’s say this generation of work, the work of the last ten years, what’s characteristic of it is that we've achieved a level of precision in the comparison between theory and experiment that I don’t think anybody would have believed was possible, certainly when we started. That was the reason not to do this—“You're never going to be able to do that.” Now, I think it’s also true that this has been a period—and how long this period will last—biophysics isn’t one enterprise, because biology is so vast. So I think that this period has been one in which very close interaction of theory and experiment has been the way that my colleagues and I have made progress. I don’t want to say it’s the only thing. And I think in that sense, it’s more like what I hear about condensed matter physics from around when it was born.
And also, I would say the fact that in particle physics, theory and experiment have so diverged, this is in some sense because the theories that coalesced in the 1970s just worked. And so you keep calculating things, and they keep measuring things, and they keep coming out right. If you look at the discussion surrounding dark matter searches, for a while it seemed like everybody knew what they were looking for, and so theory and experiment again didn't really have—just go look, right? But suddenly you realize—it’s getting to the point where the things you thought you were looking for might not be there, and so we need to look for something else. So what’s the something else we should be looking for, and do we really understand what their signatures would be, if we found them? And so there are places where the theory and experiment are getting closer together again. Obviously, theory maybe with a smaller “t” than other parts of the theory.
Okay, so I think that it is important—I think that achieving the level of theory/experiment comparison that we've achieved goes a long way toward justifying the notion that there is a theoretical physics for biological systems. Because we can’t achieve that kind of comparison when it’s not clear what we're doing. On the other hand, it’s a moment where it seems like, or there has been a period where it seems like very close interaction is important. But it’s interaction—the principles that we were using in thinking about—the statistical mechanics things, I think it’s more like we kind of knew what to do. And then the question was, what happened when you did it with real data?
In the work on the fly embryo, less clear what you should do. There were real theoretical questions that had to be answered in order to say, “Oh, this is the”—and there was give and take. You do something theoretical, and then that tells you that if the numbers come out this way, then the next theoretical question is this, but if the numbers come out that way, the next theoretical question is something else. So yeah, I think it’s unfair to the history of physics to say that theory and experiment have diverged. I think there have been crucial moments—I don’t know, I think about the discovery of superfluid helium-3.
Right, right.
There was an enormous amount of theory-experiment give and take, just to figure out what you were looking at. Even though people knew what they were looking for. In fact, they’d known what they were looking for, for quite some time! So yeah, I think different fields, theory and experiment move relative to each other. I think in the physics of biological systems, the first challenge is, is there a physics of biological systems that’s not just physics tools, biologist’s questions? And then in that world, is there theory, either capital “T” or small “t,” that has an independent existence, other than just, “Oh, I need to interpret what the experimentalist saw”? So I would give a kind of yes and no answer.
Bill, last question on this topic—going back to this idea of wondering what’s a metaphor and what’s a theory, I want to ask you if we can bring the concept of neural networks beyond statistical physics and into astrophysics. And by that I mean, there are astrophysicists, cosmologists, who say things like—like Lenny Susskind will say something like he suspects that the universe is a neural network. That’s clearly some kind of metaphor, and we don’t have any idea what it actually means. But from your perspective, your expertise, how might you understand how to apply the concept of a neural network not to biological systems but to the whole universe?
So a couple of reactions. Let me try to keep them brief. First reaction is, if you go back to the early days of neural networks, I think Hopfield’s insight was to emphasize that computation is a dynamical process. That a machine that carries out an algorithm of course has equations of motion. And so in some sense, the equations of motion embody the algorithm. And sometimes, if I know that it’s an algorithm for doing something, I don’t have to solve the equations of motion, because I know where I'm going to end up. And so it’s sort of like saying that dissipative dynamics ends at the minima of the energy, the free energy, systems come to equilibrium. So that idea that dynamics embodies computation, that’s a very deep idea. It’s an idea that can be dangerously general. So that a natural reaction to “what’s being computed by the earth going around the sun”—okay, I don’t think that celestial mechanics is strongly illuminated by this observation, right?
[laugh]
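[ed. As an illustration of the idea that “dynamics embodies computation,” here is a minimal sketch—not taken from the interview—of a toy Hopfield-style network in Python: patterns are stored in Hebbian weights, asynchronous updates can only lower an energy function, and the “answer” of the computation is simply the attractor the dynamics settles into. All sizes, seeds, and variable names are illustrative assumptions.]

    import numpy as np

    rng = np.random.default_rng(0)

    # Store a few random +/-1 patterns with a Hebbian (outer-product) rule.
    N, P = 100, 3
    patterns = rng.choice([-1, 1], size=(P, N))
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)              # no self-coupling

    def energy(s):
        # Hopfield energy; asynchronous updates never increase it.
        return -0.5 * s @ W @ s

    # Start from a corrupted copy of pattern 0 and let the dynamics run.
    s = patterns[0].copy()
    flipped = rng.choice(N, size=25, replace=False)
    s[flipped] *= -1

    for sweep in range(10):
        for i in rng.permutation(N):      # asynchronous, random-order updates
            s[i] = 1 if W[i] @ s >= 0 else -1
        print(sweep, energy(s), np.mean(s == patterns[0]))

[ed. Running this, the overlap with the stored pattern typically goes to 1 within a few sweeps: the computation—“recall the nearest stored memory”—is carried out entirely by the dissipative dynamics relaxing to a minimum of the energy.]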
On the other hand, you could say, well, in the process of computation, information gets moved around. Some of it gets destroyed. Some of it gets copied over in other places, and so on. And certainly in biological systems thinking about how information gets moved around I think is very important. And then you can think about, well, on the sort of grand cosmic scale, there’s a question of how information is being moved around. The version of this that most people would be interested in is the black hole information paradox, which is of course a quantum mechanical phenomenon, not a classical phenomenon.
But it’s also true that—so, first answer was, okay, dynamics embodies computation, and so maybe that is a useful perspective on the universe writ large. Second point is, there’s the movement of information, and things becoming correlated with each other. Information gets shifted around. Information gets transferred. Information gets destroyed due to dissipation. In the quantum version of that, you have things like the black hole information paradox. If you think about some approaches to the black hole information—already one of the questions that gets asked along the path to think about the black hole information paradox is, if I have a dynamical system and I can’t see inside it—in this case because it’s beyond the event horizon—but I can see things at the surface, what’s the relationship between what I see at the surface and what’s going on inside? Well, without either the beauties or the complexities of general relativity and quantum mechanics, that’s a very biological question. That’s what we're doing right now. You have access to what I'm doing, with a very limited bandwidth. And the goal is for me to communicate to you what’s going on inside, right? [laugh]
So are there relationships—and that’s something which has not been exploited perhaps as much in the truly neural setting as it could be, but I think it’s an interesting connection. Other than that, I think it’s all a little bit vague and metaphorical. By the way, just to be clear, it’s not that metaphors aren’t interesting.
No, of course. Of course.
They're different. You just need to know which one you're dealing with.
Right, exactly. [laugh] Bill, let’s leave the laboratory, or the proverbial pen and pad for doing the theory and the equations. I want to talk about your career in teaching, and specifically I want to frame this chronologically, for your career, where there is a remarkable parallel but also a divergence, if we want to focus on Berkeley and Princeton. Of course these are two elite institutions, but they are certainly at opposite ends in terms of the public and private divide. And so in reflecting on your career in teaching as a whole, I wonder if you can think a little bit about what are some of the things you've learned from teaching in these two very different environments. The kinds of students that you've come across. And perhaps how students might have changed regardless of the institutions, because of the internet, because of social media, because of the hyperactive concern with careerism that undergraduates now have for any number of reasons, that they may not have had at the beginning of your teaching career.
So, one of the things we haven’t touched upon, which is an important footnote to your phrasing of the question, is that the sense in which Berkeley is a public institution of course is very different today than it was when I was a student. The fraction of its budget which is actually provided by the taxpayers of California now is very small. And so I think that—basically, there are well supported public institutions and there are poorly supported public institutions. The difference with private institutions is that there are well-endowed private institutions, and there are underendowed private institutions.
So I'm not sure—I think elite versus not elite, I think that has gotten vastly worse. So the gap between—if you have your first faculty position at a top ten or top 15—or maybe even the difference between top five and top 15 is too big, in terms of the resources you have, your quality of life, and so on. But you were asking about teaching. I don’t feel like I can talk about how the students are different. Because when you start out, you're excited to be a faculty member. All the students are—that’s the challenge. It’s fun. And by now—I hate listening to aging colleagues talk about how the students were better when they were younger. When I was at NEC, Boris Altshuler had one of his Russian friends visiting. Somebody started in on this. And they said, “You can go back to somewhere in the ancient Greeks—I don’t know if it’s Plato or Aristotle—writing about how the students aren’t as good as they were.”
[laugh]
So let’s take for granted that they've been getting continuously worse for 2,000 years.
It’s a timeless complaint.
Yet somehow things get done, right? So how does that work? Of course things were changing, right? When I walk into a classroom—I mean, I actually put this in my notes for the students now. Why are we in the classroom? There’s a book. And if you think that my particular lectures are so good that I'm giving you this unique view of the subject, which every once in a while I might actually try to claim, I could do it once in a well-lit studio, and we could put it on YouTube, and we could be done with it. So what are we doing in the classroom? And the answer is, there is something about that interaction that is vital. And we always thought that. But maybe what is happening as a result of the tremendous changes in technology—and of course these are enhanced, distorted, perturbed by our current moment of being in the pandemic—is that we'll actually end up focusing on the part which is essential.
Everybody who has succeeded in science remembers being in the classroom with a professor who conveyed something about the subject which lit them up. It’s like going to live theatre. Why do that when somebody spent a million dollars to film it? Why should I spend my $100 for a ticket? [laugh] Because there’s something about that live experience which is a little bit intangible, and holds you in a way that the canned version doesn't. And I think we need to be much more focused on that than we used to be. And so that’s interesting. So, you know, is the answer to this the flipped classroom? We should really only be using the time that we spend together to have conversations rather than me lecturing? Maybe. My feeling about lectures when they were working was that it was with relatively small groups and with lots of interruptions. So I tended to think the information flow was not just one way anyway. But, okay. These are large-scale things.
I would say that the difference between the Berkeley of my experience, both as a student and as a professor, and the Princeton of my experience as a professor, and with all the caveats—different era, different stage in my career—I would say that Berkeley was very much sink or swim. “There’s all this stuff, a lot of it is just fantastic, finding your way is entirely up to you, and if you get lost, well, you know, there are a lot of students. We'll find somebody else.” [laugh] Or maybe it just doesn't matter. Maybe people get lost—I don’t know. There was—again, this is my view of that—it’s a ten-year period, but it is a particular ten-year period—there was a lack of concern about students not finding their way. There wasn’t help.
And the quality of the teaching I would say—again, the peaks were as high as anywhere. They were extraordinary. Was that because the institution communicated the value of that, or is it because individual faculty members took it upon themselves to perform at that level? I would say that the institution that I experienced did not convey any concern for the quality of teaching. That was my experience. That is not to say that there weren’t individuals who did extraordinarily good jobs. It is not even to say that the mean level was lower than anywhere else. But the institution wasn’t telling you, “This is something we care about.”
At Princeton, that’s not true. The institution definitely conveys that this is something that they care about. And, they care about the students finding their way. In fact, I would say and have said to colleagues that I think we go too far in the other direction. That the institution is so concerned that your undergraduate experience be a coherent undergraduate experience which is joyful and rewarding, duh-duh-duh-duh-duh, that the rope isn’t very long. And students are discouraged from doing things out of order, and they're discouraged from taking on too much.
Maybe—maybe—the advancing careerism means that the only people who show up are people who have always taken the three hardest courses they possibly could at any one time, and nonetheless been at the top of each class. And if you put all of those people together and they continue that behavior, they're all going to go nuts, and so you have to tell them not to. On the other hand, there’s a version of that caution which comes out as being almost anti-intellectual. That I get tired of hearing about how important your experience outside the classroom is. Of course it is, but this is a university, right? [laugh] Why did you come here? You don’t really want to tell me that you came here for the social connections. Like all of those boys from upper-class families did in 1947, right? On the one hand, you're telling me you don’t want this to be an upper class finishing school, and on the other hand, you're telling me that, “Well, it’s not only what happens in the classroom; it’s all the social connections we form.” There is some complication there.
But in terms of the teaching, I would say the issue is about too little assistance at Berkeley, too much sink or swim in Berkeley, and too short a rope in Princeton. I think each of the institutions, if they moved a little bit more toward the middle on this, they would improve. At least for the kinds of students that I imagine I'm worried about.
Bill, of course part of this with Princeton—the one word we haven't mentioned yet is privilege. Privilege saturates the entire undergraduate experience at a place like Princeton. Socioeconomically, racially. Students who are coming from places where they don’t understand what failure means, and who have never been put in a situation where sink or swim was a real option or a real reality that they ever faced.
Right. I would point out that the demographics of Princeton are now very different from what they used to be. Once upon a time, the only people who went to Princeton were the people who could afford to pay. That’s just no longer true. More than half of the incoming class is on financial aid, and the average award is very close to being the entire tuition. Each incoming class is more socioeconomically diverse than the one before it. What is true, and this is about the nature of elites—I mean, I've had this conversation with my colleague Marc Mézard, who’s now the director of the École normale supérieure—and one of the things he struggles with is, there was a generation where the process of getting into the École normale supérieure in France, that was the selection. And there was a sense that it wasn’t what you did—having become a normalien was to be accepted. In fact, originally the institution didn't even give degrees. Now, this one [Princeton] gives degrees, but as one of my European colleagues who moved here noted, when you enter, you become identified with the year in which you're going to graduate. Which is very interesting, because that suggests that your graduation is guaranteed. Aren’t you supposed to do something? [laugh]
So there’s some sense that—so by getting in, you've achieved everything. And that is very bad. And it arises, ultimately, because—so there’s an irreducible component of this, which is that high-quality education is expensive, because part of it is personal access to the people who are handing it out. So you just need lots of people. It costs money. And so if the highest quality education is a limited resource, then you need to decide how you're going to dispense this throughout the society. And that creates this problem where you're going to think that by getting into the group, you already—so whatever selection process is used acquires a meaning outside of this. So oh, that person got accepted to Princeton; that person got accepted to the École Normale. That must mean something. Well, it does, but it’s not clear that it means the thing that your prospective employer wants and needs it to mean, in order to make a decision. The only answer is raise the mean.
Look, Princeton isn’t going to become ten times bigger. It’s not going to become the size of Rutgers. What has to be true is that if you're a kid who grows up in New Jersey, it should make less difference whether you go to Rutgers or whether you go to Princeton. And you're not going to achieve that by making the education at Princeton worse; you're going to achieve it by making the education at Rutgers better. And better for all the students who go there, not for the students who find their way, to go back to the Berkeley model.
I got a great education at Berkeley; I found my way. Lots of people didn't. I don’t think you should set up an education—that is one answer, right, is you just let lots of people in, more people than you can really handle, and you get out of the way and see what happens. But it’s not clear that that’s selecting for anything meaningful. For one thing, it selects for people who feel comfortable asking. It’s not that help was impossible to get; it’s that it wasn’t there structurally. So if you could get somebody’s attention, as I did because I worked in a lab in high school, and the guy that I worked for in high school knew one guy who was on the faculty at Berkeley—and that had enormous consequences. Outsized consequences. I did my part, but it shouldn't have to work like that.
Bill, one question about teaching that might tie it all together in terms of your approach or your aspirations—with all of your interactions with undergraduates over the years, the term “genius” is rarely productive or meaningful, but I do wonder if you've ever had the pleasure of knowing even one student, one undergraduate, who blew you away. If you've ever had that experience, how did you know what you were looking at, and what did you do with that opportunity? Without naming names. This could be a composite character.
You prefaced this with various—it’s a problematic concept, et cetera. I have certainly interacted with undergraduates who had extraordinary spark. Something special about them. There was one who I think had often been in environments where he was using his talents to solve practical problems. He was programming. Had sort of viewed the things he was being asked to do as maybe not so interesting. And so the goal was to give the quickest possible answer. And so the challenge was to convince him that there is a world where the questions are deeper, and if you don’t like the question, it’s your job to improve the question, not to give the questioner an answer that makes them go away.
So I think that there’s a mode of interaction, if you're quick—so quickness can be a handicap, right? You get into this habit of giving quick answers, and eventually—it’s sort of like the difference between the classroom and research, right? The classroom—answer the question as quickly as possible and get to the next one. In research, if the answer is coming too quickly, just make the question harder, because that means you're not trying hard enough, right? So I guess that that’s maybe the most interesting example of this.
I've had students—undergraduates, the number of undergraduates who stand very far out of expected is small. By the time they've been filtered and we get them as PhD students, the mean at the thing that we're trying to do has gone up. So part of the thing with undergraduates is that they could be extraordinary but just in some way which isn’t especially relevant to the thing you happen to be teaching. And why should they be? That’s not their obligation. They're there to sample, and you're there to show them. And it might turn out that this isn’t—it’s also true, as maybe we talked about once already, you look at paths to successful scientific careers, and in certain areas, those paths are pretty narrowly—they're pretty narrowly bundled. And in other areas, they spread much more widely. The theory-experiment divide is clearly there. So I've certainly encountered people who have kind of raw technical power. But that’s also something which is honed, right? And I'm not so sure that by the time I see them, I can tell the difference between the honed and the innate.
You're asking me this at a moment where I'm quite troubled by this. I don’t know—I think—what’s the line of the Haggadah about, even if we were all learned, we’d still be obligated to tell the story again? Even if we are all well intentioned, there is a self-reinforcing and self-reproducing character, as Bourdieu taught us, to all of these things. And we look for certain—even when we talk about spark—look, I worry that when I talk about—when I recognize somebody being talented, I'm not really saying anything more than, “I turned out to be really good at this. I've seen someone who reminds me of me when I was his age.” Or her age.
And it is true that if they remind me of me when I was at that stage, then the probability that they will succeed has gone up. Because I have a data point, right? [laugh] And I have other data points of other people like me. The question is, how good am I at recognizing other things that might lead to success? Really to doing something. I think that’s much harder. And that's where, if nothing else, we get lazy. If we can fill—we're about to go through this exercise, right? If we can find 20 theoretical physics students and make offers to them for graduate school, who we feel confident will do something interesting, then we tend to stop. It’s easier to make an estimate of their probability of doing something interesting if they're in a part of the space that you've sampled well. If they're in a part of the space that you haven't sampled well, then you don’t know what you're looking for.
Now, you can add to this the fact that we're not all innocent. We carry with us all of the prejudices of our society. Even leaving that aside, I think there are perfectly good reasons why this is challenging. So the only thing I can think of, as a way of combating this, is just being more thorough. Look in more places. Get to know more people. Experiment. Let’s try some people who don’t look like what we thought—they certainly don’t look like what we looked like when we were kids. I don’t know. I mean, I'm supposed to fill out these forms, recommending people for graduate school. And I try to write a thoughtful letter of what my experience of them is as undergraduates, and how did we work together, and what did they do in class, and duh duh duh. And then I get this question—“Rate this person’s”—I can’t remember exactly what—
Whatever it is, it’s reductive.
— “Scientific ability. Top 1%, top 5%, top 10%, top 25%.” I don’t know! Am I supposed to be carrying around a list? I mean, I guess I could make a list in my head of all the people, how are they doing. Am I supposed to keep a ranked list in my head of all the students that I've worked with and slot this person in? They're number 47 out of 1,000, so they're in the top 5%? Really? Is that what I'm supposed to do? [laugh]
Bill, to move on to graduate students and postdocs, we talked in detail already about your style as a graduate mentor. And at the risk of leaving anybody out, I certainly wouldn't want you to single out your most successful graduate students. But I do want to ask, generally, about a word that you've used repeatedly in writing and in our conversation: pleasure. The pleasure that you've experienced in working with graduate students and postdocs. So, given the fact that graduate school can be such an emotionally fraught, intellectually challenging, and risky endeavor, from your perspective as a mentor, where is the pleasure in all of it? Where is the pleasure in terms of building human relationships, conveying things that you've learned both in the science and in life, and in learning from your graduate students who quite literally represent the future of the field?
So one thing is, I don’t fool myself into thinking that you're automatically kept young by being surrounded by people who are younger than you are. But it is true that you're constantly met with people who have questions which are not necessarily the questions you had in mind. And so that is refreshing.
We just—I try to meet with some of my students once a week now, over Zoom, and we talk about what have you been reading in the literature that caught your eye. Don’t give me an hours-long lecture about what was in the paper, but just tell me something that caught your eye. And we had a fun conversation about a paper that caught somebody’s eye. And it turns out I've tried to read lots of papers from this line of work, and honestly, I'm pretty sure that there isn’t anything there, we can skip it. But I think they were right to think that it was interesting.
And understanding that distinction between yes, you were right to have this catch your eye, and maybe there’s something for you to do, but I don’t think that you're going to learn a lot by figuring out what they did—that whole discussion—why did it catch your eye? What do you find interesting? That’s just an example. Seeing the scientific literature through different people’s eyes. That’s a very low level example, but already, it’s just fun, right? I have my view, but then—“Oh right.” The first reaction is, “Oh, please. I don’t want to talk about this paper.” [laugh] And then you realize, “No, no, no, no, no. Actually, this is great. They are right. If they don’t know this body of work, then they were right to think that this was worth a look.” And maybe that means that I should be a little less convinced than I am of my own views about particular parts of the scientific literature, too.
You see people get interested in things that you somewhere between didn't think was interesting and was pretty sure it was not interesting, [laugh] and you reexamine your views. There’s the moment when the student comes and tells you something that they've understood that you didn't, and that you didn't necessarily predict as an extrap…I mean, you obviously don’t give students problems to think about—you don’t encourage students to spend their time—I don’t think of myself as giving students problems. Strange language. You don’t suggest that they spend time thinking about something where you know the answer, right? That doesn't make any sense. On the other hand, you do sometimes encourage them to think about something where you know the answer up to here, and you think that if they keep going, you sort of know how it might turn out, and that there will be an answer that would be worth getting to, in some reasonable amount of time, and that’s why you suggest that you look at it. And then they come to you, and they tell you something that you didn't expect. And that’s a pleasure. And then of course they realize that they told you something that you didn't expect, and then that’s another pleasure.
There is this transition from being a student to being a scientist. And in a way that has analogies, but of course is different, from being a parent and watching your child grow up, there’s a sense of them acquiring, building an independent view of the intellectual world. And some of that happens while they're your student, and some of it happens afterwards. You see them at a meeting two or three years later, and they're professors, and suddenly you see a person who is related to the person that you encountered at the beginning of their career but different. Mature. They have a view.
I've had interactions with—again, I'm trying to stay away from names. I've had interactions with—I'm thinking of postdocs who, they didn't have the stigmata of the brilliant theorist, and yet, on the decadal time scale, they've helped to create new fields. You have the other one where people are—they look like the brilliant genius that we were struggling to talk about earlier, and you get a glimpse of the direction in which they're going, and maybe you can help. And then again, you look ten years later, and you see this huge body of work.
Bill, for the sake of setting up the question, permit me to just share with you my own experience, because I want to ask you how you respond to it in terms of the pressure or the burden that you feel sometimes as a mentor. So, I'm very much of the school, as a graduate student, that my advisor, not only did he teach me how to be a scholar, but he set forth the career connections where I was the person who he said I was, that I could accomplish the things that he said I could accomplish. And I view that as, well, that was true only insofar as he taught me how to be that person. All of which is to say, I worship him, I would take a bullet for him. I owe everything to him, right?
Now he bristles at this notion because first of all, he would say, “You can’t put all of that credit on me. You deserve some of that credit yourself.” But also, because that’s an extraordinary amount of pressure, because what about all the students that didn't work out so well for him? It shouldn't cut both ways. He shouldn't just be a hero or a total failure, right? So in terms of how you understand your role as a mentor, for those students who feel that they owe that much to you, versus the students that you've had who for whatever reason just didn't make it in the field, how much of that cutting both ways do you find burdensome because you don’t want to be responsible for somebody’s life, so to speak, and how much of it do you want to feel like, “Wow, I really launched an incredible career, and I was able to set somebody on a path of financial stability”? How do you feel about all of those things?
Right. The idea that you could make a living doing something that you're interested in is actually quite extraordinary. It seems like a low bar. You don’t have to become famous for doing it. You don’t have to become rich doing it. But yeah, if you can spend your productive working life doing something that you thought was interesting, being part of a community of like-minded folks, it’s great. If you earn your living playing the piano and nobody puts a glass on it, then you're doing okay? [laugh]
[laugh]
Look, I don’t know. I—so one student whom I co-advised—worked closely with two of us—I won’t say who the other person is, because then anonymity goes out the window—when I read the preface to his thesis, he said something about how his co-advisor and I had set an example not only for the science but in life, and how to enjoy the science, and how to enjoy other things, too. And I remember being both surprised and moved. I didn't think that I was communicating anything special about this. And maybe I wasn’t. Maybe another student at the same time wouldn't have seen these things.
Look, we are examples. We're examples—we're examples for our students in how we choose problems, in how we collaborate with others, in how we value teaching, whether we have integrity in how we do science. There’s an ethics to our business as well as the substance of the science itself. Yeah, there’s lots of dimensions, and the boundary between the part where you're scientists and the part where you're human beings is not very sharp, and I don’t think it should be. I think that part of the joy of science is that it is a human activity, and it is a shared human activity.
As I said before, one of the things that I try to remind students of is the goal of science is not for you to understand something. Success is when you've changed how other people think about something. And so that means that your ability to communicate, your ability to understand what their view is to begin with, these are all integral to the process. They're not something that decorates the process. And different people are better at different parts of it than others, just like they're better at different parts of the science itself.
As I say I think the boundary is in the wrong place. I think the communication of science is an integral part of science. But we understand that there’s somebody—just to talk about theorists, there are people who are great at seeing connections to very abstract parts of mathematics, and there are people who are great at exploiting the power of computers to do interesting things, and there are other people who see the connection between abstract things and experiment. There aren’t too many people who can do all of these things at an extraordinary level. And that seems normal.
Similarly, with respect to the parts that make science a collective human activity, but we provide an example for how to behave as human beings who engage in this particular activity. And I do feel responsible about that. And I worry, particularly as I've gotten older, I feel less connected to the students, because there’s this generational gap. And I think, “Well, there’s just nothing I can do about that one.” I also worry because one of the consequences of increased responsibility in the community, in addition to my personal frustration of not having as much time to do science, is that I worry that I'm not paying the attention to the young people around me that I should. And maybe, worse than that, I'm conveying—there’s a danger of conveying the sense that I'm busy, and that I'm distracted, and that I'm not taking joy in what I'm doing. Because suddenly I'm doing a part of my job which is a responsibility rather than just the pleasure. And I don’t think the answer to that is to be completely irresponsible, but I think there is a problem. So yes, I feel a burden that is tied in with the questions that you're worried about. I'm not sure whether that’s a completely compelling answer to what you asked me. In the spirit, I guess.
Bill, before we get to the last part of our talk where we'll spend some time on the big and fun questions about the current state of the field and where it’s headed, I want to return all the way back to our first conversation, and ask you about some of the things that you learned about your father and his experiences. I'll frame it this way: you shared with me in writing that, amazingly, as a kid you wanted to become the first Jewish astronaut in space. I want to ask you to consider the aspirations that were available to you, vis-à-vis those that were available to your father.
And I'm specifically thinking about this quite powerful anecdote you shared with me, where you explained how your father decided not to fast on Yom Kippur, because he had fasted enough. That whatever the purpose of fasting is supposed to achieve in a quote unquote normal life, or an unburdened life, to achieve that level of atonement in a life that’s comfortable and secure, from what I know about Holocaust survivors from my family, as a historian, in that regard, the reaction that your father had I think was normative. And it says things bigger than his own experience.
One of the common themes that we've had in our conversation is, as the next generation, for you growing up in your parents’ household, maybe for your father, his achievement was surviving. Dor l’dor—generation to generation. And that’s what he was able to do. That was the purpose of his survival. The concept of having fasted enough—that life threw challenges at him so fast and so intensively that he was—so to speak, he was good. He was good for all of the subsequent Yom Kippurs that he had, after the Shoah, after the Holocaust.
For you, when you look at your career, both retrospectively and for what you have remaining ahead of you, what’s your version of that? What are the challenges that you've had thrown your way where you can say in your generation, with the challenges you've had, with the opportunities you've had, what are the things where you can say, “I'm good. I've had my fill. That’s not something that I need to do”?
Because of course there’s always a level of heroism with the Greatest Generation. That they experienced something that no other generation has experienced, and that the next generation had it far easier. And perhaps comparatively that’s true, but you can’t measure yourself by the previous generation. You can only measure yourself against looking in the mirror and with your own peers. So with that, with the opportunities, with the challenges, where in your career, where as a scientist, where as an intellectual, can you say “I'm good. I've done that”? And where can you not say that?
So I'll footnote—in some ways, I think my mother—although the direct challenges that my father faced were more severe, in the sense that they were directly life-threatening, the intellectual frustration for my mother might even have been greater. As we discussed earlier, her parents died when she was still a teenager. It was the Depression. She probably didn't finish high school. She had an extraordinary mind and I think would have been, I imagine, a politically engaged lawyer or something, had things been different. So from both sides, there was this sense of absence of opportunity.
My mother, having lived through the Depression, thought that you should always save to protect yourself against a huge calamity. My father had a view that the things that can go wrong are on such a scale that you can’t possibly prepare for those, so you should just enjoy life. It’s possible that from a financial point of view I should have listened a little bit more to my mother.
[laugh]
So it’s just a footnote. Look, it simply doesn't make sense to compare the challenges that I face with the challenges that my parents faced. Doesn't make sense. Are there places where I can say—maybe there’s a less grand version. Is there anything I could say, “Been there, done that,” as the phrase goes? With satisfaction—“I don’t need to do that again”? Sure. Are there any of them that are profound enough that I would compare them to deciding that I had achieved some spiritual state that entitled me to claim that I was still being properly observant of the need to fast on Yom Kippur? I don’t think so. [laugh] There’s a question about whether I want to be observant, but I don’t think there’s anything that I've done that sort of has that scale. Of course, yeah, all judgments are local, but part of the point of learning something about history is to remember that sometimes you could make a slightly less local comparison and realize that what you're worried about is—as they would say in Casablanca, your problems don’t amount to a hill of beans in this world. And mostly I would say that’s true.
The biggest challenge is that the thing that I thought was interesting was not what everybody thought was interesting. And I stuck with it. And I got away with it because I did enough, or I had enough of the—there were enough people who said, “Smart kid, give him some space” that I had a long rope. And there’s privilege in that. It mattered that I impressed some people at some moment. I can imagine who some of these people were, but you don’t know exactly which ones turn out to be the crucial ones. And were they right? Well, yeah, I mean, maybe. But maybe there’s somebody else they should have noticed, too. Somebody who didn't even get a chance.
What I would say is there are things where—so for example, I had an enormous amount of fun being involved in organizing summer schools. And I am a little bit involved in that—well, this is not the right time to ask that question in the pandemic, but I'm a little bit involved in that, with things that we do in biophysics at Princeton. We bring people for a summer school. And it’s fun. And I lecture at a lot of summer schools, and that’s fun.
But in 2019, I went back to a summer school which rotates among topics, so biophysics got rotated back. And the last time there had been a biophysics summer school at that venue, I was one of the organizers, and this time, the organizers consisted entirely of people of a younger generation, all of whom had worked with me at some point in their careers. And I could go do my lecture and enjoy chatting with the students. And in fact, if anything, regretted not having more time to spend, because it kind of got squeezed with other things.
And I looked at that and I thought to myself, “Huh! This is great! I don’t need to do this again.” In fact, I could say, “Oh, what I really want to do with my time now is organize a better summer school.” No! It’s the next generation’s opportunity to do that. And I might complain and say, “Why did you have people lecturing about A instead of B?” And I could tell them that, and it would be up to them to decide whether I was right, not up to me.
So what I would say is that there are things where I look at it and I think, “Well, in some way, to—if I were given the opportunity to do that again, to take it, it might not be so much that I'm so self-satisfied that I did it right the first time that I don’t need to do it again.” But rather that it’s time for somebody else to do it. So yeah, yesterday, before yesterday, I got an email that a paper was accepted, a paper that I co-authored with my friend Rob de Ruyter, who has appeared at several points in this story, reaching all the way back to the first moment in the Netherlands where I realized that he was the guy in the office next door. And that was the Fall of 1983. And this is the Fall of 2020, 37 years later. The first full-length paper that we published was in 1988, which means it was before almost all of the young physicists that I work with were born.
Do I look at the things that Rob and I have done and think, “Been there, done that?” No! I love working with him. There’s unfinished business. Some of it’s unfinished because we should have finished it, but we just didn't. And yet I think it would still be worth doing, so I'm happy to go back. There are other things where I maybe look at the opportunity to form a new collaboration, and I feel like, “Ah, what this would do is to do again a thing which we've done before.” And, well, you could talk to the young person I was working with when we did that thing before, and maybe it would be better if they brought their views to that. Which is a little bit less of going back—maybe that’s become their thing. So there are things where I feel like—maybe not so much that I don’t need to do it again. Somebody told me, when giving a talk, “Never underestimate the pleasure some people take in being told things they already know.” [ed. Attributed to Leo Kadanoff quoting Ugo Fano.]
[laugh]
So there’s hardly anything that I look at and I think, “It wouldn't be fun to do that.” Or at least of the opportunities that come my way. But there are things where I think, “Huh, if I do that, then that means that the people doing this are 60.” But maybe they should be 40. Or whatever. There’s no right answer.
I sit on advisory boards for institutions all over the world. It’s great fun. I get to talk to people. I learn something about what they're doing. I try to learn about things that are different there than anywhere else, so that I don’t spread some uniformizing influence. But I worry that I go to these advisory board meetings and I look around the table, and I see a surprising number of people that were at the advisory board meeting for the thing on the other side of the world, six months ago. And maybe that’s telling me that I should say “no” to the next advisory board invitation, not because I've been there and done that—because of course doing it in a different place would be interesting in different ways, and it would be fun, and blah blah blah, and maybe I haven't been to that part of the world or spent much time there, and so it would be great—but maybe because they should ask somebody else. It’s somebody else’s turn.
That view is stronger than the—what I'm worried about is, the way you phrased the question, there would be a sense of self-satisfaction. That I had done it well enough that I don’t need to do it again. And, you know, I'm very wary of that. But I am concerned about getting their—of taking up space. I had my shot. And if I did a good job at it, then I did some good. And that doesn't mean I should be asked to do it again. Maybe once, but not too many times.
Bill, I want to ask, before we get to the big questions for the last part of our talk, sort of a state of the field right now. The way I want to set up this question is, what are some of the big—biophysics is way too complex; there’s too many interesting things going on. So I want to zoom out to the degree that you can, to talk about some real broad-lying trends in the field. What were people, including yourself, doing 20 years ago, ten years ago, five years ago, versus what they're doing today, and what they hope to do five, ten, 15 years down the future? For example, the way that particle physicists have really entered into the field of cosmology for the past—this is one of those major trendlines of the field, to see the state of play at a true macroscopic level. So on that basis, I wonder if you can just sort of give a précis about where the field is now relative to where it was x number of years ago.
So if you think that a prerequisite to doing the physics of x is that people be able to do sort of physics-quality experiments on x, and we agree that we understand what we mean when we say that—let’s not have that conversation. When I started, and for quite some time afterwards, if you set that criterion, then the number of places in the living world that you could look as a physicist was very limited. What has happened is that experimentalists have tamed more of the biological world to the point where you can do physics. And that’s true at every scale. In some ways I think it has happened gradually. And I think it has happened, if you think of the frontier, where this is the part where we feel comfortable doing physics, and there be monsters on the other side, I think that frontier has a fractal shape. [laugh] So there are places where we've been able to go very, very far, and there are places where, as soon as you take a step, you’re sort of bombarded by the full mess of biology that everybody warned us about. I think because of that funny geometry of the frontier, it’s very hard for people to get a global view. So the number of opportunities for young people to go out and look has expanded enormously.
But having a view of the thing as a whole field is very, very hard. And so this is good news and bad news. So I think that maybe one way to say it—and the same thing is true about theory. So experiment created the opportunities to ask whether our general theoretical ideas actually informed us about anything particular in the real world, and so we went in one place and we looked. And I think that physicists being physicists—and certainly I would say that this is my ambition, and I hope it’s clear in the work that my colleagues and I do—the goal isn’t to make a model of this particular system, the goal is to ask whether the behaviors that we see in this particular system can be understood as following from some principle that at least has a chance of being general. And I think that talking about those general principles almost immediately generates pushback. So you focus.
But if you look at the work, you see that there are these bigger ideas that stand behind it. And you see the same ideas showing up in very different places, sometimes consciously, as in the story I was telling you about the evolution of our ideas about using statistical physics. We started in one place, and then having accomplished something there, you look at something else, and you realize, “Oh, this is the same problem.” And so you very consciously use the same ideas in multiple places and ask yourself whether there’s something general going on.
In other cases, I think that people came to these ideas in more genuinely independent ways, but there are common theoretical ideas. And so, I would say the thing that is the biggest advance is the area that we can cover, or the volume in the space of the living world that we can cover as physicists and do something that we recognize as being physics. I think the complicated geometry of that volume means that the potential for unity or for unifying ideas is obscured. And if I had to guess, I would say that the thing that has the chance of being—when you ask somebody about their view about the future of the field, it is not clear—in the same way—what’s the remark?—“There’s no history; there’s only biography”? So maybe there’s no prediction; there’s only ambition?
Hmm. Good, good. Yeah.
Right? So let me not pretend that I'm an objective observer—
Sure, of course.
—and let me say, what I would like to do with the productive career that is left to me, is to take this foundation that the community has built, which is anchored in the details of many particular systems, and ask, “Are the ideas that I see as being common in these many different areas, are those the ingredients of a theory which you could start from and …”—so the strategy was, “Go look over here. Build something that works. Maybe abstract away. Go look over here.” And it’s sort of building up. So do these things meet in some structure that you could flesh out such that if you started from there, you could actually derive these things?
So for me, as a theorist, that’s the ambition. I don’t remember what I said about things like that ten years ago, or 20 years ago, but whatever I said, I don’t think the argument for it was anywhere near as good as it is today. I think that the search for those kinds of unifying ideas now feels like a research program rather than speculation. So.
Bill, allow me to commit an act of intellectual heresy here, for the sake of setting up the last part of our conversation. Let’s go ahead and treat physics and biology as two totally separate entities. Let’s pretend this is 50 years ago, something like that. A hundred years ago maybe. In the way that physics has very well defined ambitions with regard to putting it all together—by incorporating gravity into the standard model, by identifying and truly understanding dark energy and dark matter and therefore understanding how the universe really works—to the extent that physics as that discrete discipline has these grand ambitions for total understanding—a truly grand unified theory—do your sensibilities, working in this divide over—or this transom, or this threshold over the course of your career—in what ways can you transfer those intellectual ambitions into the world of biology? In other words, is that one of the things that biophysics aims to do? Is there a grand unified theory of biology? Is there a sense of, “Here’s this well-defined thing that we don’t understand, but we know that if we do understand it, it will put everything together”? To what extent are those very well-formed sensibilities in physics transferable to biology?
I think one of the things you have to remember is that the thing that we clearly call biology is so many different things that are woven from so many different historical threads. And that’s not true of physics, I think. Let’s be a little careful with—in fact, neither the majority of physicists nor the majority of money spent on physics is spent on elementary particles plus cosmology. I think the typical physicist is some flavor of condensed matter physicist. So you might want to think about the formulation of the question.
Just to tie this to something we've been talking about, to keep it concrete, for me, the idea that the same mathematical structures that we use in thinking about the coordinated activity of neurons in a network and the coordinated motion of birds in a flock, or the correlated substitutions in amino acids in protein evolution, the idea that there’s something that we can even put those problems into the same language so we can ask about whether there’s anything more there, I find that as a physicist intrinsically interesting. And I think that I'm not alone in the physics community. I think that some of the physics community’s reaction to what we've done is because of that shared sense, this notion that things that are unifying are potentially exciting.
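[ed. One concrete reading of the “same mathematical structures” for neurons, flocks, and protein sequences—an assumption of this note, not a claim about the specific analyses being discussed—is the pairwise maximum-entropy (Ising-like) model: the least-structured distribution consistent with measured means and pairwise correlations. A toy Python sketch, fitting such a model by matching moments on a system small enough that the partition function can be summed exactly:]

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)

    # Toy "data": binary states of N units (spiking/silent neurons, coarse-grained
    # flight directions, amino-acid classes, ...); here just random placeholders.
    N, T = 5, 2000
    data = rng.choice([-1, 1], size=(T, N), p=[0.4, 0.6])

    emp_mean = data.mean(axis=0)             # empirical means <s_i>
    emp_corr = (data.T @ data) / T           # empirical correlations <s_i s_j>

    states = np.array(list(product([-1, 1], repeat=N)))  # all 2^N states
    h = np.zeros(N)                           # fields
    J = np.zeros((N, N))                      # couplings

    for step in range(2000):
        # Exact model expectations for the current (h, J); feasible for small N.
        E = states @ h + 0.5 * np.einsum('ti,ij,tj->t', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()
        mod_mean = p @ states
        mod_corr = states.T @ (states * p[:, None])
        # Ascend the likelihood: adjust (h, J) until the moments match the data.
        h += 0.05 * (emp_mean - mod_mean)
        J += 0.05 * (emp_corr - mod_corr)
        np.fill_diagonal(J, 0.0)

[ed. The same fitting loop applies whether the binary variables are neurons, birds, or residues; only the data change, which is one sense in which these problems can be put into a common language.]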
For example, if you look at when everybody got excited about the fact that you could measure the noise in the control of gene expression, which clearly was something that captured the attention of a lot of people in the physics community, and it certainly related to the things that we worked on in the group [ed. likely “NEC group”]—I think the fact that this might have something to do with noise in ion channels, or noise in sensory receptors, I don’t know any biologist that thought that that connection was interesting. Surprisingly few physicists thought it was interesting. They sort of bought—many people in the physics community—this was a moment where I think the physics community had internalized this notion that they needed to learn a lot of biology. And the biologists said, “This is what’s important about the control of transcription.”
And if you went to them and said, “You do realize that there’s this other problem with noise in the neural context and it has something in common”—yeah, but nobody who worked on transcription knew anything about that part of neurobiology, and vice versa. So if they went to talk to their biological colleagues about what was important, they would never learn that. Now the amusing thing is that many of the ideas that people were using in thinking about noise and the control of gene expression had precursors. You could go look at the equations and see that they were the same in the discussion of noise in ion channels and noise in other biological contexts, and many of those ideas had been brought into neuroscience by people who were physicists, by training. So what was ironic was because of the way biology had been constructed, the physicists in year N who got interested in this problem and the physicists in year N+K who got interested in this problem, who ended up doing things which are conceptually related from the point of view of physics, sort of didn't know about each other via the scientific literature because the biologists had said these belonged in different boxes.
There’s a reason why biology is organized the way it is, and it’s a long and complicated history, and it’s also related to the kind of infrastructure needs and so forth. Neuroscience itself is already complicated, because there are the people who came from psychology and the people who came—some people came top-down and other people came bottom-up, and they're very different. They're supposed to meet in the middle somewhere. So it’s not clear to me—and I think in physics, we have a slightly romantic notion that there is a subject called Physics with a capital “P,” and in the right environment, we will paper over all of our internecine battles to explain why there is this grand activity which is what the entire community is doing. And then we go back into the department and fight with our colleagues in the other subfields that we want more resources for our part. But that’s okay. That’s normal. But it is sibling rivalry, right? It’s understood that there is one activity.
Whereas I think that there are parts of biology writ large that other biologists wouldn't recognize. The span is so great. We may have discussed this before, but there’s almost no biology department in the United States that survived the molecular biology revolution intact. So they had to split into the people who were concerned about organisms and the people who were concerned about molecules. And maybe that was right at the time. I don’t know. It’s not my field. So I think that—I'm sitting—we're supposed—I'm the chair of this committee to write the decadal—for the first time, as part of the decadal survey of physics, there will be a volume about biological physics.
That’s a long time coming.
And I've mentioned this to various people. Well, I've talked to lots of people about it, because we're trying to get input. And what’s interesting is there is no decadal survey of biology. And I mentioned this—I don’t say to biologists, “Why isn’t there a decadal survey of biology?” I tell them what we're doing, and where it fits into this larger—“It’s a decadal survey of physics. And to make it manageable, we break it into pieces.” It’s not because we believe that the world is broken into these pieces, right? It’s just because they have to do something. And—“Oh, huh. That’s an interesting idea!” [laugh]
I think it’s a very different culture. It’s not clear that there’s a Biology with a capital “B” or that there’s even an aspiration for a Biology with a capital “B” in the sense that there’s the aspiration for Physics with a capital “P.” Sometimes you will hear—there was a sort of post-human genome—I heard some molecular biologists and molecular geneticists in a kind of triumphalist mood. That, you know, now that we have genomes, biology is unified. But I don’t think that made them any more interested in animal behavior. I think their view was that when you did the genetics of animal behavior—that of course has been done—now they could get interested in animal behavior.
And actually, that’s an interesting thing: as, for instance, tools from molecular biology have gone into other parts of biology, that sort of gets this group of biologists interested in what that group of biologists is doing. Because of this, we're going to use the same tools, right? And that’s interesting, because before the tools were common, they didn't think the questions were interesting. And I think that tells you that they were looking for different kinds of answers. Of course once you have the same tools, you're searching for the same kind of answers, to some extent. That means that you had two things that were nominally part of the same field, but it’s not so much that they were asking different questions—they were obviously asking questions about different things—but most importantly, the form of answer that they thought was interesting was different.
That doesn't happen in physics. We're unified not by the things that we study, not even…I think even more than by the questions that we ask, the kind of answer that we're looking for. So that drive for the physicist’s kind of answer, looking at the living world, that is something that we bring. Will I convince my neurobiologist colleagues that flocks of birds are an interesting example of collective behavior that we should think about in relation to networks of neurons? I think it has its moments for them as a sort of, “Oh, it’s a way of seeing what you're talking about.” But intrinsically, maybe not. And as the distances get larger—between neurobiology and behavior isn’t that much. But the distance from that to molecular biology can be quite large. So as those distances get larger, will people see—if you showed them a connection, would they think it was interesting? I think they might not.
And maybe what that reflects is that connection doesn't help them answer any of the questions that they had in mind. And I wouldn't disagree with them about that. I think in physics, we have this feeling that seeing connections—first of all, the act of unification itself is a form of success. That matters. And that the more connections, the more tools you have for understanding.
I'll set up another cartoon-ized dichotomy for the sake of eliciting the right kind of answer from you, or the answer that I think will be most telling. There are two approaches to science from the public’s point of view. One is the understanding that basic science is important simply to understand how the world works, in and of itself. And then there’s the other, of course, that says, unless science is specifically geared toward improving the human condition, what are we supporting it for as a society? Now obviously this is an unfair dichotomy, because any application toward bettering humanity stands on the shoulders of basic science.
And so in asking about your contributions, reflecting on your contributions to the field, the harder and more telling question, I think, is to ask about the ways that your contributions to basic science have mattered, either directly or indirectly, however you understand these things and how they work: is there anything you can look at over the course of your career where you can trace an intellectual stepping stone to some particular advance in applied science that gives you particular satisfaction? Not necessarily as a scientist, but as a human being who, like everyone else, wants to contribute to making people’s lives better?
So there are a couple of things. I think there are two kinds of answers. I don’t think that there’s anything we've done that has had a revolutionary impact. I think that the work that we did on neural coding came at a time when people were starting to think about brain-computer interfaces as prosthetic devices, and that the successes we had in being able to read out continuously varying signals at high signal-to-noise ratio from the very sparse sequences of action potentials had an impact on people’s thinking about how to do that. [We’ve talked about this a bit already.] And maybe the actual algorithms they used? There were certainly people who were inspired by the things that we did, who built more sophisticated algorithms that—there’s some path. We were interested in those problems in a particular regime, and so is that the regime that turns out to be relevant in practice? I don’t know. I think we provided some inspiration, and others could—people would tell you, I think—we've talked about how some of the things that we did in thinking about image statistics turned out to be useful for image enhancement and image restoration.
But I can give you a different kind of answer. There are people with whom I worked when they were students who did things that were not particularly useful, who have gone on to do things that are either directly useful, in the sense that they work in industries where you can tell that they're doing something useful because somebody pays for it, usually. That’s the criterion, right? Or they themselves do things that obviously have an impact on how we're going to think about problems that matter for society. And I think I can see the traces of their intellectual style coming out of the problems they worked on when they were younger.
And I think that's a very important answer to this general question of the role of basic science in having an impact on society. There are the familiar complexities: you take some piece of technology that changes the world and you can trace it back to some piece of basic science. On the other hand, if you were looking from this side, you might have had a great deal of difficulty figuring out where it was going to lead.
In addition to that problem, which is already—it seems obvious I think to any scientist who has thought about it, so that doesn't—the claim that you should pay for science in relation to its likely impact on society presumes that I can calculate the probability that this will have an impact on society. And I think that’s a very hard calculation, and I think the historians could give us examples of why that is.
I believe it’s the case that Charlie Townes gave the example of—maybe we even talked about this—if it was 1950 and you wanted to spend money to improve eye surgery, which part of science should we invest in? And the answer, in retrospect, is completely obvious—it was microwave spectroscopy. Because that led to the maser which led to the laser and duh-duh-duh-duh. But you know, nobody would have gotten that right, right?
Sure, sure.
But there’s another subtlety, which is that the exploration of basic science produces educated people. People who have taken on the challenge of trying to understand something, and wrestle something into a form where answers become possible. It’s characteristic of basic science at the frontiers that we don’t—we ask questions—part of our job is to phrase the question in a form where there’s a chance that it does have an answer. Because there are many phrasings of the questions about what we don’t understand which don’t have an answer on finite time scales. Success isn’t only about—it’s not like there’s a list of questions, and you go, “I'm going to write my thesis about question number 32.” No. Part of the challenge is to form the question in such a way that on the five-year time scale, we can actually make progress.
And the training ground, the sort of free-exploration training ground that is provided by basic science for doing that, I think historically produces people who then—some of them go on to keep doing that, and some of them go off to places where the playground of things that they're exploring is not so wide. It’s the things that matter to the society on a much shorter time scale, and wrestling those problems into a form where they're solvable then is something that is of great value.
And so I look not only to results that we have that have an impact on society through this complicated process, and who knows how to do the credit assignment. But you see people who have come through our community who are having impact, and then you say, “Okay.” Or people who come through the community and are doing something where you say, “Okay, now I can see how this work is going to have an impact on society. I don’t need to tell general stories anymore. I can see what’s going to happen.”
Bill, penultimate question. We'll refer readers or listeners back to session one, where you conveyed clearly enough your ideas about the existence, or non-existence, as it were, of a God, or at least specifically a Jewish God. That does not necessarily tell us, though, what you feel, in reviewing all that you've learned about biology and physics, about the possibility that there is metaphysics in the world. That there are phenomena that exist outside of the bounds of scientific inquiry regardless of technological developments. Do you feel that there’s a possibility, or do you allow in your imagination or your spirituality, that metaphysics is real?
I think you asked two different questions. The first one is, are there phenomena which exist outside the bounds of scientific inquiry, independent of technological advance. And the other one is whether sort of metaphysics is real, which I take to mean that there’s some other plane on which these things are operating. I don’t think those are the same question. So for example, I think part of the reason is that the boundary between the things that are subject to scientific inquiry and the things which are not, which are, if you will—so I have two children who are different flavors of philosopher, so I don’t want to use philosophical as a pejorative—but I mean things in which—there are things where the ultimate arbiter of the correctness of our views is experiment, and there are things in which the ultimate arbiter of the correctness of our views is a kind of internal coherence. So that’s a boundary between science and philosophy, or physical science and then philosophy, modern science and philosophy. This is natural philosophy, so natural philosophy and the rest of things, maybe.
Remember that before Bell, we thought the boundary for that in quantum mechanics was in one place. There’s a class of theory—there’s a way of thinking about quantum mechanics which was the Copenhagen way, and then there were other ways of thinking about it, and as far as we could tell, these were experimentally indistinguishable. And so you could think about it that way if you wanted to, but it didn't matter. And so how you should think about it, then, was not something—that wasn’t a scientific question. It turned out that the boundary was in the wrong place. What Bell showed was that certain ways of thinking about things actually produce experimental predictions that could be tested, and it turned out the sort of orthodox view comes through, and so other views don’t. And now the question is, well, there’s still a question of which of the parts—you still have multiple possible views, and maybe that remains a matter of taste. But the lesson is that the boundary between the things which are subject to scientific inquiry and the things which are not, is not fixed for all time. And by the way, it didn't move one inch because of technology. It is not because somebody built a better widget; it’s because somebody understood that there was something new to measure, that there was a prediction of the theory that was somehow intricately hidden in its interpretation.
The obvious analogy here is to consciousness. There is what the philosophers refer to as the hard problem, right? That my internal experience of consciousness is not commensurate with the things that we measure about each other. What is the “me” that’s inside that’s experiencing all this? I am not an expert in this field. But finding all the correlates of being conscious is not the same as understanding what consciousness is. It doesn't explain why I, the me, have this experience. It says that while I'm having this experience, there are all these things going on. On the other hand, it might be that that incommensurability between my internal experience of being conscious and the things that I can make objective scientific measurements on, that that boundary could move. And it could move not because the resolution of fMRI got better, but because we understood something, like the way Bell did.
So, I guess I'm not so interested—I feel like the things that I don’t know how to study as a scientist, some of them feel like maybe if I thought more about it, I would figure out how to do it, and maybe that’s an interesting problem and somehow as a community we should spend some fraction of our time trying to move that boundary. So just like the people who work on the foundations of quantum mechanics, right? They're not philosophers. They're physicists. And they're trying to figure out, how do I move that boundary between the parts that we call science and the parts that we don’t?
With respect to life, there are lots of those things. You know, origin of life—I mean, how much of what we see is inevitable and how much was a low-probability event? Should we feel special in the universe? I don’t spend a lot of time thinking about this, because I don’t have any good ideas, but it also doesn't bother me a lot. And I don’t feel like everything that’s on the other side of my ability to speak as a scientist somehow becomes mystical. I have my mystical moments, but that’s not what drives it.
Bill, for my last question, I think we've covered the highfalutin’ pretty well, so I just want to ask you a very basic question on the premise of two things. One, as I've learned, physicists never retire. And two, if I can give you the traditional Jewish blessing that you should live to 120, [laugh] essentially twice as old as you are now. So however you define the remainder of your productive career, what’s left for you to accomplish? What excites you the most as you look to the future for yourself?
So, to put it in perspective—so, okay, I don’t want to spend too long at this, but—so I remember a conversation with Albert Libchaber once where we were talking about, what do scientists want in life? You'd like a paragraph in the book. So somewhere there’s a book that records our understanding of the universe. As I just told you, sometimes I'm capable of being mystical, right? So there’s the book that contains our understanding of the universe. And many of us—I think one of our ambitions as scientists is that there should be some part of that book that we contributed, unambiguously. Because we’re social animals, it would be nice if other people knew that we contributed to it. But I think deep down, what we really want is that we should know we contributed. [laugh] I think that’s true about a lot of scientists. So, that’s one version of the answer. You want to be sure that there’s something in there.
Because I decided that the part of science I was interested in was, in some sense, a long shot—it wasn’t a field that was well prepared at the moment that I entered it—it could be that my accomplishment as a scientist will have been to prepare the groundwork to really do something. And it could be that before I'm done, I will have unambiguously done something. I don’t spend a lot of time—I tend to take the view that I'm sure that I've contributed to the groundwork. That the field is nothing like what it was when I started. It is a much—there really is a thing to do as a theoretical physicist thinking about the living world. And pretty much everybody realizes that. And we squabble about exactly what it should look like and how important it is, and duh duh duh duh, but there’s a “there” there, to paraphrase Gertrude Stein. And, I don’t think I'm being unrealistic to say that I had something to do with the fact that there’s a “there” there. I'm not the only one. Lots of people contributed. It’s definitely the work of my generation. And there weren’t that many of us who got started, so, okay.
Now, is there something more? I'm not sure yet, and I don’t think that it will help me be productive—I don’t think that a productive way to spend the years which are left for me to be creative and original is to argue that something that I did in 1995 was really the thing. So I'm inclined toward the view that the work that my colleagues and I have done is groundwork. We've taken the field and wrestled it into a form where now we can—we, in the larger sense of we—can really do something.
And so, if we have this conversation again in ten years, and I still feel that my only contribution was groundwork, I would be disappointed. If I were 70 looking back and said, “Well, my window was that I was there to help lift the field up to this point and lay the foundations, and other people will carry it forward,” I would be disappointed. But that’s a kind of selfish thing. I want to do part of the thing that turns out to be the real thing, not just the foundation work, in the sense that we were just talking about. That the field has changed so much that it now seems like a real research program to say, “Let’s find those unifying theoretical ideas from which the behaviors of large numbers—a small number of unifying ideas from which the behavior of many different biological systems can be derived, quantitatively.”
I think that what we've done is to show that that is not an unreasonable research program. And if I'm a little more optimistic, I would say that if you look at the things that my colleagues and I have done, you can see hints of what those things might look like. But I would not say that the claim that we have found one of them is particularly defensible. And so as I say, I don’t want to spend my time trying to defend one of those claims. I want to spend my time trying to show that some of those ideas really do have that much power.
Bill, on that note, instead of me getting wistful and regretting already that we're wrapping up these conversations, let’s make a deal: let’s revisit this in ten years.
Okay! [laugh]
Bill, it has been so fun, and such a great honor, to spend all of this time with you. I really want to thank you for spending this time with me, and for giving of yourself and your insights, which will be a powerful, unique, and in many ways timeless reminder for the historical record that science is a human endeavor. It requires an appreciation of working with others, it requires a sense of wonderment, it requires a sense of history, which I really appreciate, of course. And it requires a drive that, perhaps more than anything else, is one of the gifts that you have not talked about explicitly, because that’s not your way, but I can do that for you. So for all of those reasons, thank you so much for doing this. I really appreciate it.
Well, David, it’s been a real pleasure. And I have to say that, maybe going back to studying philosophy of science when I was a kid, taking time to think about some of the things that have happened to me along the way, and where our field is in the broader context that you pushed me toward, has been a real pleasure. So thank you.
[End]
Postscript. David, we spent quite a lot of time on the support provided by my parents, and by the society as a whole in funding public schools etc. What you didn’t ask about---and, to my embarrassment, what I didn’t bring up---is the support during my adult life as a scientist. I have been very fortunate that Charlotte has been, and continues to be, incredibly supportive. It might have helped that she grew up with a theoretical physicist father, so she knew maybe more than I did about what to expect, but that shouldn’t take away from what she accomplished and made possible. There are many aspects of our lives together, and with our children, that I just didn’t have to think about. I don’t know that we really agreed on any of this when we started; I couldn’t have imagined all the things that need to be done to keep a family and household together and functioning. I don’t want to reduce this to a list, because that misses the deeper point. The fact that Charlotte was willing to take these things on herself was liberating for me, in ways that are difficult to put into words. It’s by now almost forty years of partnership that has enabled so many things, in life and in science. In the meantime, she accomplished so much on her own. Crucial years in the narrative we went through were years that she served on the Princeton Board of Education, and the impact of her work has been felt by thousands of kids since. The scientific community is organized to shower praise on a lucky few. At the very least we should remember that we don’t do these things alone.