
Photo courtesy of Elliott Lieb.

Interviewed by: David Zierler

Interview date: March 10, 2021

Location: Video conference

Disclaimer

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.

In footnotes or endnotes please cite AIP interviews like this:

Interview of Elliott Lieb by David Zierler on March 10, 2021,

Niels Bohr Library & Archives, American Institute of Physics,

College Park, MD USA,

www.aip.org/history-programs/niels-bohr-library/oral-histories/47475

For multiple citations, "AIP" is the preferred abbreviation for the location.

Interview with Dr. Elliott H. Lieb, professor of physics emeritus and professor of mathematical physics at Princeton University. Lieb opens the interview discussing the primary differences between physical mathematics and mathematical physics, and he outlines how modern mathematical ideas have been used in physics. The interview then looks to the past, to Lieb’s childhood and adolescence in New York City, where his passion for physics began. Lieb discusses his experience as a student at MIT, particularly his political involvement during the McCarthy Era. He also mentions his time working at Yeshiva University, and compares the political sentiment there to that at MIT and other universities around the United States. He talks about the work he was able to do abroad in the United Kingdom, Japan, and Sierra Leone, and about the lessons he learned from each of these experiences. Eventually, Lieb returned to Boston and joined the applied math group at MIT, while also working on the six-vertex ice model. In 1975, Lieb moved to Princeton, where he has collaborated with a number of scientists on a variety of topics and papers, including the 1987 AKLT Model (Affleck, Kennedy, Lieb, and Tasaki). The interview ends with Lieb looking to a future of continued experimentation and collaboration on the subjects that interest him most.

Transcript

This is David Zierler, oral historian for the American Institute of Physics. It is March 10th, 2021. I'm so happy to be here with Professor Elliott H. Lieb. Elliott, great to see you. Thank you for joining me.

Very nice to be here, and very nice to meet you.

To start, would you tell me please your title and institutional affiliation?

I am a Higgins professor of physics emeritus at Princeton University and also a professor of mathematical physics. I taught in both the physics and the math departments and voted in both faculty meetings.

From an administrative perspective, the title, or the field, of mathematical physics makes a lot of sense, but I wonder if you could explain a little more. Of course, all theoretical physicists use plenty of mathematics in their work. What is the significance of the specific field of "Mathematical Physics"? And why not Physical Mathematics?

There is such a thing as Physical Mathematics. Over the whole of history there has been a close connection between math and physics. Physical mathematics, very loosely speaking, is about solving problems or equations posed by physics. Mathematical physics, in the sense that I use it, perhaps starts with Wigner, von Neumann and others in the 1930s and 1940s. A second phase comes after the war with Wightman, Thirring and others. The idea is to use modern mathematics, such as group theory (first rejected as the Gruppenpest and later seen as essential), to provide new views of the foundations of physics and to show what insights can be gained through mathematical thinking. An example is the discovery by Derek Robinson and myself of the unexpected existence of a limiting velocity for the propagation of information in some solids. It is now, after some decades, widely used by theorists. It was not discovered by “physical thinking” but rather by the use of rigorous modern mathematical ideas and theorems.
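
The result mentioned here is now known as the Lieb–Robinson bound. In one common schematic form (constants and precise assumptions vary with the lattice model, so this is orientation rather than the exact statement), it reads:

```latex
% Lieb--Robinson bound (schematic form): for observables A, B supported on
% regions X, Y of a lattice spin system with local Hamiltonian H, and with
% the Heisenberg evolution A(t) = e^{iHt} A e^{-iHt},
\bigl\| \,[A(t),\,B]\, \bigr\|
  \;\le\; C\,\|A\|\,\|B\|\; e^{-\mu \left( d(X,Y) \,-\, v\,|t| \right)}
% d(X,Y) is the distance between the supports of A and B;
% v is the "Lieb--Robinson velocity": an effective maximal speed
% at which information can propagate through the solid.
```

Outside the effective "light cone" $d(X,Y) > v|t|$, the commutator is exponentially small, which is the precise sense in which information propagation has a limiting velocity.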

There are more than a few examples of this kind: Wigner’s invention of random matrix theory to understand nuclear energy levels, which flowered into a subfield of mathematics and physics. Thirring’s invention of his model to understand quantum field theory, which led to the widely studied Luttinger model and *bosonization* in physics. (Some people think Luttinger’s inspiration was Tomonaga, but that is not correct.)

I started life as a physicist, and I still am a physicist. My Ph.D. is in physics, and only later, much later, did I develop a serious interest in mathematics; I have written many pure math papers.

Today I will try to stick only to my work in physics.

That's interesting. It's usually the case that physicists start out liking math, but then they want to do things that are less abstract, and that's how they get to physics. But that wasn't the case for you.

It was not. My interests started with engineering and later physics, and later still pure math and its usefulness to problems of physics.

Let's go all the way back to the beginning. Let's start first with your parents. Tell me a little bit about them and where they're from.

My father was from Lithuania and my mother was from Bessarabia, which was then part of Romania. They came to this country around 1900 and eventually met in Boston.

Where did you grow up?

I was born in Boston in 1932. When I was five, we moved to New York, and I grew up there until I was ready to go to college in 1949. At that point we moved back to Boston, and I went to MIT.

What propelled the family to move to New York?

A job opportunity.

What was your father's profession?

He was an accountant, and my mother was a secretary in an import firm. That firm also moved to New York. My parents were not rich at all, and I would describe the economic level as middle class, but not on the upper side of the middle class.

What neighborhood did you grow up in?

In the Bronx, mostly on the Grand Concourse. The Bronx was different then. That section was very middle class. What killed the Bronx for a long time was, in my view, Robert Moses’ driving the Cross Bronx Expressway through the heart of the borough.

Let's talk about school. What kind of schools did you go to?

Public School 46, which was on 196th Street. The education was alright, but I don't remember very much about it.

What about middle school?

It was Creston Junior High School in the Bronx. Afterwards, I went to the Bronx High School of Science. A lot of physicists did that—some went on to win a Nobel Prize. Bronx Sci has changed very considerably since those days.

There was a competitive exam to get into high school?

I don't recall it. There was no tension or anxiety associated with an exam that I can remember. It is quite different today.

Was it in high school that you developed an interest in physics?

That interest comes later.

What were you interested in academically in high school?

I was interested in amateur (ham) radio; I wanted to build radios. I had a cousin who was interested in engineering and wanted to play with electricity. He was older than I, and I looked up to him and wanted to do the same things he did. Eventually, I got into ham radio and building radios.

This was a lucky time to get started. I was born in '32 and started high school in 1945, at the end of the war. There was a huge amount of surplus military equipment available, especially in New York, including all kinds of vacuum tubes, small ones for receivers and big transmitting ones. Today transistors are ubiquitous, but in those days there were only vacuum tubes. They are still useful for high-power applications, however.

An important step in doing ham radio was to get a license to transmit. You still do, I guess. And you had to pass a test in Morse code, which you had to be able to receive at 13 words per minute. By professional standards this is very low, but it's tough. I had to memorize the code, and since I'm not a great memorizer it took two attempts. I was very proud when I got my license, which was W1SMS, later changed to W2ZHS for the Boston area.

I wanted to be an electrical engineer, and I had my path cut out, I felt. I didn't learn much physics in Bronx Science, or chemistry. The teaching in those subjects was not spectacular. I won't say it was bad, but it was not gripping. Mathematics teaching was better, as was English Literature.

Do you remember why you chose MIT for your undergraduate school in 1949?

It’s an obvious choice for a future electrical engineer. And MIT is in Cambridge, which was not unfamiliar to me because we had family in the area. I was lucky to be admitted. Nowadays, getting into college is such a stressful experience. People worry about which college to attend and what specifics they offer. I would have been happy to be in any good college. Parents, then, did not take you around to look at different colleges and ask “do you like the dormitories” or “what do you think about the food in the cafeteria,” and that kind of thing.

So, you got to MIT, and you wanted to be an electrical engineering major?

Yes. However, it was in my freshman year that I decided on physics. Of course, we didn't have to choose a major at that point, but I fell under the influence of an assistant professor in freshman physics.

Who was the professor that was so influential to you in the freshman year?

The first influence was a senior professor named Viki Weisskopf, who later became the chair of the Physics Department. He was a major figure in Los Alamos, and my uncle, who had a big influence on my life then, knew him. My uncle owned an art bookstore in Boston, and Weisskopf, a Viennese, was very sophisticated about art. Somehow, I got on his good side through my uncle, and he encouraged me to go to MIT.

The second influence came in my freshman year from the assistant professor named Matthew Sands. He was one of the two people who wrote up the Feynman Lectures. Before Sands was at MIT, he did research at Los Alamos. He had an enormous feeling for physics; I don't know how to explain it. In elementary physics, you first learn mechanics, and I found this very difficult and was trying hard to really get to the bottom of the subject. Somehow, Sands took a shine to me, and eventually he made things totally clear for me. The penny dropped during the middle of my first semester. I was very happy, and so was he, and I got all As after that. He had such an influence on me that I found physics more fun than electrical engineering; that’s when I decided I wanted to be a physicist.

My father's reaction—this was in 1949—was, "Okay, you can be a physicist if you want to, but you'll be poor. Go ahead." He didn't object, but said, "Who's going to hire you?"

Did your father understand what it meant to be a physicist?

Yes, sort of. He didn't know any science really. Of course, he knew Einstein as a physicist.

Most people had not heard of physicists, however. They had heard of chemists. One of the people who influenced me at MIT was Jerrold Zacharias, who told me he had to explain to his mother that he wanted to be a physicist. She didn't say he would be poor, but she just didn't understand what it meant. He replied, "Well, it's like a chemist." People understood what a chemist was. Chemists did things. “Physicist” made no sense. The news from Los Alamos hadn't yet completely seeped into the community in 1949.

From what people knew, they would think about the atomic bomb if they would think about anything related to physics.

After the war, people were stunned with having to come to terms with a new world of which they had no concept, and physics was part of it. A few years later, the government realized that physics was very important, and money started to flow into the sciences, especially physics. Then came the Soviet satellite Sputnik, which really alarmed people, and the government was now ready to put any amount of money you wanted into science.

You were at MIT at the height of McCarthyism.

Yes, McCarthyism colored everything then. I was very much involved in politics. Let me mention a couple of things. First, when I was a sophomore in 1950, we had the Korean War, which was particularly devastating because WWII had just ended. Everybody had been breathing a sigh of relief, and now there was this new war, and if you were my age at that time, you were subject to the military draft.

We, as undergraduates, had to take ROTC. Now it is optional, but then it was a required course. Two years of it, and you had to pass it—as you had to pass Phys. Ed. and swimming to graduate.

I also had to pass Mechanical Drawing and learn how to put ink into a draftsman’s pen! One day we had a special ROTC class with a US Army colonel, who said, "You know, we're now at war in Korea, and you have two choices. The first one is to continue with ROTC for another two years, which is optional. If you do that, you get a stipend, graduate as a lieutenant, and go to Korea as an officer. The second alternative is to do nothing, in which case you get drafted soon and go to Korea as a private."

Easy choice?

No. Not at all. I decided I would go as a private, because once you committed to this two-year deal, it was not a joke. If you didn't go to Korea, you went somewhere else, possibly for a long time. They weren't giving this option away for free. The choice was not so obvious. My friend, Martin, decided to accept the option of two more years of ROTC. I forget where he went; he did not go to Korea, but he had to go to the Army.

Now, I was at that point living in a little house my parents had bought in Boston, very close to the border with Brookline, a rich area. Boston was a mixed area. Whether you were drafted or not depended a great deal on how many available men there were for the draft board to select from. Students of science and theology could get a deferment if there were enough men in the pool. Had I lived a few blocks over, I would have been in the district of the Brookline draft board. In Boston there were not as many college students, proportionally, and my chances were, I suppose, better. So, it was not as equitable as it appeared. Years later, in the Vietnam War, there was a draft lottery, which made it a bit more equitable. Even so, there were many demonstrations against the Vietnam War and the draft.

In the same year there were Senator McCarthy’s hearings, which were disastrous. As a student, I didn't realize how devastating the situation was for the faculty. Some people really suffered. I remember very clearly that one of our Dutch mathematics professors, Dirk Struik, was in trouble. Not with the McCarthy committee, but with the state-run Massachusetts analogue. Everybody was wanting to imitate McCarthy in those days; it wasn't just McCarthy. Struik was accused of trying to overthrow the government of the State of Massachusetts. Why? Well, he was an avowed Marxist and he'd written on this subject. This seemed at the time to label him as someone who wanted to overthrow the government. What did MIT do? They weren't going to fire him exactly, but they suspended him with pay. He was allowed to go to the library, but that was it. He wasn't supposed to step onto the campus otherwise. Some of us students were upset by this, and we formed a little student committee called “Students for Struik.” Most of our effort went just to fighting for the right to exist as a valid campus student organization and hold meetings. Many in the MIT administration and the student body thought we were overstepping our bounds. I don't remember whether we were allowed to organize or not, but it didn't matter anymore because I eventually graduated and, eventually, Struik was let off the hook, not because the accusation was ridiculous but because the US Supreme Court correctly decided that the matter rested with Federal authority, not State authority.

How would you describe your politics as an undergraduate?

Leftwing. My family was, and I was, and I still am, although a little tempered, as we all become eventually. During the Struik affair, most people approved of McCarthyism. It was a terrible time. If you haven't been through it, you don't realize the pall that hung over the academic world. Some people went to Canada, as happened again during the Vietnam War.

At one point, while I was the president of the Students for Struik initiative, I went off to a conference on academic freedom at the University of Michigan in Ann Arbor. I quickly realized that the organizers were really a whole lot more left wing than I was. They were really nasty if you didn't completely toe the party line. So, I went back home after day one. But someone must have informed the FBI of my attendance, because the FBI knew all about it. Remember, I was a student at this point. Years later, when I wanted to get Q clearance, the FBI knew all about my activities and asked about my political opinions. Did I want to overthrow the government? Stuff like that. Eventually, they were quite reasonable about it and said, "Oh, that's fine. No problems." But this event revealed that I was being surveilled as an undergraduate, and this could have affected my later career. Under the Freedom of Information Act, I got my FBI file with all names redacted. I had earned a bit of a reputation as being not the usual MIT student. An MIT student was not expected to be outspoken on politics, especially on the left.

Returning to my academic life at MIT: I came under the influence of other professors besides Viki Weisskopf, Matt Sands and Jerrold Zacharias. Francis Friedman was a great teacher of physics. I took a summer job at one of the laboratories in the famous Building 20, where radar and other electronics were developed. The leader of that particular lab was Peter Demos; his coworker was Israel Halpern. There was also the mathematician Isidor Singer, a winner of the Abel Prize. He taught me linear algebra and we became good friends. I also admired Norbert Wiener, who was sympathetic to the Students for Struik initiative and who didn't like McCarthyism, either. He was one of the very few professors who outspokenly supported us. This might seem surprising because Wiener was not somebody you would think of as very political. Other people found ways to evade the reality of what was actually going on in front of them. They were almost all afraid, really afraid, with good reason.

At what point did you realize you wanted to continue on in graduate school?

I never thought I would do anything other than go to graduate school, since I had decided to be a physicist. That was just the next logical step.

Was it always going to be theory that you would focus on?

Yes.

How were you in the lab? Did you do well by experimentation at all?

I'd been working all those high school years doing electronics. So, I thought I knew everything, which of course was wrong. I kept trying to avoid the lab courses, and they were nice to me and let me skip many of them. David Finkelstein was a lab instructor when I had a lab course. We became good friends when I blew up a glass bottle by corking it the wrong way. He was furious, but he forgave me. We crossed paths again later when we both were professors at Yeshiva University. He was an enormously talented, true physicist, both theoretical and experimental. He worked on general relativity, and he did experiments on ball lightning.

What kind of advice did you get about where to go, whom to work with?

I always went for advice to Viki Weisskopf. "I want to go to Europe—I want to see the world," I said. He thought about it and replied, “If you want to go to Europe, there are few places to go where you can get on with English, at least.” Remember, we're talking about 1953, which is only eight years after the war, and Europe was rebuilding. One good place for theoretical physics was Birmingham, where Rudolf Peierls headed a department.

Did you bother applying to any schools in the US?

No.

Weisskopf specifically suggested Birmingham?

Yes. I had a good time there, but it was a bit confining. I wanted to study theoretical physics. I didn't know the concept of modern mathematical physics at that time. I just knew theoretical physics and wanted to do it.

How much math did you take at MIT?

I took the advanced linear algebra course with Isidor Singer (of Atiyah-Singer Index Theorem fame), which was tough, and it was more than your usual linear algebra course. Also, a course in analysis with Warren Ambrose, a very nice guy. And, of course, the usual calculus courses. I took these courses and I liked them, but I didn't see them as part of my future in any way—they were just interesting, and I wanted to learn some math.

As to physics, I took quantum mechanics with Felix Villars. I realized, when I was reading books on quantum mechanics, that the physics was hard and I had to learn a lot, but the math was sketchy. The textbooks skirted big issues and didn’t define things very clearly. There was a lot more to be done in quantum mechanics than what was in the books or in group theory. I realized the subject needed polishing up, to which I eventually contributed.

The department at the University of Birmingham, was it called the Department of Mathematical Physics?

Yes, it was. There was Sam Edwards, who had studied with Schwinger. Also, Gerald Brown, a very well-regarded American physicist who was labeled a Communist and who had to leave the US. They took his passport away and he couldn't get back into the country for several years because, if he returned to the US without a passport, he would not be able to leave again. Not locked up, but locked in. He was happy in Birmingham. The British of course didn't need his passport and they let him stay. In addition, there was Peierls, of course, and Dick Dalitz. Also, John Valatin, who worked on superconductivity (the Bogolyubov-Valatin transformation). It was a strong group.

What is the history of the department of mathematical physics at Birmingham? Is it unusual? Do places like Oxford and Cambridge have similar departments?

It is due to Peierls, who had to leave Germany in the 1930s and got a position at Birmingham. Later on, he moved to Oxford and set up the department there. Why would Peierls call his department “mathematical physics”? His definition of mathematical physics was not the same as my current one. He didn't care too much for proofs. And this brings us to a difficult question: how much mathematics is enough for physics? Some, obviously. And the answer is constantly changing. Nowadays, you need to know some fancy mathematics way beyond calculus, even algebraic geometry. Mostly, physicists think that the amount of mathematics you need is what they happen to know. And so Peierls did not use the term mathematical physics in the way I would use it. He used it in the sense of theoretical physics. Only he was going to call it mathematical physics because it was going to be a bit more serious mathematically, but not terribly serious.

Did you take more math classes as a graduate student as a result of being in that department?

No, I didn't. I didn't even know the math department existed, so to speak, nor did other students, which was a pity.

Who was your graduate advisor?

At first it was Gerry Brown who worked on perturbation theory for quantum field theory. He was the first person I worked with. Second, there was my final advisor, Sam Edwards.

What was Sam Edwards known for? What was his research?

I don't know exactly what it was at the time, but he was doing things with functional integrals. It was not very many years after his Ph.D., so he was still in a state of looking around. Eventually he applied his ideas to polymers and became very well-known in polymer theory. He even received a Wolf Prize. He was never really very mathematical, but he was supportive and we got along just fine. I wrote a not very good thesis on functional integrals for Euclidean quantum field theory.

What were some of the conclusions of your thesis research on functional integrals?

I was able to figure out how to approximate Euclidean functional integrals in such a way that it would at the same time give me many quantities, not just total energy but correlation functions as well. I later discovered that it really is more or less the same as Hartree theory.

How so? What makes it similar?

If you take away the functional integral aspects and look at it with a cold eye, and if you have enough perspective, you can see it's really rediscovering Hartree theory, but in a very different way. Hartree theory for a field theory. So, I don't think my thesis was much use for anything. But it was a thesis, and it was not mind-boggling, unlike some of the more interesting theses of my fellow students (Stanley Mandelstam, James Langer, Walter Marshall and John Bell).

You said you wanted to see the world, but did you have an opportunity to travel beyond Birmingham as a graduate student?

Oh yes. With my NSF student stipend of the order of $3000 a year I was able to buy a car. It was not a bad salary, but by English standards, it was terrific. I was getting around England, Scotland and Wales and learned how to hike.

You're from the Bronx. Why would you know how to hike?

Exactly. That’s why I wanted to see the world. I took a summer off and drove around Europe: Spain, Italy, Yugoslavia, Germany, Denmark, the Netherlands, France, Switzerland. It opened my eyes. It was a very important thing for me, and I now had a more global perspective, which many people didn't have in 1953 unless they were rich or had been in the Army, and then they got a skewed perspective.

After Birmingham, I went to Japan, which was an unusual decision only a few years after the war. The world was so devastated, and the general feeling in the US was: We won the war, and we could go anywhere we wanted to go and do anything.

After you defended your thesis in 1956, how did the opportunity to go study in Japan come about?

I mentioned my uncle earlier, who ran an art bookstore on Boylston Street in Boston. The Boston Book and Art Shop was unique and specialized in Japanese prints (ukiyo-e). I sort of fell in love with them in my youth and I had the strong desire to go to Japan. I applied to the Fulbright Commission from Birmingham before I graduated to go to the Yukawa Institute, which was built as part of Kyoto University for Hideki Yukawa, who had won a Nobel Physics prize. I got the Fulbright grant, and I went, and I think I must have been the first foreign researcher who was there for a whole year.

Could you feel the impact of the war? Did it feel palpable to you still?

No. By 1956, Japan had managed to rebuild quite a lot, as Germany had. By 1964 the Shinkansen bullet train from Tokyo to Osaka, the first high-speed line in the world, was operational.

What were your impressions culturally of Japan?

There are many things to say. While I was in Birmingham, I shared an office with a Japanese visitor, Shiro Yoshida. The deal was, I would help him polish his English, and he would teach me basic Japanese so that I could get around. When I arrived in Japan, I was again lucky. The Japanese hosts kindly felt obliged to get me a place to live, and they got me a room—this was another turning point in my life—in the house of a painter. Kanō-sensei was a descendant of the Kanō school and had many students and a marvelous, large house with a very big garden. He was a very friendly, open-minded person, interested in foreign cultures. Neither he nor anybody in the house spoke any English at all. So, I had to learn more Japanese. I lived with him and his family, and many nights we would sit down and discuss philosophy in my bad Japanese. I will never forget this.

I went all over Japan and had to immerse myself completely in the culture. In the Yukawa Institute, the scientists spoke English to me, but mostly not the staff. Many years later, at a retrospective meeting, a list of all the seminars was compiled, and mine was the first listed since they began to keep a record.

After one year, I was thinking of staying for a second year, but Peierls wrote to me, saying “If you want to have a career in physics, you’d better come back and get into the American system." So, I returned.

What kind of research did you do in Japan?

I wrote a paper with a young colleague, Kazuo Yamazaki, which went unnoticed by anybody until maybe the last 20 years. It turned out to be useful for the mathematical physics of the polaron problem, invented by Herbert Froehlich. The polaron is a result of the interaction of an electron and the vibrational modes of a crystal. It's like a miniature field theory, only much simpler. But still, it’s a tough problem. Richard Feynman made it popular at the time by applying his ideas of functional integrals. He was able to get the known weak coupling limit correctly, but not the strong coupling limit, which people had for years thought they knew. Finally, in 1981 Monroe Donsker and Srinivasa Varadhan, at my urging, proved that limit rigorously by using Feynman’s formulation of it as a functional integral. In 1997, Larry Thomas and I proved it using coherent states.

Two years after I left Japan, in 1958, when I was at Cornell as Bethe’s postdoc, Feynman was there. “What do you do, young man?” he asked. When I said, “Well, I work on the polaron problem,” I thought he'd be very happy. He asked, “What did you do on the polaron problem?” I said, “I proved that the ground state energy is bounded from below.” Remember, in field theory, you usually get minus infinity, but in this case you don't. This is one of the exceptional cases. And I said, “I proved it has a lower bound. It's off by a factor of three from what people believe it should be, but at least it's a bound.” I still remember how very upset he became and said, "Real physicists don't do things like that."
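
Loosely, in one common normalization (units ℏ = m = ω = 1; conventions vary by author, so take this as a sketch rather than the exact form used in any of the papers mentioned), the Fröhlich polaron Hamiltonian and the ground-state energy behavior discussed here can be summarized as:

```latex
% Fr\"ohlich polaron Hamiltonian, one common normalization
% (units \hbar = m = \omega = 1; \alpha > 0 is the coupling constant):
H_\alpha \;=\; p^2 \;+\; \int_{\mathbb{R}^3} a_k^\dagger a_k \,\mathrm{d}k
  \;+\; \frac{\sqrt{\alpha}}{\sqrt{2}\,\pi} \int_{\mathbb{R}^3}
        \frac{\mathrm{d}k}{|k|}
        \left( a_k\, e^{i k \cdot x} + a_k^\dagger\, e^{-i k \cdot x} \right)

% The ground-state energy E(\alpha) is finite (bounded below), with
E(\alpha) \;\approx\; -\alpha
  \quad (\alpha \to 0,\ \text{weak coupling}),
\qquad
E(\alpha) \;\approx\; -c_P\,\alpha^2
  \quad (\alpha \to \infty,\ \text{Pekar strong-coupling limit}).
```

Here $c_P$ is the Pekar constant (roughly $0.109$ in these units); the strong-coupling asymptotics is the limit that Donsker and Varadhan, and later Lieb and Thomas, established rigorously.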

What do you think he meant by that and what was your reaction?

You know, here I am, a young postdoc, and there was Feynman. I suppose he meant well. By the way, that was one of my first applications of mathematics to a physical problem.

I was waiting for this.

Anyway, back to Japan, which was very different in those days from what it is now. Remember, it was only eleven years after the war, and Japan was like the old Japan still. Nowadays, it's recognizable as a distinct, unique culture, though not to the same extent.

Your next stop before Cornell was at the University of Illinois in Champaign-Urbana. Is that where you got more involved in solid state physics?

I got interested in all kinds of things, but not in particle physics—more in solid state. I was officially a postdoc, and I met a German who was on leave there, Heinz Koppe. We spent a lot of time together and wrote a paper on scattering theory.

You were at Illinois from '57-58, right after Sputnik. Why go for a second postdoc? I assume there were jobs galore available at that point.

Yes and no. The job market was only just beginning to improve. Besides, I felt I should be a postdoc. I hadn't really done anything significant. I didn't feel I was good enough for a serious position. I managed to get a postdoc with Hans Bethe at Cornell, possibly due to Peierls’ influence.

Do you remember what Bethe was working on when you connected with him in 1958?

Among other things he was interested in the Bose gas, which was very popular at that time, and I wanted to look into its ground state energy. There was a 1957 paper by Lee, Huang, and Yang on the second order correction to the ground state energy for a low-density gas, but nobody had proved anything. The first order term for the energy per particle of the Bose gas is *4 pi rho a*, where *a* is the scattering length of the two-body potential and *rho* is the density of the particles. Everybody agreed that this first order term was correct. It had been asserted in 1929 by Wilhelm Lenz based on an argument that was okay as arguments go, but hardly a proof.

By the 50’s quantum mechanics had evolved to the extent that some kind of a ‘proof’ was expected. Not a rigorous proof, but something you can convincingly demonstrate on the blackboard. And there wasn't any, really. But Lenz had the right answer. Lee, Huang, and Yang came along and managed to get this *4 pi rho a* result by using something called the pseudo-potential, in which you replace the potential by its scattering length times a delta function. That is a big jump; one then continues by using perturbation theory. I wanted to prove *4 pi rho a* and I thought you should not have to use pseudo-potentials for that. You should be able to derive it in a straightforward way, even by physics standards. When you use the pseudo-potential, you're putting the answer in a priori. While that's a good thing to do if it gives you the right answer, you would like to improve the method. In 1947, ten years earlier, Bogolyubov had also obtained these first two terms in the energy, more rigorously, but his result was not well recognized at the time. Bethe was also interested in this question. It was only at the very end of my two-year stay at Cornell that I managed to figure out a way to make the pseudo-potential a little bit more rigorous. Bethe submitted my paper to the PNAS. Mark Kac, a famous mathematician, came to my lecture on this work. He was impressed because it was using Fourier analysis in a novel way.

The end of the *4 pi rho a* story came in 1998, 38 years later, when Jakob Yngvason (with whom I had worked on the second law of thermodynamics) and I found a rigorous proof. Much later, the second order correction proposed by LHY and Bogolyubov was finally proved in 2020 by Søren Fournais and Jan Philip Solovej. This *69-year* story was a very long one by the standards of theoretical physics. Solovej was a student of mine and we wrote many papers together. One ongoing theme we continue to study is coherent states, with the goal of proving that Alfred Wehrl’s entropy conjecture holds for many Lie groups, not just Gaussian coherent states and the Heisenberg group.
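For reference, the expansion under discussion can be written out. In units where *hbar = 2m = 1* (a choice of normalization made here for compactness, not taken from the interview), the low-density ground state energy per particle reads:

```latex
\frac{E}{N} \;=\; 4\pi\rho a\left(1 \;+\; \frac{128}{15\sqrt{\pi}}\,\sqrt{\rho a^{3}} \;+\; \cdots\right),
```

where the leading term is the Lenz formula proved by Lieb and Yngvason, and the second term is the Lee-Huang-Yang correction proved by Fournais and Solovej.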

Was Bethe a good mentor?

Yes, indeed. He made me understand which way was up and taught me standards. I liked him a lot. I was with him for two years, but I hadn’t accomplished a great deal.

How did the opportunity at IBM in 1960 come about?

I was looking for a stable, long-term job this time. One summer, before I went to Birmingham, I worked in an electronics company, and that life was not bad, but I really wanted to get to basic research and wanted an academic job. Robert Brout, who later invented the Higgs boson (it wasn’t called that at the time), took a shine to me and helped me find this job. I was supposed to go to a big APS March meeting and look for Phil Anderson, who was supposedly willing to interview me for a job at Bell Labs. I went and approached someone I took to be Philip Anderson, but it was Elliott Montroll. That's how I got to IBM.

Montroll was an important physicist—one of the people who married mathematics and physics. He worked a lot on the Ising model and similar models and had a wide range of interests. He was just starting up the new IBM lab at Yorktown Heights in 1960, which is exactly the time we're talking about. He was also supposed to start up a small department of mathematical and theoretical physics called Basic Physics, where researchers would not be obliged to work on industry problems.

What was the research you were doing when you first got there?

I came with an open mind and met Dan Mattis there, and Ted Schultz, who had just done a thesis on Feynman's polaron work. We hit it off and started talking and finding common interests. We wrote a big paper in the Reviews of Modern Physics about how to solve the Ising model by turning it into a problem of fermions by applying a Jordan-Wigner transformation to the transfer matrix. In this way, we made Onsager’s original solution more accessible. It is now the way most theoretical physicists think about the model.

We did many more things together in those days. Probably my most cited paper is one with Mattis and Schultz called *Two soluble models of an anti-ferromagnetic chain*. It's in Annals of Physics and had over 4,000 citations by 2021. People continue to refer to it, even now. Another important paper by Mattis and myself was *The theory of ferromagnetism and the ordering of electronic energy levels*, which had a big impact. It was understood from the time of Heisenberg that the reason for ferromagnetism—why the spins want to line up in a parallel way in a magnet like iron, if given the chance—is that the spins are attached to electrons, and electrons have an electric charge, and they like to repel each other because they're charged. One way to construct a quantum-mechanical wave function F that keeps particles away from each other is to construct an anti-symmetric function since, when you switch the position of two particles, F has to change sign by antisymmetry, which means when they're on top of each other the only possibility is for F to be zero. The only number which is minus itself is zero. Thus, if you write down an anti-symmetric function then you automatically keep the particles away from each other, which lowers the repulsive electrostatic energy. But by the Pauli principle, if the spatial coordinate part of the wave function is anti-symmetric, the spin part has to be symmetric, because the total space-spin function has to be anti-symmetric. Thus, the Pauli principle causes the electron spins to line up.

This was Heisenberg’s explanation for ferromagnetism. Everybody believed it, including Peierls and Bethe. What we proved in our paper was that this argument was flawed—in one dimension, at least. This was one of my first really important purely mathematical proofs, putting me outside the realm of any physics textbook at that time. Despite what the Pauli-principle argument suggests, the spin in the ground state is automatically zero, not magnetized, but zero. The truth of the matter is, keeping the electrons apart is one thing, but forcing them to be anti-symmetric is another thing—although it's convenient, it's not the answer. In fact, the ground state is always anti-ferromagnetic in one dimension. It's never ferromagnetic. The spins are all opposing each other. The ground state has spin zero. Peierls looked at our paper and had trouble with our conclusion. Likewise, Bethe. Something must be wrong! It took a while, but eventually they came to realize that we were right. This Heisenberg story about the anti-symmetric wave function, while it might be true in some circumstances, is certainly not true in one dimension. No matter what potential you had, repulsive or attractive, the ground state spin is always zero. The mathematics that went into our proof is not deep, but it's outside of what you find in any textbook. Peierls wrote it up later in his beautiful little book *Surprises in Theoretical Physics*.

Let me make two observations from what I'm hearing right now about IBM. First is to state the obvious, IBM supports basic science. You're not doing anything that's related to their corporate interests?

That's right.

The second is, you're really starting to come into your own. Things click for you at IBM where now you're contributing significantly.

Actually, the three of us thrived on one another’s presence. After one year at IBM, I went on a paid leave to Sierra Leone. Now we're in 1961, when the world is our oyster. Africa is opening and getting rid of the colonialists.

This is the background: At that time there were two groups of people at MIT, physicists and mathematicians, who independently had the following idea: We are ready now for a complete change of our school curricula in both these fields. In mathematics, it was called the New Math. The idea was to change math completely. We're not going to do finger exercises from morning to night. We're going to do *concepts*. And this really worked for a while, but the average teacher and the average student could not handle it. Then there was the new physics, proposed by people including Jerrold Zacharias, Philip Morrison and Francis Friedman at MIT. The MIT physicists started the *Physical Science Study Committee* (PSSC) and wrote textbooks for high school physics courses. By the way, when I was a student at MIT, Zacharias organized an evening seminar for highly motivated physics undergraduates, and I was lucky enough to be included in that group. We had seminars where I really learned to think like a physicist, or tried to, anyway. The PSSC had good intentions, but the concept never really got off the ground, unfortunately. Eventually it fell away, but for a while there were teachers who were educated to deal with the PSSC course in high schools, and many schools adopted it. Anyway, that was the zeitgeist: it's a brave new world, with the UN and everything, and we were going to do everything better now. Start again from scratch. It's very hard to convey this feeling if you haven't been through this brief euphoric phase. The 60s were a time full of optimism, though it saw events like the Kennedy assassination.

Let's go back to 1961, when you head off to Sierra Leone.

I'm coming to that. The PSSC and the *New Math* folks had a big conference at MIT. Their second goal was to redo the entire educational system of Africa. One of the invitees was Davidson Nicol, the president of the University of Sierra Leone, one of the oldest universities in Africa. He was a noted biologist and had studied and worked at Cambridge. One of his young lecturers in applied mathematics was going to MIT on leave. PSSC suggested that I replace him because at that time, I was one of the few people who was adventurous enough to do that kind of thing. Going off to Sierra Leone was pretty unusual at that time. I went with my wife and infant, with financial support from IBM and Emanuel Piore, its head of research. I stayed in Freetown for one year, after which I returned to IBM for one more year.

How did you get there?

We flew to Tripoli, Libya, and then to Freetown. The plane landed at Lungi, which is separated from Freetown by a body of water. You took a boat and eventually ended up in Freetown.

What did you teach?

Applied mathematics. That was what was needed.

To what level students?

They were at the British level of undergraduates, which was pretty advanced. The final exam was composed jointly with the University of Durham. It was really at a high level. But it was British applied mathematics, so you learned about the motion of ferris wheels and gyroscopes and so forth. I had many experiences during this one year—one of the most important years of my life. At one point, I discovered that the physics department had a beautiful big gyroscope. You pulled the string and it nutated, and I said to myself: here are these kids who are learning all kinds of advanced Lagrangian mechanics and stuff like that, which kids in the US normally don't learn in an undergraduate course. But they have never seen any of this in real life—they've never seen a ferris wheel. So, when I found the gyroscope, I brought it into class and when I wound it up, the smartest kid in the class got up—I'll never forget this—and asked, "Is this physics or mathematics?" I said, "What do you think applied mathematics is all about?" The student replied, "Well, it's not mathematics. It's got nothing to do with the exam." And he walked out of the classroom. I could see somehow everybody was sympathetic to him, but they didn't have his nerve. I went on with the demonstration. Good for him, he was holding his own.

Sierra Leone had become independent in 1961. The British Queen came to visit, and this caused a lot of excitement. Time Magazine published some pictures of the Royals that were perceived as disrespectful of the country. Sierra Leoneans did not like this, and politicians started saying that Americans are terrible people and should leave, even though there were very few Americans in this formerly British colony—the Americans were in Liberia. So, I wrote a letter to the newspaper and said, "There are Americans here like myself who are trying to contribute something. This is just a bad joke by Time Magazine, and this is part of journalism and nothing to get excited about." Then I was attacked with newspaper headlines like ‘Doctor Lieb should be deported.’ It was terrible. But then I understood a bit about politics because the leader of the opposition party was actually using this event to make himself popular. I didn't leave, I wasn't thrown out, I stayed the course. A few years after I left there were two coups in succession, and then this opposition leader, Siaka Stevens, won the election. It was a sad history, seen often in post-colonial Africa. When he became president of Sierra Leone, bodies started floating down the river. The economy got worse but nobody could touch him—everybody was afraid. When he died the economy was in shambles.

Anyway, that was an eye-opening part of my life when I perceived how politics really worked and what lay ahead for Africa. I wrote an article in an education journal about it that some people were not happy with. The PSSC folks didn’t want to hear what I reported, either, because it challenged their ideals.

What did you say?

I said that Sierra Leone is a country in which the education is mainly for people who are going to be gentlemen, so to speak. There is no focus on trade schools, unlike what I believe the French did in Ivory Coast, for example, where people were taught practical things. Ivory Coast was a model country, by the way, until it fell apart about a decade ago. But in Sierra Leone, everybody wanted to work for the government and have a desk job, which is not good if you want to have development. Somebody's got to string the telephone wires. Some people didn't like what I wrote. I went back to IBM the following summer, as agreed.

This was an amazing experience that you had.

During that stay I was living in the nice house I was given, and I was musing about the Bose gas problem that I had worried about with Bethe. It occurred to me that I could actually solve the problem when bosons are moving in one dimension instead of three. And if the interaction potential was a delta function instead of a regular potential, then I could solve everything using Bethe's Ansatz method. I worked everything out, including the excitation spectrum. When I returned to IBM, in order to complete this job, I needed a numerical analyst to solve the equations, draw conclusions, make a graph, etc. In Yorktown Heights I met a numerical analyst named Werner Liniger. We divided the work into two papers in *Physical Review*: the basic part was by Liniger and myself, and the other part, about the excitation spectrum, was under my own name. This is now known as the *Lieb-Liniger Model*. It is one of the basic models in many-body quantum mechanics. It's still very popular among theorists. Improbable as it would have sounded at the time, it can nowadays be realized in real experiments and the predicted properties confirmed. That came out of my visit to Sierra Leone.
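The Bethe-Ansatz solution mentioned here can be sketched numerically. This is my own minimal illustration, not Lieb and Liniger's original computation: it assumes the standard logarithmic form of the Bethe equations in units *hbar = 2m = 1*, and the function name `bethe_roots` and the parameter values (four bosons, ring length 4, coupling 2) are arbitrary choices for demonstration.

```python
# Sketch: ground-state Bethe-Ansatz equations for the Lieb-Liniger model,
# k_j L + 2 * sum_l arctan((k_j - k_l)/c) = 2 pi I_j,
# solved numerically (units hbar = 2m = 1; repulsive coupling c > 0).
import numpy as np
from scipy.optimize import fsolve

def bethe_roots(N, L, c):
    """Ground-state quasi-momenta k_j of N delta-interacting bosons on a ring of length L."""
    I = np.arange(N) - (N - 1) / 2.0          # Bethe quantum numbers (half-integers for N even)
    def eqs(k):
        # Residuals of the logarithmic Bethe equations.
        return (k * L
                + 2 * np.arctan((k[:, None] - k[None, :]) / c).sum(axis=1)
                - 2 * np.pi * I)
    k0 = 2 * np.pi * I / L                    # free-fermion guess (exact as c -> infinity)
    return fsolve(eqs, k0)

k = bethe_roots(N=4, L=4.0, c=2.0)            # quasi-momenta
E = (k ** 2).sum()                            # ground-state energy
```

A quick sanity check on the solution: as the coupling c grows, the roots approach the free-fermion momenta 2*pi*I_j/L (the impenetrable-boson limit), while for finite c the energy lies strictly between 0 and the free-fermion value.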

How did the job at Yeshiva University in 1963 come about? Did you specifically want to be in an academic environment?

No, but Joel Lebowitz, who had just joined the Belfer Graduate School of Science at Yeshiva University, was building a physics group. He wanted me to join because he knew about my work with Liniger. I accepted and I went there as an associate professor, but I still lived in Yorktown Heights and had a one-hour commute.

And it was a good program?

Yes. The undergraduate program, where I taught a course, was also good. Yeshiva University had the idea that the students could do two courses: a regular undergraduate course, say in physics, and at the same time devote an equal amount of time to studying the Talmud.

Most of your undergraduate students were religious?

Yes. They had to be because they had to devote an enormous amount of time to studying the Talmud. I had my first PhD student there, who was not religious.

The Belfer Graduate School was a secular environment?

Yes, it was totally secular. There were faculty there such as Lenny Susskind, Yakir Aharonov, Allen Cromer, David Finkelstein and others. It was a very good working environment.

I also continued to work with Dan Mattis. We solved another model in condensed matter physics, which was invented by Joaquin Luttinger. He proposed a model in one dimension, based on a field theory model invented by Walter Thirring years earlier, and he wrote a paper on its solution in the context of condensed matter physics. Mattis and I wrote a book called *Mathematical Physics in One Dimension*, in which we put together all the models in one dimension that had ever been worked on that we could find. Freeman Dyson wrote a very favorable review article about it in *Physics Today*. We wanted to include Luttinger’s model in the book, and we realized that his solution contained an inconsistency.

In the model there was something wrong?

No, the model was fine, but the solution wasn’t—there was a very subtle error. When Mattis and I wanted to include Luttinger’s paper in the book we realized it wasn't correct. We called up Luttinger and convinced him that there was an error. Eventually Dan and I figured out the correct solution, which contained bosons in a fermionic model. It was the first example in condensed matter physics of so-called "bosonization." As we later learned, the existence of bosons had in fact been known to some people earlier in the context of the Thirring field theory model.

I stayed at Yeshiva for two years and had one graduate student, Michael Flicker, who finished and left. One thing I remember about Yeshiva was the Vietnam War, which was well underway then; young people had to go to Vietnam. There were lots of protests at universities across the country, much more than during the Korean War. There was one exception that I knew about—Yeshiva—but there must have been others.

What made it the exception?

Almost everybody—outside of the Belfer School—was in favor of the war.

The Orthodox generally hold right-wing views.

They were strongly in favor of the war. Another event I remember was a meeting of the whole university in the big auditorium. Everybody wanted to attend it to debate the Vietnam War. More or less all of the people in the university who were opposed to the war were a few of us in the Belfer Graduate School. One of us presented the anti-war view on the stage, and the pro-war side was represented by a professor of philosophy, who spoke about what a great thing the war was, and how we had to uphold our country, and so forth. Most of the students there were clearly in favor of this professor’s view. The people who were opposed to the war were tagged as communists and unpatriotic. I felt terrible. It's not so easy to imagine unless you've actually been there and experienced this kind of thing firsthand. At almost every other university the students were rising in protest.

Did this event, besides the commute, encourage you to think about moving on?

I was perfectly okay with the Belfer School. I had to teach an undergraduate course in physics, which I didn't mind. But I thought, why not go back to Boston? Driving every day from Yorktown Heights to northern Manhattan began to get to me and I managed to get a job at Northeastern University in Boston. I felt at home in Boston for some reason.

Ancestral homeland.

My job was to help build up the physics department there. There was Dick Arnowitt, who had written an important paper on general relativity theory with Stanley Deser. There was Roy Weinstein, who did experimental particle physics. And there were also people who did not do any research but who taught a lot. Later on, I brought Fred Wu to the faculty from Virginia.

It was at Northeastern that I did some of my best work. One paper, with Wu, *Absence of Mott Transition in an Exact Solution of the Short-Range, One-Band Model in One Dimension* contained the exact solution of the one-dimensional Hubbard model of electrons. This is my second most quoted paper.

I also solved the problem of the entropy of Linus Pauling’s model of ice, but in two dimensions instead of three. When I wrote the paper about this, I was so excited I forgot to mention that I learned about this model from John Nagle, who had been considering coming to Northeastern. He was doing numerics on the Pauling model and his estimate was extremely close to the exact value that I later found.

The Pauling model has an amazing history. In 1936 two chemists, Giauque and Stout, in a difficult experiment, measured the entropy of ice at absolute zero temperature and found that it is not zero, as the third law says it should be. Pauling had the idea that the entropy was due to the disorder of the hydrogen atoms in the ice, and he proposed a model for this—in three dimensions. Nagle and many others tried to calculate the corresponding model in 2D as well as in 3D. It is a long story, but Andrew Lenard figured out that the two-dimensional version of Pauling’s problem was mathematically equivalent to the 3-color problem on a large checkerboard: if you have three colors of paint at your disposal (say red, black and yellow), in how many different ways can you color the squares so that adjacent squares always have different colors—as in a map? My exact answer is *(4/3)*^*{3N/2}*, where *N* is the number of squares in the board. Note that *(4/3)*^*{3/2}* = *1.5396*... Pauling’s estimate was 1.5, which is pretty close, and Nagle’s was even closer! My exact solution started an industry which is very active even today. Many other properties of this model have been solved and are being addressed. Generally, these are called *six-vertex models*, since 6 is the number of different colorings that can occur around any given vertex of the checkerboard once one of the colors is fixed. Boltzmann’s formula says that the entropy of ice, *S*, equals Boltzmann’s constant *K* times the logarithm of the number of configurations; in 2D this gives *S = K N (3/2) log (4/3)*. The experimental entropy of real ice is very close to this. This theoretical prediction for ice is one of the most accurate in all of physics. Soon after, I got an offer from MIT, and I debated whether or not to leave Northeastern.
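The 3-coloring count described above can be illustrated numerically. This is my own sketch, not from the interview: it checks the constant *(4/3)^{3/2}* directly, and runs a small transfer-matrix count of proper 3-colorings on a cylinder, whose per-site entropy should already be in the neighborhood of the exact value at modest width. The width used (6) is an arbitrary illustrative choice, and finite-width values only approximate the exact bulk entropy.

```python
# Sketch: Lieb's square-ice constant W = (4/3)^(3/2) and the per-site
# entropy s = (3/2) ln(4/3), compared with a transfer-matrix count of
# proper 3-colorings on a cylinder built from periodic rings of width w.
from itertools import product
import math
import numpy as np

W = (4.0 / 3.0) ** 1.5                 # Lieb's exact constant, 1.5396...
s_exact = 1.5 * math.log(4.0 / 3.0)    # entropy per square, in units of Boltzmann's constant

def entropy_per_site(w, colors=3):
    """ln(lambda_max)/w for the 3-coloring transfer matrix on rings of width w."""
    # States: proper colorings of a ring of w squares (cyclic neighbors differ).
    rings = [r for r in product(range(colors), repeat=w)
             if all(r[i] != r[(i + 1) % w] for i in range(w))]
    # Two successive rows are compatible if they differ at every site.
    T = np.array([[1.0 if all(a[i] != b[i] for i in range(w)) else 0.0
                   for b in rings] for a in rings])
    lam = max(np.linalg.eigvalsh(T))   # T is symmetric; Perron root is the largest eigenvalue
    return math.log(lam) / w

est = entropy_per_site(6)              # finite-width estimate of s_exact
```

The finite-width estimate carries a correction that shrinks as the width grows, so `est` is only expected to be close to, not equal to, `s_exact`.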

Were you happy at Northeastern?

I was quite happy there. Fred Wu and I did the work on the 1D Hubbard model, and we wrote a long review of the ice model and its many properties. But in the end, I decided to cross the Charles River.

Did it feel like the same place from your undergraduate days, or did it feel different?

It felt the same, but, of course, from a different perspective. I was now a professor instead of a student.

And it was also 1968 and not the early 50s.

That's right. It was a much more active place. Weisskopf was the head of the physics department. But I was recruited by Gian Carlo Rota to join the applied math group.

Did you get involved in any anti-Vietnam activities at MIT?

Yes, there was a lot of exciting activity. There was the *March 4, 1969, One-Day Work Stoppage*, which started at MIT, and with which I was involved early on. Recently someone sent me a couple of photos of myself giving speeches. I noticed your interview with Robert Jaffe, whom I met there. He was another involved professor. In that interview he gives a great summary, but it should be stressed that the work stoppage was not only against the Vietnam War, although it was motivated by the war. It was about the misuse of science (atom bombs, napalm, etc.) and against the military-industrial complex and the use of scientific research for bad ends.

The Lincoln Lab was in the middle of all this.

Lincoln Lab, and the Draper Lab too. They both still exist. Draper was making gyroscopes for guided missiles and things like that. In Lincoln Lab at least there were some purely academic projects. There were talks at the meeting about how to divest MIT of these labs and how to persuade scientists not to devote themselves to this war-making potential. We didn't particularly feel that we could do anything about the Vietnam War, because there were plenty of protests already throughout the country. We decided to couple the war and science research together. When the earlier Korean War came along, the country realized for the first time that we had a well-prepared military and might as well use it. Unlike the onset of WWII, America was not attacked by the Koreans—we just decided to go in there and push back the communists. If we didn't, all the ‘dominoes’ would fall. We were told that we were protecting ourselves from communism. We lost the Korean War, just as we lost the Vietnam War later. The Koreans suffered greatly. Seoul was bombed to smithereens. This is not talked about too much, but a lot of Koreans died. The North Koreans remember it very well.

What new research did you take on when you got to MIT?

I continued working on the six-vertex ice model. Rodney Baxter came as a visitor for a year or two and together we worked on this model, but then he left for Australia and solved the 2D 8-vertex model on the boat going over. Douglas Abraham was also at MIT. We worked on the F model and the KDP model. All of these many models are similar, except for allowing different weights to the six different coloring configurations that can occur at each vertex. The models have unexpected, unusual phase transitions and form a major new class of models separate from the Ising type models.

With Joel Lebowitz, I worked on the existence of the thermodynamic limit with Coulomb (i.e. electrostatic) forces. One of the early problems in statistical mechanics is to show that if you have particles in a box, like electrons and nuclei, interacting with one another, and you add more and more particles and put them into ever bigger boxes, but you keep the density of particles fixed, then in the limit there will be a uniform homogeneous system. This uniformity is not the case for gravitating particles, as we see by surveying the universe. You would like to know that all the thermodynamic properties, such as entropy, scale proportional to the number of particles, i.e. to the size of the system. How do you know that these properties do not keep oscillating with size in some way? This problem had been successfully solved mathematically for short range forces, but what people didn't know how to do was prove this for particles having electric charges, like nuclei and electrons, where the forces are now very long range, like gravitational forces. So, all the strategies for proving the existence of the thermodynamic limit don't work, because the Coulomb interaction is just too long-range. Somehow, the positive and negative charges manage to cancel each other out, and you don't actually see the charges. When I look at a piece of paper, I don't see that it is full of charges. If you could separate all the charges in this piece of paper, it would take a huge amount of energy. This was a real problem, and Joel and I figured out how to solve it in our 1969 paper, *Existence of thermodynamics for real matter with Coulomb forces*. That was a significant step in statistical mechanics. The earlier energy lower bound by Freeman Dyson and Andrew Lenard and the importance of the Pauli principle was an essential ingredient in our work.

Another important topic is the *Temperley-Lieb algebra* of 1971, which has to do with knot theory or with counting non-intersecting loops on a square lattice.

When did you meet Neville Temperley?

I had known him for some time, maybe since the mid-60s. But the work here was done when I moved to MIT, shortly after I had solved the six-vertex ice model transfer matrix using the Bethe Ansatz. Neville was very interested in that solution because he was always thinking about statistical mechanical models. He was a very good mathematical physicist and deserved much more credit than he got. He had an idea and we worked it out. Imagine taking a square lattice, a grid, on which you want to trace paths that connect specified points, with the condition that the paths can touch but not cross. You want to count the number of loops and their lengths. The simple transfer matrix idea won’t work for this problem. Neville thought he could see a way forward, so I went to visit him in Swansea, where he was at the time. We figured it out and wrote a paper with a long title, which became popular. The method was named the Temperley-Lieb algebra, perhaps by the knot theorist Louis Kauffman. We did not even know that we had discovered an algebra. It turns out to be useful in lots of contexts.
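The defining relations of the algebra can be checked concretely. This is my own sketch, not notation from the Temperley-Lieb paper: it uses a standard spin-chain representation of the generators, and the parameter value q = 2 and chain length n = 4 are arbitrary choices for illustration.

```python
# Sketch: the Temperley-Lieb relations
#   e_i^2 = delta * e_i,   e_i e_{i+-1} e_i = e_i,   e_i e_j = e_j e_i (|i-j| >= 2),
# verified in a standard spin-chain representation with delta = q + 1/q.
import numpy as np

q = 2.0
delta = q + 1.0 / q
n = 4                                    # number of spin-1/2 sites

# Local generator on two sites: E = |psi><psi| with the (unnormalized)
# vector |psi> = sqrt(q)|01> - (1/sqrt(q))|10>, so that <psi|psi> = delta.
psi = np.array([0.0, np.sqrt(q), -1.0 / np.sqrt(q), 0.0])
E = np.outer(psi, psi)

def gen(i):
    """e_i acting on sites i and i+1 of the n-site chain (i = 0 .. n-2)."""
    return np.kron(np.kron(np.eye(2 ** i), E), np.eye(2 ** (n - i - 2)))

e = [gen(i) for i in range(n - 1)]       # the generators e_0, e_1, e_2
```

Because E is a rank-one projector scaled by delta, the relation e_i^2 = delta * e_i is immediate, and the braid-like relation e_i e_{i+1} e_i = e_i is the nontrivial one this representation satisfies.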

An important collaborator for many years was the late Ole Heilmann. Our big 1970 paper *Theory of monomer-dimer systems* has since found several applications.

In 1971, I met Barry Simon at a summer school in Les Houches, France. He was a young star professor at Princeton. While I was still at MIT, and long after I came to Princeton we worked on the *Thomas-Fermi theory* of atoms and molecules.

In the early days of quantum mechanics, in 1926 to be exact, Llewellyn Thomas and Enrico Fermi independently hit on an approximation to describe a molecule or a large atom with many electrons. The early success of quantum mechanics was understanding the one-electron hydrogen atom but describing a 20 or more electron atom is another matter. Their idea was to treat the many-electron system statistically. No one knew to what extent their idea was exact, whether their equation had a solution and, if so, whether it was unique. And exactly what did the theory predict? Barry and I set out to answer these questions, and by 1973 we had rigorously answered them all. At Princeton, I continued with TF theory for a few more years, working with my first Princeton student Rafael Benguria, who became a Professor of Physics in Santiago, Chile. This TF theory is a big part of my mathematical physics life and Barry’s, too.

The work with the late Walter Thirring, the *Lieb–Thirring inequalities*, also falls in this same period at MIT; it is still a very active field of research. Walter had a huge influence on my scientific life. He was a professor of Physics in Vienna, and we became great friends and visited each other often. In addition to his scientific work he did many things for modern mathematical physics. He was the first President of the *International Association of Mathematical Physicists*, and he wrote and taught a four-volume, four-semester course on mathematical physics, which is still a cornerstone of the field. For a while he was head of the theory division at CERN.

The 1975 work with Walter reveals some of the fundamental facts about quantum mechanics, which are still not very well-known except to people with a mathematical physics bent. But it should be taught to everybody. The Schrödinger equation for one particle, like the electron in a hydrogen atom, tells you that the kinetic energy K plus the potential V equals the total energy E. Depending on what V is, you may or may not have negative E solutions. Sometimes you do, sometimes you don't. The interesting case is when you do, because those solutions are the bound states. For many electrons, as in a large atom, we need to know not just the most negative E but the sum of all the negative energies. While many estimates had been given for the existence of one negative E, no useful rigorous estimate existed for the sum of all the negative energies. This is what we provided. It forms the mathematical basis of Thomas-Fermi theory as well as of many other applications. That was one of the high points of my period as a professor at MIT.
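For readers who want the statement in symbols, here is the standard textbook form of the bound (an editorial addition, not part of the spoken interview), for a Schrödinger operator in three dimensions:

```latex
% Lieb--Thirring bound (dimension d = 3, power gamma = 1):
% the negative eigenvalues e_j of -\Delta + V satisfy
\sum_j |e_j| \;\le\; L_{1,3} \int_{\mathbb{R}^3} V_-(x)^{5/2}\, dx ,
% where V_-(x) = \max(-V(x), 0) is the negative part of the potential
% and L_{1,3} is a universal constant independent of V.
```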

While I was at MIT, around 1972, I wrote a paper with Derek Robinson on what is now known as the *Lieb-Robinson bound* and is well known among condensed matter theorists. It turned out to be a sleeper. It only became well known about ten years ago and now it's widely used. It refers to a certain kind of solid called a quantum spin system, in which each atom has a spin associated with it. By poking one of the atoms with my finger a stir will be created in the lattice, which will propagate outward—like dropping a stone in water. And like dropping a stone in water, it will expand out with some speed. But the fact that this happens with spin systems was new. Nobody had any idea that there was a speed associated with the question as to how rapidly information gets carried in such a system. For example, in heat conduction, heat travels instantaneously. It takes a while to build up, but there's no clear front as there is when you drop a stone in water. That there is a maximum speed of propagation in such a system was a completely unexpected finding. It's used very frequently now in theoretical calculations in condensed matter physics.
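In modern notation (an editorial addition, using the standard textbook symbols rather than anything from the interview), the bound says that two local observables A and B, supported a distance d apart, nearly commute outside an effective light cone:

```latex
% Lieb--Robinson bound for a quantum spin system with local interactions:
\| [A(t), B] \| \;\le\; C\, \|A\|\, \|B\|\, e^{-\mu\,(d - v|t|)} ,
% where A(t) is the Heisenberg time evolution of A, d is the distance
% between the supports of A and B, v is the Lieb--Robinson velocity,
% and C, \mu are constants depending only on the interaction.
```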

Another major result with Simon in 1974, while I was at MIT, was to show that the Hartree–Fock theory for atoms and molecules, which chemists and physicists like to use, actually has solutions, at least in electrically neutral systems. This was a difficult, very non-linear mathematical problem.

Another currently active area of research goes back to work I did in 1973 with Klaus Hepp from Zürich. We took up a model, due to Robert Dicke, of what was originally a maser but later came to be called a laser (it's really the same thing). Dicke was a remarkable scientist—a great physicist on many levels. He really deserved the Nobel Prize, but he did not get one. Hepp and I took his model very seriously, solved certain aspects of it exactly, and discovered that it had the possibility of a phase transition as a function of temperature. On one side of the phase transition this model is not lasing at all. On the other side, it spontaneously lases. The number of photons in the system goes from an order of one to an order of N, the number of atoms in the system—just like that. Hepp and I wrote several papers on the subject and that transition is now a popular topic.

In 1973, I also proved the *Strong Subadditivity of Entropy (SSA)*. Few physicists paid any attention to it back then, although the mathematics community instantly picked up on it. Now it's very well-known among the particle physicists, especially those who deal with black holes. I wrote the paper with Beth Ruskai, but it’s based on all the mathematics I had done in my 1973 paper, *Convex trace functions and the Wigner-Yanase-Dyson conjecture*. Some people consider that mathematics paper to be the mathematical foundation of modern quantum information theory.

Physical systems have entropy, and SSA says something about a system that's made up of two or three subsystems. Suppose you have a glass of water here, and a glass of whiskey there. I can think of that as one system if I wish. If they are not interacting in any way, the entropy is just the sum of the two individual entropies. But if these two subsystems, whiskey and water, are interacting at all, then the total entropy is always less than the sum—this is subadditivity of entropy, and this fact is mathematically provable on the basis of a very general hypothesis.

What if you have three subsystems instead of two? You can compare the entropy of all three together, or two at a time, or just one at a time. There's a relation of inequality among them, and that's Strong Subadditivity. SSA was long known for classical mechanical systems but was conjectured to hold as well for quantum mechanical systems by Oscar Lanford and Derek Robinson in 1968; David Ruelle told me about it. Several people thought about proving the conjecture, but no one had any idea even how to get started on a proof. Oddly, it's one of the few properties of entropy that carry over into quantum mechanics from classical mechanics. Several other, simpler relations don't. For example, in quantum mechanics, the entropy of the whole system could be zero, whereas the entropy of subsystems could be quite large. So, if you believe in the idea that there's just one wave function of the whole universe, the entropy of that single wave function in quantum mechanics is zero.
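In today's standard density-matrix notation (an editorial addition for clarity), subadditivity and strong subadditivity for a system with parts 1, 2, 3 read:

```latex
% Subadditivity for two subsystems:
S(\rho_{12}) \;\le\; S(\rho_1) + S(\rho_2)
% Strong subadditivity (Lieb--Ruskai, 1973) for three subsystems:
S(\rho_{123}) + S(\rho_2) \;\le\; S(\rho_{12}) + S(\rho_{23})
% where S(\rho) = -\mathrm{Tr}\,\rho \log \rho is the von Neumann entropy
% and \rho_{12}, \rho_2, etc. are the reduced density matrices.
```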

What were the circumstances of your moving to Princeton from MIT in 1975? Were you looking to leave, or this was a surprise?

Strong subadditivity certainly helped. Simon and I had been working on Thomas-Fermi theory and on Hartree-Fock theory. For these reasons I got an offer from Princeton. As it happens, I love Boston.

Did your research change at all as a result of moving to Princeton?

I had more interaction with Simon, but he eventually left for Caltech.

In which department would you be more likely to have graduate students?

I had nine Ph.D. students in my life, five in physics and four in math. But they all worked on mathematical physics, on the kinds of things I was interested in. That was one of the attractions of Princeton; there were no boundaries between math and physics.

Right. So, what were some of the first things that you started up on when you got to Princeton?

I don't know which was first, but Barry and I continued our work on Thomas-Fermi theory. It took a long time to write that paper.

Why so long?

There were a lot of theorems to prove. In 1981 I wrote a summary in *Reviews of Modern Physics* that includes some additional work on Thomas-Fermi theory with my student Benguria. In Thomas-Fermi theory, two atoms far apart influence each other a little, and the shift in the total energy depends on how far apart they are. But how does the energy difference decay with distance R? People had believed, and convincingly argued, that it’s one over R to the seventh power. We proved that it was one over R to the eighth, which again shows the usefulness of rigorous mathematics. Other results in 1981 include a paper with Michael Aizenman on the third law of thermodynamics and one with Alan Sokal on Lee-Yang zeros. Another collaborator in this period was Jürg Fröhlich. Together with Simon and Robert Israel we wrote several extensive papers on *Reflection positivity* and its use in statistical mechanics. Do you want me to tell you about the 1987 AKLT (Ian Affleck, Tom Kennedy, Lieb, and Hal Tasaki) model?

Definitely.

It is one of the most well-known models in condensed matter physics. Ian Affleck, a card-carrying physicist and member of the Princeton Physics department, started it. Originally, he was in field theory but branched out to condensed matter theory. The other collaborators, the K, L and T, are all mathematical physicists. I want people to know about these backgrounds, because the work wouldn't have been done otherwise.

It required the four of you?

As it turned out, different math/phys backgrounds were needed to work together on that problem and we certainly worked together a lot.

What did you learn as a result of that work about the spin?

It was a spin-one model in which you could know the lowest energy state exactly. The most important thing was to prove an energy gap above the ground state. Nobody had managed to find a gap up to that point. A gap in energy between the ground state and the next lowest state was predicted by Duncan Haldane for spin one, and he received a Nobel Prize for that among other accomplishments. Much earlier Schultz, Mattis and I at IBM had proved the *absence of a gap* for spin 1/2. Our AKLT paper was mentioned nine times in the Nobel Prize writeup, for it essentially proved Haldane right on a rigorous level.

In 1989 I wrote *Two theorems on the Hubbard model*, which still attracts much attention, has led to real laboratory experiments and might eventually have a technological application. It uses a new kind of reflection positivity called ‘spin reflection positivity’ and proves that certain kinds of solids automatically have bulk magnetism in their ground states. The existence of ferromagnetism still awaits a general, rigorous foundation; this paper gives one of the few clear examples.

One of my favorite works, much later—in 1999—is the paper with Jakob Yngvason (with whom I had proved *4 pi rho a*) on entropy in thermodynamics. We produced a formulation of the meaning of entropy that starts with nothing but a few simple axioms and develops classical thermodynamics. Entropy always exists, but what is it for systems in equilibrium, where it should be unambiguously defined? You would like entropy to have certain properties, such as that the entropy of a collection of isolated systems is the sum of the entropies of the separate systems. The definition of entropy should also account for the fact that after any process the total entropy goes up, not down. This is the second law of thermodynamics. It turns out that these requirements, plus simple axioms, give rise to a unique definition of the entropy of every equilibrium system, however complicated. Our construction of entropy does not need Carnot cycles or chaos or any mechanical model. It just follows from known processes in nature. This was all written up in our 1999 paper *The Physics and Mathematics of the Second Law of Thermodynamics*, although there are shorter summaries in the AIP Physics Today and the AMS Notices. Yngvason and I are happy to see that our perspective is slowly catching on.
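The core of the Lieb–Yngvason formulation can be stated compactly (an editorial sketch in the notation of their paper, where '≺' denotes adiabatic accessibility):

```latex
% Lieb--Yngvason: entropy from the order relation of adiabatic accessibility.
% For comparable equilibrium states X and Y there exists an entropy function S,
% unique up to an affine rescaling and additive over subsystems, such that
X \prec Y \quad \Longleftrightarrow \quad S(X) \;\le\; S(Y) ,
% i.e., Y is adiabatically reachable from X exactly when entropy does not decrease.
```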

What was next?

One of the young people whom I mentored at Princeton was a brilliant young Dutchman named Herm Jan Brascamp. We proved a whole slew of mathematical inequalities whose existence nobody even suspected, which are called the *Brascamp-Lieb inequalities*. The simplest version, called Young's inequality, had been known, and is very important in mathematical analysis. But ours are a vast generalization. Young's inequality concerns three functions. Our 1976 inequalities deal with arbitrarily many functions. There were limitations to our original inequality, which I removed in 1990—long after I moved to Princeton—in *Gaussian kernels have only Gaussian maximizers*. People in statistical mechanics, quantum mechanics, computational mathematics, and computer science know and use these fundamental inequalities. Brascamp preferred being a schoolteacher to being an academic and returned to The Netherlands.

A significant result for chemists was worked out at Princeton in 1981 with Steven Oxford, who was a postdoc for just one year. It's called the *Lieb-Oxford bound for the exchange-correlation energy* and it's one of the few solid results in quantum chemistry. This is a field in which chemists try to explain the structure of atoms and molecules from the principles of quantum mechanics. The structure of atoms and their binding is certainly very important in technology, and the ability to produce molecules with prescribed properties is key.

Why are there very few solid facts?

By solid, I mean a rigorous benchmark. Walter Kohn got the Nobel Prize for density functional theory, as it's now called. Nobody can solve Schrödinger’s equation when there are more than a few electrons. Kohn wrote down a scheme of approximations that ought to yield useful paths to correct answers. It's not based on any rigorous treatment of quantum mechanics, however. It is based on what is ‘reasonable.’ But remember, you're trying to compute numbers, solid numbers, and you'd like to be more than just reasonable. Hartree–Fock theory and Thomas-Fermi theory are examples of density functional theories. In other words, density functional theory had existed long before Kohn. But he woke people up to this kind of approximate approach aided by massive computers. It was very important, but still, nobody knows what the ultimate density functional theory is. You have to guess it, and it’s like trying to guess what an elephant is by discussing its different body parts. What is needed are rigorous checks on the various theories as they come along. The Lieb-Oxford bound does just this. Some theories have been thrown out because of failure to satisfy the bound.

Another marvelous experience for me at Princeton was working with Fred Almgren, a former US Navy pilot, who was a great mathematician and who died at a relatively early age. I was his most frequent collaborator. We had deep results on two interesting topics: singularities in harmonic maps and rearrangement inequalities. The former is closely related to the theory of singularities in liquid crystals.

In 2001, by another stroke of good fortune, Robert Seiringer, who had just done his Ph.D. with Jakob Yngvason in Vienna, came to work with me, first as a postdoc and then as an Assistant Professor at Princeton. He is now a professor in Austria. Fairly quickly he solved a problem I gave him, which was to prove that two conjectures, one of which was famous and the other not, are equivalent: a proof of either one implies the other. The famous one, three or four decades old, was announced by D. Bessis, P. Moussa and M. Villani at Saclay in France. The BMV conjecture was proved in 2011 by the German mathematician Herbert Stahl, and thus it proved our second conjecture about sums of matrices. That one still has no independent proof as far as I know; thus, there are two independent fields that are connected. Robert and I have been writing many other papers together, especially on the Bose gas, and we continue to this day, often with Mathieu Lewin, another brilliant young professor in Paris.

Alessandro Giuliani is another important mentee with whom I continue to collaborate. I also work with his student Ian Jauslin, who is a very fine mathematical physicist and a brilliant computer virtuoso.

Did anything change when you became emeritus three years ago? Did you slow down at all?

No, I still continue to do research every day. Not as well as I used to, of course. I was 85 when I became emeritus, and I'm now 88. So, I keep chugging along.

You're doing great for 88. There's no doubt about it. For the last part of our talk, let's think broadly and retrospectively. You have worked with so many people. You have so many collaborators. Who sticks out in your mind in terms of bringing out the best in you?

They're all great and each brought out different interests in me. A great postdoc and then long-term collaborator is Michael Loss. We wrote several papers on the quantum many-body problem and quantum electrodynamics. We also co-authored a book called *Analysis*, which is a consistent seller with the American Mathematical Society. It's a unique textbook on mathematical analysis.

An important colleague is Eric Carlen, with whom I have written very many papers. One of them contains a new proof of SSA. Another prolific, long-term collaborator, not mentioned earlier, is Rupert Frank. Ingrid Daubechies, Anna Vershynina, Sabine Jansen and Beth Ruskai came as postdocs. A large part of my opus is coauthored with Frank. Each person has a unique view of the world, so to speak; some more physical, some more mathematical.

What kinds of collaborations have given you the most satisfaction, both in terms of the elegance of using the math to propel physics forward, or simply by understanding new ways that physics works?

Sometimes I see things more physically, and sometimes I see them more mathematically. And there was a period when I was engaged in organizational activities. There's the International Association of Mathematical Physics. Although the original idea came from my good friends in France, Moshe Flato and Daniel Sternheimer, I was one of the initial organizers. Every three years we hold a big conference somewhere in the world and a big prize is handed out. It started officially in 1976 with Walter Thirring as the first president, followed by Huzihiro Araki from Kyoto and then myself in 1982 and again in 1997. We have a newsletter, try to raise money for support of conferences, and put mathematical physics in the spotlight. I put a tremendous amount of effort into that.

What were your motivations?

The motivation was that mathematical physics, in the sense that I'm using the word, goes back to the 50s only, except for earlier bright spots like Wigner. There was a first math physics meeting in Moscow in 1972, but it wasn't called “International”—that concept didn't really exist then. Araki wrote the constitution for the IAMP. It's one of the few international associations where *the members are people*. Usually, the members of an international organization are scientific societies, not individuals. We all believe that mathematical physics contains a good part of honest physics and a good part of honest mathematics. And the fusion of the two produces some interesting results—or aims to do so. These are people who speak the language of professional mathematicians and the language of physics, which nowadays is mostly about condensed matter physics in one form or another, like statistical mechanics or atomic physics. Not too much particle physics, so far. That's been taken over by string theory. And there's not too much quantum field theory left among mathematical physicists, perhaps because it's mostly been thought of as too hard.

Why too hard?

It is difficult to solve the problems in quantum field theory rigorously. Physicists use quantum field theory to make perturbative calculations and so on, and they're very successful in many cases, especially in quantum electrodynamics. No doubting that. But the foundation just isn't quite there. There are all these infinities in quantum field theory. Nobody knows what to do with them. Do you just sidestep them? And the way you sidestep them is by renormalization. Even quantum mechanics doesn't have a completely solid foundation. The question of magnetism and the interaction between the spins of one atom with the spin of another atom leads to potential infinity—as Dyson observed—and people sidestep it by ignoring it. You have lots of foundational questions even in ordinary atomic physics.

Do you think any of Einstein's original misgivings about quantum mechanics hold true?

I'm not an expert on Einstein’s misgivings, so I can't say that for sure. But quantum mechanics is a lot more successful than anybody originally thought. Entanglement is the most important aspect of quantum mechanics. It was Schrödinger who pointed that out first. Today, every physicist talks about entanglement. It's kind of silly in the sense that we managed to do physics for decades with hardly a whisper of this word. At a certain point, the word got into the everyday vocabulary. People see entanglement everywhere. Up to now we've been doing very nicely in atomic physics and elsewhere without entanglement, but now you can't write a paper without entanglement in it somewhere. I find that this concept is not needed for most applications of quantum mechanics. What Einstein thought about entanglement is now not terribly relevant, I believe. There are other foundational questions that I think are very relevant. There's a belief among most physicists that the problems with infinities in quantum electrodynamics, for example, will be solved when we have a better, superior theory such as string theory. Then this big theory, reaching down into quantum mechanics, will solve the problems. I think this is against history.

Why?

Because every science, in my opinion, deals with a slice of reality. There's the reality of biology and there's mathematical reality, for example. For atoms, there's chemistry. If you want to be successful, you have to deal with the problems at the same level as the problems themselves. You are going to solve problems in biology by biological methods, and probably mathematics is not going to help too much. Even though people write papers about mathematical biology, they are not likely to solve the fundamental questions of biology. Within any given field, like physics, every level has its own explanatory power—nuclear physics, atomic physics etc. At each level there's an internally consistent picture within that level. These levels may be connected, but you don't expect one level to solve the problems on another level. You don't solve the basic problems about atoms by looking at nuclei, or the problems of black holes by looking at atoms or whatever. Each level of scientific thought must exist on its own. Just the way mechanics existed when Newton and Galileo invented it. I can do experiments in the laboratory without knowing about string theory or black holes or whatever. I think that's fundamental to science and its history. It's epistemologically questionable, I believe, to require a theory of strings to answer the infinities question in QED. There must be a self-consistent theory which doesn't bring in deus ex machina or machinae, and we haven't quite got there yet, but we're doing very well.

What have the demographics in mathematical physics been like since you started in the field? Has it grown? Has it been stable? Is it dwindling?

It's definitely growing, but slowly compared to other fields. There are more young people entering mathematical physics and trying to make a career. But, sadly, there are not many positions available beyond the postdoc level.

Have you noticed that the younger generation is utilizing computational powers in really exciting ways? Can mathematical physics problems be solved with supercomputers?

I was at first against the use of computers for that purpose, but I changed my attitude a long time ago. A lot of problems can be approached computationally. In fact, I just finished a paper on the Bose gas, which will appear in *Physical Review A*, with Eric Carlen, Markus Holzmann and Ian Jauslin, where we used computers heavily. We had a theory and we wanted to test it against data. There is only limited experimental data, so we used the output of well understood Quantum Monte Carlo computations as data. Our theory agreed with that data and thus has been shown to calculate things well beyond the low-density theories.

Are there any problems that you've never been able to solve that gnaw away at you?

There are several. Here is one about the ionization of atoms. Given a nucleus of positive charge Z, how many electrons of negative charge 1 can bind to it and form an atom? Experimentally the answer is Z + 2, at most. It used to be thought that this ‘fact’ is obvious because additional electrons would be blown away by the Coulomb repulsion from an object with net charge -2. This explanation is wrong, as Rafael Benguria and I proved in 1983, because if electrons were bosons instead of fermions the answer is (1.2)Z, and the ionization would be (0.2)Z, which is huge. The Pauli principle is essential here. Several people have worked on trying to prove Z+2, or even Z+constant, but the problem is still very much open. By the way, our proof of (1.2)Z for bosons uses Sobolev’s inequality, which is a deep fact from mathematical analysis.
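For reference, the situation can be summarized in symbols (an editorial addition collecting bounds stated in the literature, including Lieb's later 1984 bound, which is not mentioned in the interview):

```latex
% Maximum number N_c of electrons a nucleus of charge Z can bind.
% Observed experimentally, and conjectured, for fermionic electrons:
N_c \;\le\; Z + 2
% Best general rigorous bound (Lieb, 1984), valid for any statistics:
N_c \;<\; 2Z + 1
% For bosonic 'electrons' (Benguria--Lieb, 1983), asymptotically as Z \to \infty:
N_c \;\approx\; 1.2\, Z
```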

Looking to the future, what else do you want to do?

Nothing really different. I just want to continue. I have collaborators and so long as they don't throw me overboard, I'm happy. What I have noticed is that there are periods in one’s academic life where certain topics and collaborators are at the forefront, and then they fade away into the background as a new topic comes up with new collaborators. It's not because of quarrels or anything like that. It's just that life progresses and so do academic interests. With me it’s a little bit like the famous clock tower of Prague. When the hour strikes the doors open and the carved figures come out one at a time and parade. As one disappears inside another one comes out. When they've all made their appearance the doors close.

Well Elliott, on that note, this has been a fun and epic conversation. We covered a remarkable amount of ground. And it was my pleasure.

Mine, too.