This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Dudley Herschbach by John Rigden on 2003 May 21,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview Dudley Herschbach discusses topics such as: his childhood and early interests in chemistry and physics; recruitment by Stanford University; decision to do graduate work with E. Bright Wilson at Harvard University; working on molecular beams at University of California, Berkeley; meeting Otto Stern; transition-state theory; Yuan Lee; things learned from Linus Pauling; consequences of winning the Nobel Prize in Chemistry; dimensional scaling; prizes he has won; grant support for research; advising students; biophysics; staying between physics and chemistry and in between theory and experiment; history of science; changes during his career and looking ahead.
We’re sitting in the office of Professor Dudley Herschbach, Room 35, Mallinckrodt Laboratory, 12 Oxford Street, Cambridge, Massachusetts. I have known Professor Herschbach for some time and so I’ll probably call him Dudley.
I hope so.
All right, Dudley, we’re going to start with your early life, and I’ll begin by asking this. Can you identify any specific events or influences as a child that sparked your interest in science?
Oh, yes. First I should say I was born in San Jose, California, June 18, 1932. My parents had both lived in California quite a while. In fact, my dad was born in San Jose, as I was, and so was his mother, so I am a third-generation native of San Jose. My parents both were high school graduates, but we didn’t even know anyone who’d gone to college and I certainly didn’t expect to. However, a key event that sparked my interest in science occurred just after I turned eleven, on a visit to my grandmother’s house in San Jose (she lived a couple of miles from us). There I saw an issue of National Geographic magazine with gorgeous star maps. The star maps were in an article by Donald H. Menzel of the Harvard College Observatory. It was certainly the first time I’d ever heard of Harvard, but it surely didn’t even register then. When she saw how intrigued I was with those star maps, my grandmother gave me that National Geographic and I soon began making my own little pin-prick copies of the maps. There was a locust tree in our back yard that became my observatory. I’d sneak out at night, climb up in the tree, peek at my pin-prick copies with a quick flick of a flashlight and pick out the constellations. Many years later a friend, who had heard this story, somehow found a copy of that magazine (July, 1943) and kindly gave it to me; I still like to look at those maps from time to time! The grammar school I was going to then, and it’s unbelievable if you know what Silicon Valley is like in California now, was in the orchards. We were bussed more than ten miles to get to school. The school had about 80 kids in a four-room building, so two classes per room. It had a little library, really just a bookcase, maybe four feet high and four feet wide with five shelves. One of my teachers pointed out there was a book on the planets there, so I read that avidly. 
I might have wanted to become an astronomer, but I had the impression there couldn’t be jobs for more than four or five astronomers in the world. But the star maps really were what got me interested in science. When I went on to high school, I eagerly took science and math courses just because I knew I wanted to learn something more, with no notion of preparing for college. But I was a good football player, and my coaches said “Of course you should go to college.” Then my teachers, since I was a very good student, began saying that too. The school, Campbell Union High, was small — fewer than a hundred kids in my class. Many were bussed for 25 or 30 miles. Most of the kids were like me, farm kids, and few expected to go on to college. At any rate, I was advised to take what was then called a college prep course, along with wood and metal shop courses called vocational training. That’s why I signed up for chemistry in my junior year. The teacher, John Meischke, was terrific. He had a Master’s degree from Berkeley, really knew his stuff, and was a fabulous teacher. I can tell you some tales about him if you’d like, but I should first back up a bit and mention the way it was in 1946, when I entered Campbell High School. It was right after World War II and many of the teachers had just come back from the war. They didn’t talk a lot about the war or sermonize, but somehow made the kids aware that our generation too would have serious work to do. For instance, the very first class I had at Campbell High remains a vivid memory. While waiting outside the door, I met Mino Yamate, a Japanese boy who became my best friend throughout high school. He’d just come from Colorado where his family had been in the internment camps there. Soon after we sat down in the class, in walks our teacher, Mr. Drummond, an impressively large fellow. The first thing he said was, “I don’t know much about Algebra.” Then he went on to say, “But I can tell you one thing.
In this class, if you calculate by the right method but get the wrong result, you’ll get no credit. I served in the Artillery Corps, and if we calculated by the right method but wound up shelling our own troops, we got no credit.” That established an attitude, maintained pretty consistently throughout Campbell High, that the standards were high. The teachers expected the kids to take responsibility for learning what they were supposed to. Within a couple of weeks, there were several kids in the class who had a better grip on Algebra than Mr. Drummond. But that was no sweat for him. As a former major in the Artillery Corps, he considered his job was to make sure the privates and corporals did things up to regulation. So he encouraged students who understood things to explain them to him and the other students. Well, of course, that was great for the kids that understood things; they learned it all the better. I suspect it was not bad for the other kids either, because they listened to peers. It worked pretty much that way all through high school. Chemistry was an exception because, the teacher, John Meischke, was really very, very good.
John Meischke was chemistry?
That’s right. He, too, encouraged the kids very much to think for themselves. He never lectured us very long, maybe 15 minutes at the start, and then we immediately, every day, went into the lab. The room had chairs in front and the lab was most of the room, so he’d have a little discussion with 20 kids or so and then send us right into the lab. Meischke would prowl around, asking questions of us individually while we were working. So you had to be prepared. Also, he gave us written tests every week; very challenging tests.
Was he a veteran?
No, he was not; he had some health problem that exempted him from being drafted. I remember a typical episode in his class. One spring day, we were going to be working with acids and bases, and he was telling us to be careful. He always wore a lab coat and safety glasses. I was probably dozing a little in the second or third row, and suddenly a girl in the front row screamed. Then we saw Meischke had slipped into the sleeve of his lab coat a skeletal hand, and then just gradually let it creep out. He didn’t say anything. Everybody got the message. Another time I remember that he said, “Excuse me a moment,” then walked to the back of the room. He came back a minute later and wrote the time on the board. He didn’t say anything. Then 15 or 20 minutes later we all smelled this ester, this strong fragrance, coming in. That touched off discussion about diffusion. That’s the way he did things. He made a great impression. Physicists might be interested to know what my high school physics was like. Our teacher, Mr. Noddin, was famous in Campbell as a former basketball coach. There were stories about his tantrums during games. He would jump up and down on his hat, and chew a towel, or so we were told. On the first day of his class he explained that his only contact with physics was holding the pole for the survey team that laid out the baseline for Mount Tamalpais. He would write on the board a series of words with dashes between them and say, “When you know those words, you know unit one.” Every demonstration he’d try was a fiasco. The kids would have to rush up and save him when he started pumping water through the vacuum pumps and things like that. But by then, as seniors, we were very self-reliant. His exams were all fill-in-the-blanks. He’d write on the board: “Newton’s Second Law states blank, blank.” He’d supply the prepositions and we were to fill in the other words.
Although his course would seem to be about as bad as you could get, the students were interested and we learned physics decently. I came to appreciate that when I went to Stanford. I took, in my first year, chemistry and then later physics.
Don’t get to Stanford yet.
Well, I just wanted to mention that when I got there I discovered how good my high school education had been, despite everything.
Well, let me ask you a different question. You now started out with a very interesting example. You got a National Geographic, and then you had some good teachers, but you were in this rural environment. You read about a lot of scientists, and they come from New York City and they go to the Bronx School of Science, and there’s a lot of intellectual stimulation in that city. You came from almost a farm.
You lived on a farm.
Yes, it was.
And you lived in a rural area.
We had cows and pigs and potatoes and a vegetable garden and all of that.
Have you ever thought that this puts you at any disadvantage, this rural beginning?
No, I think it put me to an advantage. For one thing, I loved to read. I learned to read before I went to school because I wanted to figure out what was in those balloons in the comic strips. I can remember being stretched out on the kitchen floor with the Sunday comics trying to figure out the little helpful words. So I had read a couple of books before I went to school at all. I didn’t go to kindergarten; I started school at age six, in the first grade, and I missed most of the first grade because I had terrible earaches, but I’d already read the Dick and Jane books and gone well beyond. We had a kids’ encyclopedia, History of the World and so on, and I had read two whole volumes of that before I went to school. I always read a lot, and the little town of Campbell had a library that I loved. I was there a lot. The librarian, Mrs. Vogel (the bird), would often give me books she’d set aside for me. Of course, there were many others I would just pick out. I remember that she once gave me an anthology of Russian literature. This was when I was maybe a sophomore in high school. It was about 2,000 pages printed on large-sized, thin pages. Since books were due in two weeks, I’d thought I was supposed to read it in two weeks and I did. It was an overdose, causing me to avoid Russian literature since, but I have a lasting impression of it as being almost relentlessly gloomy. Right across the street from the library was the Campbell Press, where I had a job all during my freshman year. I would set type and write articles. As a Boy Scout one of the things I did was a journalism merit badge. I thought as a freshman that I’d be a journalist. I loved writing articles and setting type, done by hand then. This, again, to leap ahead, is one reason I got interested in later life in Ben Franklin so much because I identified with him. 
In any case, it was the books and all that had huge impact, because again, I think the key thing is when you have a young person who does things on his own initiative, he takes ownership; he has a feeling for it and he makes all these little discoveries on his own. There’s just so much there. This library, for example, I could just see in my mind’s eye now; it just was almost home to me. I was so happy whenever I walked in and just looked around. I always found interesting and exciting things.
Here’s a different question. Can you think of any particular book in that reading period, as a child, a young person, that really influenced you?
Gee, there are so many that it’s hard to say exactly, but my mother knew how much I liked books and she gave me books every birthday and Christmas; fifty cents for these editions of classics like Robinson Crusoe and Swiss Family Robinson. There was a series of books that started with Indian Brother that I read and reread. They had nothing to do with science, but I guess they appealed because basically they proclaimed life is an adventure and there are lots of exciting discoveries to be made. That kind of thing I definitely got from all the reading I did, even though I can’t point to one single book that itself I would say transformed my life.
All right, you have described a good, early environment with good teachers, a good school. By the way, I went to a four-room schoolroom, too; two classes in every room. So how did you decide to go to Stanford? How did that happen?
Well, I was recruited pretty heavily as a football player. Berkeley, in particular, had a coach, Pappy Waldorf. I remember meeting with him as well as his end coach, Eggs Manske (wonderful names!). When Pappy learned I was interested in chemistry, he sent me to see Glenn Seaborg, who was on the athletic committee. This would have been in the spring of 1950. Stanford also recruited me. I remember going up there on a weekend and witnessing a game announced as with the “Stanford of the East;” Harvard University was coming out to play. Stanford had an unusually weak team that year. They won only one game. They beat Harvard 44 to 0. My high school team would have beaten that Harvard team. The return engagement was cancelled, and Harvard and Stanford haven’t played since. I was a good football player. I started as a freshman, already, in high school. I played right end, and you played both ways then; I loved to play both ways. I made All-County, All-Pop Warner, and things like that. Actually, I got telegrams from quite a few colleges that I hadn’t even applied to, congratulating me on being admitted. I was offered a football scholarship to Stanford; but I was also offered an academic scholarship, which was actually better. That’s probably not true anymore, I’m afraid, but it was then. In high school I also played basketball and tennis. Taking up tennis was a result of Mino Yamate’s influence as he played first singles on the team. I played first doubles with Harold Taylor. In our senior year we were co-captains of the football team; he was center, I was right end. In tennis, we were called the 1st Armored Division. We were big guys with big serves and just blew away our opponents. We were undefeated for three years. But what I started to say before I digressed was that I was terribly interested in football as an intellectual thing. 
I had shoeboxes full of 3x5 cards with all sorts of systems of plays, and I read lots of books by and about famous coaches and football players. You probably had the same feeling about academic work that I had in high school. I felt that I really had total mastery of all the academic subjects I’d studied. I had almost no inkling of what was ahead of me when I went to college because I didn’t have any way of knowing how much more there was; that I was just in the foothills of a great mountain range. When I got to Stanford as a freshman, the football team arrived a couple weeks early. So the first thing I remember was seeing all these guys out for football, and some of them were very good. But within three weeks some of the best talent wasn’t there anymore. For one reason or another, college wasn’t the place for them. So one thing I learned from football is that the people who are the most naturally gifted in any field may still not be the people who really are successful in the sense of really contributing and doing something noteworthy, because it takes a combination of things for that. Of course, you see that in science in particular. I remember that, during orientation week, before the start of classes, we freshmen dutifully went to a talk by a dean, held in a big auditorium. The Dean was saying how important ideas were, and I was thinking, “That’s a bunch of malarkey.” My parents, although they were always very encouraging and happy with whatever I wanted to do, and it was the same with all their six kids (I should have said I was the oldest of six), they were still dubious about my going to college, as the first kid, of course, to do so. Although we didn’t know anyone that’d gone to college, they’d heard stories of people who did and became arrogant and too proud to work with their own hands, and there were a lot of funny professors there with egghead ideas. 
The impression was that what you learn in college really didn’t have much to do with the real world, so you didn’t take it all that seriously; it was just a social kind of thing. College was for an upper class who felt it was necessary for their kids, but not real people. That was really their attitude. Of course, I learned they were partly right. But I very soon discovered there was much more to college. Anyway, I enjoyed playing football enormously, and indirectly it helped my academic performance as a freshman in a major way. I was so weary after football practice every afternoon that I couldn’t really study very long in the evenings. But I also had a weird schedule, as it happened. I had six 8:00 AM classes, MWF and TuThSat, and nothing then until noon, when I had five 12:00 classes MTuWThF, and then I had three 1:00 classes MWF. So I had 9:00-12:00 free every day. Well, it was obvious that was about the only weekday time I was going to have to study. Also, I soon discovered that near the freshman dorm there was the Hoover Tower. I wandered in there and found the spacious reading room nearly empty, so I started studying there. Well nothing could have been better. For four years I studied at the same desk in the Hoover Tower, in a beautiful wood-paneled and serenely quiet room. So I had those three hours a day. My roommates, of course, hardly saw me studying — this jock sacked out not long after dinner. One of my freshman roommates was from Lowell High School in San Francisco, which was regarded as among the academically strongest in the state. I was very surprised when I got A’s in all my courses, even an A+ in History of Western Civilization. None of my three roommates got any A’s, even the fellow from Lowell, who had been much more confident than I was. So all of us were astonished at my academic performance, which I think benefited a lot from the study regime imposed by football plus my unorthodox schedule.
They never saw you studying?
Well, hardly ever, because I did it in Hoover Tower even on weekends. To do so well surprised me, because I had been told by my high school teachers, “Well, you got all A’s in high school, but it’s going to be all B’s because it’s tougher in college.” But it wasn’t tougher. For instance, I loved history, to start with, and History of Western Civilization, which all freshmen had to take, was the most rewarding course I ever took, partly because it was taught entirely in sections; it wasn’t lectures. They had a wonderful syllabus and special library resources, and we went there to read. They told us every week to be prepared for our three discussion sessions. It just opened my eyes to so many things I had observed that seemed strange and mysterious; I had no idea where the immense variety of institutions and traditions came from. So I saw all of that in this course. It was a great course and I really seriously began thinking about majoring in history. But I loved science, and I knew I could read history already. It was clear to me that unless I went further into science I soon would not be able to stay connected with it and feel I at least understood it. My dad had urged me to be a doctor. He said, “Don’t be a lawyer; they’re all crooks. But doctors are okay.” I remember asking lots of my classmates who were planning to be doctors why they were doing it, and so many of them would say, “Because, well, it’s a good racket.” I didn’t like that. I came to feel that’s probably a good thing if you’re going to be a doctor, because you can’t be too softhearted. To me, I wasn’t cut out to be a doctor. I think because I may have put down chemistry in some preliminary query, I was assigned as an advisor Harold Johnston, who was an assistant professor then in chemistry. He had a huge influence on my later career.
Was he your Freshman advisor?
He was my Freshman advisor and then all the way through college.
Let me just ask this. You came out of a high school with having been influenced by a chemistry teacher, and then you got assigned, or did you ask for, a chemistry advisor?
Well, as I say, I think probably in some preliminary thing, where they want to know a probable or possible major, I may have put down chemistry because of Meischke. It was the most intellectually interesting experience I’d had in high school, other than the football plays that I’d worked out. I suspect that’s how I came to be assigned to Harold Johnston. I can remember still, going with several other advisees on the Sunday before classes started, to meet Harold Johnston. We met in a little room by his office. He was then a very shy guy, but he told us what a university was all about. Unlike the dean, he didn’t give us some business about how important ideas were. He said, “The University has three missions. It preserves knowledge,” well, that was obvious, with libraries; “It transmits knowledge,” and I knew about that; that’s teaching, “and it creates knowledge.” Well that was a completely new idea. I thought a university was just sort of a higher-level high school, and then he told us about research. He even told us something about his own research. I’m sure that’s the first time I’d ever heard of research. I remember his telling us about his field of chemical kinetics. You usually had to contend with a network of reactions, lots of things going on, and there were these intermediates that you couldn’t observe directly. I remember thinking, “Gosh, it’s wonderful that guys like him try to figure all of this out. It sounds practically hopeless.” And it was pretty hard in those days to ever know what you’re really doing. It is amazing what chemists figured out, regardless.
By the methods they had.
The methods they had, which all dealt with substances in bulk. You could vary the temperature, you could vary concentrations, and then you’d have special things like isotopes. More and more they were beginning to make use of spectra to follow molecules. In fact, that was one of the key things Johnston had introduced; following fast reactions microscopically.
Let me ask this. You said in something I read that you took ten courses a semester.
Oh, I’ll tell you how that came about.
All right, but just in the end, you ended up with three majors.
And you chose mathematics to be the major of record.
That’s right. Here’s how that happened.
In fact, wait before you do that. I think we’ll switch here. Okay, so ten courses a semester, three majors, and mathematics. Go ahead.
Well maybe I should fill in a couple of things on the way to that. First of all, I should explain that I played spring practice in football and then quit, for two reasons. One was that the rules changed and allowed unlimited substitutions. So in spring practice, unlike the fall, suddenly there were four times as many coaches and players were assigned to either offense (my case) or defense. It showed immediately that the game didn’t belong to the players anymore. People forget, but it used to be that substitution in football was just as limited as in soccer. You could send in a punter, but if the coach was seen making motions that looked like they were signaling a play, you were penalized. The whole philosophy was that the game was played on the field by the players, totally different than today. I didn’t like that. Also, I had found by then, the spring term of my freshman year, so many exciting academic experiences and some vision of more. I’ve mentioned the history course. The chemistry course I had was a good, solid course, but I was amazed to find, as I alluded to earlier, that my high school course had prepared me so well I really was totally on top of it from day one, no problem. But from Dr. Johnston I had learned about research. My English class was lively and satisfying. Finally, five days a week I had an excellent German course. The teacher, Mrs. Josephson, was such a sweet lady. Everyone in the class, about 15, all loved her. So I had found the academic experience that year very congenial, as it has been ever since. I had a summer job back at home after my freshman year, but after my sophomore year a key thing in my career occurred, because Harold Johnston invited me to be a lab assistant that summer in his lab. I heard the graduate students talking about something called quantum mechanics, and they were always referring to probability.
Well at that point I had taken all the math required of a Chemistry major, which then just extended through calculus. In those days, for example, most afternoons in my sophomore year were devoted to labs. The quantitative analysis course had six labs a week, Saturday afternoon included. People couldn’t imagine it today, but it was a wonderful social experience. There were a small number of chemistry majors, who became great buddies. I’ve emphasized to my students since, “The lab is a social opportunity. You’re not glued to your chair. You walk around and talk to people.” That summer was transforming because I decided, “Well, gee, maybe I should study some probability theory.” So I looked in the course catalog for next fall, my junior year, and there was somebody named Polya teaching a course on probability theory. Of course, I had no idea he was one of the great authorities in the world on probability theory and also a famous teacher. Well I totally fell in love with Polya. I took everything Polya taught after that, and when I discovered that he had a sidekick, Gabor Szego, also Hungarian, of course I took everything Szego taught. Pretty soon I was taking every math course I could fit in. Polya has a book still in print from Dover called How to Solve It. He was really an eminent mathematician. I remember years later writing a letter to him congratulating him on his election to the National Academy, and he wrote back and thanked me and said, “Well, of course, I was elected into the French Academy in 1925,” which is a much more distinguished thing. One thing I got from Polya is the word heuristic. I’d never heard it before. Somebody pointed out to me that heuristic appears in many of my own papers; I do find non-rigorous but insightful theory very appealing in chemistry and physics as well as in mathematics.
Einstein’s 1905 particle light paper, the March paper, the heuristic paper.
Yes, right. Polya impressed me so much. He always started with some concrete example, usually a rather colorful, interesting one, and then he drew the generalization from it. He would say so many things I’ve quoted over and over to my students. For example, one thing he said, “When you solve the problem in mathematics (and it applies more generally, too) look around. You’ll find you’ve solved others. Because doing this is like looking for mushrooms: if you find a mushroom in the woods, you can be sure there are other mushrooms right around, because it always takes special conditions to grow mushrooms and it’s never a point sort of scenario.” He was very interested in the strategy for approaching problems. He had a two-volume series called Mathematics and Plausible Reasoning, published by Princeton, and much of that is devoted to Euler, an incredibly prolific mathematician. But, unlike most, Euler usually gave the qualitative reasoning that led him to postulate a theorem before he went on to prove it. Polya just would dissect the reasoning and bring out the heuristic aspects. He also taught a course beyond probability theory. I think he had a couple semesters of probability theory and then one called Higher Mathematics from a Lower Point of View, and the next term Lower Mathematics from a Higher Point of View. He had a wonderful sense of humor. Whenever I got back to Stanford (until he died in 1985) I would go see Polya. He was such a joy. He had a huge impact. So inadvertently, by my senior year, I’d fulfilled all the requirements for mathematics, and I’d taken a lot of physics, too.
Was physics a second major and chemistry a third?
Yes, I’d fulfilled the major in chemistry, too. For some reason, Stanford would only allow you to get one major. I decided I’d get my undergraduate degree in math because the teaching was so excellent. It was inspiring, whereas it wasn’t quite so uniformly so, elsewhere, especially in chemistry. Physics was also good, especially courses in Mathematical Physics. I had a course in mechanics from Daniel Webster, which may be a name you don’t know. I can remember now him telling us how he’d built his x-ray apparatus by going to the dump in San Francisco and picking up old coils of wire and things. It was wonderful. I loved the physics department. I should mention also a course in kinetic theory of gases and statistical physics that I took from Walter Meyerhof, because that’s when I first heard about Otto Stern and molecular beams. In a brief digression, Meyerhof mentioned, in perhaps three minutes, Stern’s first experiment confirming the molecular velocity distribution. Only last February I was in Frankfurt to take part in dedicating a new experimental physics center named to honor Stern and Gerlach for their famous experiment done there in 1921-22. I learned then that Walter Meyerhof was, like Stern, among the many who emigrated from Germany in 1933 when Hitler came to power. Meyerhof was from a very distinguished old-line Frankfurt family. In fact, his father was a Nobel Prize winner in medicine. If Meyerhof had not been forced to emigrate and come to Stanford, I might not have heard about Otto Stern, at least probably not at such an opportune moment.
Okay, back to Stern-Gerlach in Frankfurt.
Yes, I can talk about that later if you’re interested. It was an interesting experience to go there.
Let me go back to Stanford for a minute.
Sure, what do you want to know?
Your introduction to chemical kinetics was really through Johnston.
You can probably date your career from that early experience with that professor?
Oh, absolutely. Yes, yes. It seemed to me a fundamental thing to try to understand how reactions occur at the molecular level. Immediately that interested me. In fact, that’s why when I came to Harvard, I wanted to work with Bright Wilson, because I felt I needed to have more of an understanding of the mechanics of molecules before I could really try to understand chemistry at the level of what molecules are really doing, making and breaking bonds, instead of in this sort of gross macroscopic way that chemists were limited to before. As soon as I heard about Otto Stern, I thought, “That’s the way to study chemical kinetics.” Using molecular beams, you can really find out whether or not a reaction occurs as an elementary step. Otherwise, it’s very hard to tell what is really happening when many reaction steps are occurring at once. That was the problem chemists faced in trying to unravel elementary steps in reactions. Resolving elementary reaction steps is much like establishing the elemental composition of substances. Just as we want to know what elements are in a material, we want to find out what molecular processes are happening in chemical transformations. Ordinarily, you don’t have any way to separate out the processes when they occur in bulk. With beams, you could say, “Gee whiz, I can intersect two beams and see whether products emerge.” I remember telling Harold Johnston about this. He laughed and said, “But there’s not enough intensity.” I did a few simple calculations, because you just use gas kinetic theory, the same thing I was learning in Meyerhof’s course, and you could see, “Well, it should be possible if you had an unusually large reaction yield.” Then I learned about Michael Polanyi’s early work with alkali atoms reacting with halogens. Those did have a large yield, and alkali atoms and alkali halides can be detected by means Stern had used.
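The feasibility argument described here — a gas-kinetic estimate of the product rate from two intersecting beams — can be put in rough numbers. Everything in this sketch (densities, cross section, speed, volume) is an illustrative assumption, not a figure from the original calculation:

```python
# Back-of-envelope crossed-beam collision rate, in the spirit of the
# gas-kinetic estimate described above. All values are assumed for
# illustration only.

n1 = 1e10        # beam 1 number density, molecules/cm^3 (assumed)
n2 = 1e10        # beam 2 number density, molecules/cm^3 (assumed)
sigma = 1e-14    # reaction cross section, cm^2 (~100 A^2, alkali-halogen-like)
v_rel = 5e4      # mean relative speed, cm/s (assumed)
vol = 0.1        # beam intersection volume, cm^3 (assumed)

# Simple kinetic-theory product rate: n1 * n2 * sigma * v_rel * volume
rate = n1 * n2 * sigma * v_rel * vol
print(f"estimated product rate: {rate:.1e} molecules/s")  # 5.0e+09
```

With a large cross section, the rate comes out comfortably detectable, which is the point of the "unusually large reaction yield" remark.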
I want to come back to this later.
Yes, we’ll talk more about that later. But anyway, I became convinced that it was possible, so even when I came as a graduate student at Harvard, I already had in mind that I wanted to do molecular beams and study reactions, but I wanted to start first with Bright Wilson to learn about molecular mechanics.
Still at Stanford. The way you talk and the way your career has developed, you really think like a physicist, I would say, more than a chemist.
Sometimes I do, I’m sure.
No, you focus on a simple system, you want to do calculations, and you want to be guided in a way by calculations that give you —
Yes, I like simplicity. I like to understand things.
Well, but that’s physics. Did you ever consider physics?
No I didn’t, I think because of my roots. I always liked physics. I always have. I think it’s such a beautiful subject. But chemistry appealed to me because it’s sort of rough, and wild and woolly, and so broad. I’ve often said to young people, “Well, if you don’t know what to do, but like science and math, chemistry is good. You can do anything. You can just solve equations or run computers or you can raise mice and you’ll be a respectable chemist somewhere.” “But physicists,” as Harold Johnston once remarked to me, “tend to run in packs.” They all say, “There’s this problem right now that matters or this problem or that.” But chemists are more odds-and-ends every which way. Actually, I think part of the reason that chemistry interests me is philosophical. I remember Meischke’s course back in high school was the first thing I’d encountered where I didn’t see right away what was the hang of the subject; what it was all about. It took several weeks before I began getting a feeling for chemistry, whereas everything else seemed pretty obvious from the beginning what the basic notion was. It took me many years to appreciate the special epistemology of chemistry, and it’s quite different from physics. I say it this way: Chemistry is like an impressionistic painting. If you stand too close to such a painting, it appears to be just meaningless dabs of paint. If you stand too far away, it’s an equally meaningless blur. But at the right distance you see wonderful things come into focus. A physicist tends to stand too close to chemistry, looking to reduce things to first principles. The old-time biologist wanted to stand too far away, to avoid getting swamped in too much molecular detail. Of course, many modern biologists have become chemists, for all practical purposes. But the chemist’s intermediate domain, where you see the impressionistic beauty emerge, is fascinating. That appeals to me.
In physics, you really want to get things down to the absolute foundation and all, but in chemistry you can’t do that for the most part so you have to operate in this way where you blend intuition and rigor. The chemical physicist is trying to put as much rigor in as he can, but for problems that are really interesting to true chemists they still work more or less the way they did in the 19th century and even earlier. You have to bridge that cultural gulf. It’s the blend of intuition and rigor that I think appeals to me in mathematics, too. Of course, there are parts of physics that are like that. A lot of solid-state physics is that way. A lot of cosmology too.
When did you know you wanted to get a Ph.D.? You got a Master’s. You didn’t need to get a Master’s. You could’ve gone directly to a Ph.D., could you not?
Well, in those days it was more fashionable to get a Master’s. I got two Master’s, actually, because in my senior year I applied to Harvard and other places. I already knew that I wanted to go work with Bright Wilson. I had read a lot of Wilson’s papers and I was so impressed with them. There’s another thing I should mention that was important in my intellectual development. That summer with Harold Johnston after my sophomore year was followed by another summer with him, and then I went to Los Alamos after that in a summer internship. But that summer, that first summer, really was the first research I had a chance to do, and that meant I read a lot of research papers. At first I was so impressed with everything I read, especially in the Journal of Chemical Physics. I could not imagine myself writing papers like that. But the problem I was working on with Harold Johnston was such that I had to read a lot of different papers. After a while I began noticing suddenly, as if scales fell from my eyes, “Hey, these guys don’t seem quite to appreciate what these other guys have done and so they don’t really have quite the right perspective.” Then I realized that the authors were more or less ordinary people doing these things. The more papers I read the more I noticed that some were not as good as they should be, and others were better than I would’ve imagined. Hal Johnston was very, very good as a model because he had been an English major through junior year in college. And then, as he told me, he decided to go into chemistry, partly for patriotic reasons at that time. In any case, when I got to read papers by Bright Wilson and George Kistiakowsky and other outstanding scientists, I began to understand why they were so highly regarded. So I felt eager to work with such a guy in grad school. I don’t know if I would have even thought of going to graduate school if it hadn’t been for that transforming first summer, in which I was mixed in with Hal’s graduate students.
After that, it seemed the natural thing to do. My senior year I took the graduate record exam. I remember noticing that all the problems in physical chemistry were multiple choice. It was obvious there was only one choice with the right units, so you didn’t have to know anything, just figure out the units. But there were other problems where you needed to know colors of solutions and such. Maybe those were why I didn’t do well on that exam, and I didn’t get an NSF Fellowship. People were shocked when I didn’t. I took the exam the next year and somehow did well and got an NSF. Anyhow, because I didn’t get an NSF, I stayed at Stanford for one more year and got a Master’s in chemistry, working with Hal Johnston. Then I went to Harvard the following year. I wanted to get a Ph.D. in chemical physics, but that program was administered by a committee, not a department, so did not offer Master’s degrees. As I already had a Master’s in chemistry, and was going to be taking mostly physics courses at Harvard, I decided to get a Master’s in physics. Grad students then tended to get a Master’s degree, saying that if drafted to serve in the Korean War, it might enhance an early obituary. In fact, that first year at Harvard, 1955-56, I took four courses each term, and I also started research in Bright Wilson’s group. The courses were terrific. The course in electromagnetism, taught by Roy Glauber, was a super performance. He never brought any lecture notes to class, but worked out all the mathematics on the blackboard. Another fine course was Solid State Physics, taught by Nico Bloembergen. We students relished his wonderfully insightful explanations. I was delighted to have a course from Norman Ramsey on molecular beams, a seminar course using the proof sheets for his newly published book, which became a classic. I also audited a course on group theory by John Van Vleck, and one on quantum measurement theory by Julian Schwinger.
I already had a graduate-level quantum mechanics course at Stanford, taken by only about 10 students, so I was astonished that more than a hundred people — most of them auditors — attended Schwinger’s more advanced lectures. He would arrive in class with a large stack of notes, plunk them on the desk but never even glance at them, as he unreeled a dazzling lecture. Once asked to describe his style, my response was: “Schwinger was an awesome virtuoso, playing original cadenzas of breathtaking beauty.” I also audited an undergraduate course, I think it was called “Waves and Particles,” given by Ed Purcell. It too had many auditors, there to enjoy Purcell’s lucid lectures, often enhanced by strikingly simple demonstrations. On the other side of Oxford Street, I took another unique course from Bright Wilson, based on his just-published book, Molecular Vibrations. But the most unique of all was a course given by Peter Debye, who was at Harvard that year as a visitor. I didn’t know about that in advance. His course was titled Introduction to Chemical Physics, but it could have been “My Life Work.” I was one of only three students taking the course for credit, but there were at least 40 auditors. Debye was fabulous, just fabulous, rightfully legendary as a wonderful lecturer. He didn’t assign any homework, although his lectures led at least his three enrolled students to look further into many things. At the end of each semester, we had one-on-one oral exams from Debye. To this day, I much regret that I failed to make any notes at the time and cannot remember what Debye asked me in the exams. One striking aspect of his approach I must mention. He always presented things — it was all theory in the whole course — in what might seem a backwards order. He wouldn’t plunge in and derive something. He would tell you the result, why it was important, what role it had in history, and what came from it.
After that you felt so familiar and comfortable with it that the derivation seemed easy and inevitable. But I can also remember coming out of his lecture, which seemed so lucid, and trying to explain something that had excited me in the lecture to someone else. It was not easy to do at all. So after that I really paid attention to just how he did it. I’ve tried to emulate him in many ways since. Like Polya, he had a big influence on me, and so did Bright Wilson. I have to talk a lot about Bright Wilson.
Yes, we’ll come back to Wilson. In your Master’s thesis, you did some things with internal rotation at Stanford.
Now, was that the point at which you came to recognize Wilson?
Yes. That and his work on molecular dynamics generally led me to read Wilson’s papers. Of course, I was very happy to be accepted to join his group and discover internal rotation was a major focus of their current work. I thought of it as “almost a kind of chemical reaction, switching a methyl group from one potential well to another.” It was particularly interesting to get involved with that.
Do you remember Bob Beaudet?
Bob Beaudet said to me one time that they tried to keep you out of the lab because you were too valuable at the desk doing calculations and stuff.
Oh, nobody kept me out. I can’t imagine how they could have tried to keep me out because everybody worked in the lab on their own —
Well, there were words to that effect. I don’t remember exactly, but how did you divide your time between doing experiments on the microwave spectrometer and all the calculations you did under Wilson?
Oh, it was very simple. Wilson had a sizeable group, and they were quite excited to be able to determine the barriers to torsion of a methyl group. It was a classic problem in physical chemistry, and most of the early information had come from thermodynamic data, which couldn’t be too reliable. And here Wilson’s group had this method which depended on tunneling from one potential minimum to the other. Since the tunneling splittings were very sensitive to the barrier height, it was a wonderfully sensitive way to do it. So there was great interest in the group, but that meant you could only sign up for the spectrometer for four hours at a time. However, I discovered right away that nobody wanted to work on Sunday. So I just had all day Sunday to myself. I’d work on the spectrometer only Sunday. I’d come in at about 8:00 AM, work all day Sunday, except for a break for lunch. I’d go down Harvard Square to the Wursthaus (which no longer exists) and treat myself to a hot pastrami. Then I’d come back, and do spectra for a few more hours. When I’d go to bed on Sunday nights and close my eyes I’d see the oscilloscope traces going across my eyelids because I’d been watching them all day long. Because I had this long stretch of time, I got much more done than you can, say, in two or three of the four-hour segments. So I did all my microwave spectroscopy on Sunday and the rest of the time I was taking classes, that first year, particularly, or trying to work out various things, like mathematical calculations. Analysis of the spectra required solutions of the Mathieu equation. When I joined the Wilson group, there was a nice problem waiting. A few barriers had been determined using the splittings due to the tunneling but these were cases of unusually low barriers. It appeared that most of the barriers of interest were so high as to make the splittings too small to be observable. 
Bright suggested, “Maybe you could go to very high J transitions, high rotational levels, and the splittings would be more important.” I made a little calculation that convinced me that approach didn’t look too promising. Then I realized that the torsional frequencies of methyl groups were not terribly large. They were typically comparable to kT, the average thermal energy. So the population of the first excited torsional state, where the barrier would look lower, was roughly one-third that of the ground state. Everybody had been just thinking of the ground state, where most of the population resides. But of course you should have satellite transitions that come from rotational motions of molecules in the first excited torsional state of vibration, with intensities about one-third of the corresponding rotational transitions of molecules in the ground torsional state. So you should be able to find and identify those satellite lines. They’d have the same Stark effect, which was a key thing for identifying lines in spectra, and they should be shifted a bit because in the excited torsional state the moments of inertia on average are a little different than the ground state. They’d be shifted in a way that you could estimate, roughly, from the ground state line. So if you knew the ground state line, which wouldn’t be split because the barrier was too high, you could look for the excited satellite which could show a splitting because the barrier looks lower for molecules in the excited torsional state. But there weren’t available, then, the matrix elements needed to calculate the splittings for the excited torsional state. So I had to do a lot of calculations, and that’s probably what Beaudet was referring to.
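The one-third population figure is a simple Boltzmann factor. A minimal sketch, assuming a typical torsional wavenumber of 200 cm⁻¹ at room temperature (an illustrative value, not a specific molecule):

```python
import math

# Relative population of the first excited torsional state, showing why
# the satellite lines carry roughly one-third the intensity of the
# ground-state lines. The 200 cm^-1 frequency is an assumed typical value.

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e10       # speed of light, cm/s
kB = 1.380649e-23       # Boltzmann constant, J/K

nu = 200.0              # torsional wavenumber, cm^-1 (assumed)
T = 300.0               # temperature, K

ratio = math.exp(-h * c * nu / (kB * T))  # Boltzmann factor n1/n0
print(f"excited/ground population ratio: {ratio:.2f}")  # ~0.38, i.e. ~1/3
```

Since kT at room temperature is about 208 cm⁻¹, any torsional mode near that wavenumber gives satellites at roughly a third of the main-line intensity.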
All right, let’s come back. We’re done here. Okay, I believe you discovered that the Mathieu functions were what would enable you to do the calculations on this splitting and the tunneling and so forth, the internal rotation problem.
Well, it was recognized that the Mathieu functions were what you wanted for the models that we were using to analyze these spectra, but there were two key things that were roadblocks. One was that the quantities that you needed, that you could derive from Mathieu functions for this excited-state business, were not available. I went to Los Alamos after my first year here, much to Bright Wilson’s surprise — it wasn’t usual — and I did some calculations there with the help of a friend who was a computer whiz. We called it Project Top. I always wondered why we got the computer results back immediately, and I think it’s because somebody thought it was a top-secret or top-priority project. At any rate, I came back with these tables that had all the stuff you could ever want. Thirty years later Bright told me he was still getting requests for those tables.
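The torsional problem described here is Mathieu's equation in disguise; where the tables once supplied the needed quantities, the eigenvalues can nowadays be obtained by direct diagonalization in a free-rotor basis. A minimal sketch, with illustrative (not fitted) values for the reduced rotational constant F and barrier height V3:

```python
import numpy as np

# Hindered internal rotor, H = F p^2 + (V3/2)(1 - cos 3t), diagonalized
# in the free-rotor basis exp(i m t). F and V3 are assumed values.

F = 5.0      # reduced rotational constant of the top, cm^-1 (assumed)
V3 = 400.0   # threefold barrier height, cm^-1 (assumed)
M = 30       # basis truncation: m = -M ... M

ms = np.arange(-M, M + 1)
H = np.diag(F * ms**2 + V3 / 2.0)
for i in range(len(ms) - 3):         # -(V3/4) couples m with m +/- 3
    H[i, i + 3] = H[i + 3, i] = -V3 / 4.0

E = np.linalg.eigvalsh(H)            # sorted eigenvalues, cm^-1

# Each torsional level splits into a nondegenerate A sublevel and a
# doubly degenerate E pair; the spread of each triple is the splitting.
split_v0 = np.ptp(E[0:3])            # ground torsional state
split_v1 = np.ptp(E[3:6])            # first excited torsional state
print(f"v=0 splitting: {split_v0:.6f} cm^-1")
print(f"v=1 splitting: {split_v1:.6f} cm^-1")
```

The excited-state splitting comes out much larger than the ground-state one, which is exactly the effect exploited in the satellite-line method.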
They were published in a hardbound volume.
Yes, a nice little book. In any case, they were very useful and that’s probably one of the things Beaudet had in mind. But in the course of doing that, I discovered that for some molecules, and in fact a fair number, you needed to carry this calculation out to rather high order perturbation theory. Well, you know you need lots of matrix elements, and energy denominators; it was a huge nuisance to do that. I was stuck in Chicago on the way home at Christmas in a great blizzard and I was thinking about this. In particular, I thought of something that was a habit because of my experience with Polya, to always look at limiting cases. At Michigan, Dennison had done some fundamental work on the problem of ethane — which has two coaxial methyl groups — and there the problem was such that if you transform, as he did, to a special coordinate system, you can minimize the coupling between the two groups. That gave him a kind of Mathieu function different from the ones that were appropriate for the Wilson treatment. There you had an asymmetric blob of a molecule and a methyl group sprouting out of it, so one used a fixed-axis system. In contrast, the one that Dennison used was an axis system that actually was intermediate between the two ends; it moves in a way that minimizes the angular coupling. I thought, “Let’s see what happens if I try to connect these two.” So I took the limit of the Wilson treatment, and that gave you a Taylor series expansion because in the ethane-like limit various complicated terms you have in an asymmetric top drop out. The Dennison type treatment gave you a Fourier representation because the energy became a periodic function. So I realized I could get a connection between the Taylor series expansion and the Fourier series.
It turned out — I call this the bootstrap method — that I could use the wrong Mathieu functions, boundary conditions completely wrong, but still get information to allow me, in effect, to build up the terms I’d need in higher-order perturbation theory. So I could do tenth-order perturbation theory by adding and subtracting a few eigenvalues. It was really a huge shortcut. So that was the bootstrap method, which was very, very pretty and made a big difference for the esoteric little field of internal rotation. But I enjoyed it a lot. It’s just that if you have the right tool — I’ll tell you another story a little like that later that again came from the habits inculcated by having a good education in mathematics.
I think I know where you’re coming to. Let’s switch to Wilson.
Wilson was an unusual man.
Just talk a bit about what you took away from your association, and then one other question. You knew him as a student; you were his student. And then you knew him as a colleague. How did that compare? Was Wilson different in one case than in the other?
In the most important respects, he wasn’t the least bit different. When I first met him, I remember jotting down a note in my little coop book. As you know, I wanted to do my Ph.D. with him. When applying to Harvard, I’d written to him asking that and he responded, saying, “You don’t have to decide before you come.” On meeting him, Wilson impressed me as deeply interested in science, not just in publishing papers. That’s the note I jotted down, because I’d already encountered enough people that were more concerned about their social standing and in publishing papers. But Wilson wasn’t like that at all. Wilson had luminous integrity. I think everybody fortunate enough to be associated with him would tell you this right away. I remember Frank Westheimer at the memorial service for Wilson said, among other fine things, that at department meetings or anywhere else when you were with Wilson, everybody behaved a little better. You just could not be as petty as you might otherwise have been in the presence of Bright Wilson. He was that kind of guy. It was his character that made him so special. And he was a brilliant guy. That made an impression too. But it was his character as a person that I think counted most. That applies to Harold Johnston, too. A year or two ago, a guy I like a lot said to me, “Dudley, I don’t envy you the science you’ve done or the awards you’ve received, but I do envy you the students you’ve had.” I said, “Well, that’s very perceptive. That’s right, I’ve had fabulous students and I’m very fortunate. But, you know, you should envy me for my mentors too. To have had Harold Johnston and Bright Wilson, who were so admirable as people, and had the highest scientific standards, was a tremendous blessing.”
Were you at all familiar with anyone in Wilson’s group who didn’t get along with him? I don’t know anyone.
I can’t imagine that. No, I think he was just tremendously respected by all of his students and all of his colleagues. Frank Westheimer was also such a person. Everybody just felt they had to try to emulate mentors of such outstanding character.
Was Wilson instrumental in getting you back from Berkeley to Harvard?
Well, yes, in the sense that I’d had such a wonderful experience as a graduate student here. Of course, a lot of it was due to being in Wilson’s group. Although I loved Berkeley and everything was just fine there, I just couldn’t help but feel my own students would have a richer experience here because of Wilson and other people — Kistiakowsky, Purcell, Bill Klemperer and others that I admired so much who were here — the whole aura that they fostered. It was very tangible.
And that was very obvious in Wilson’s group.
Do you want to say anything more about the Harvard-microwave-Wilson days?
I think we’ve covered it pretty well, but I should mention another couple of aspects. Among the fine people I met in Wilson’s group was Jerry Swalen. I wanted to find a molecule suitable for determining the internal rotation barrier from tunneling splittings in both the ground and excited torsional states, to test whether the results would agree. Jerry was already working on a molecule, propylene oxide, which looked appropriate for that so we teamed up. Working out the bootstrap business made me realize that molecule should have a particularly interesting kind of structure in its spectrum. However, the predicted structure was too small to resolve with Wilson’s spectrometer. By then Jerry was a post-doc in Ottawa at the National Research Council, a famous laboratory headed by Gerhard Herzberg. There Cec Costain had built a microwave spectrometer with much higher resolution. So I went up to Ottawa and worked with Jerry during two visits of three months each. It was just marvelous. I greatly admired Herzberg and others I met there, including especially Boris Stoicheff and Alec Douglas. Jerry and I had a great time. The structure we found was so pretty. It had a threefold splitting that exhibited, over a long series of transitions, the changeover from the Wilson framework to the Dennison one, so you could see the Fourier series emerge rather directly. We found the threefold splitting had one component twice as intense as the other two, and they moved back and forth like a braid as you went up the long series of lines. It’s among the favorite things of my whole career; this braided spectrum in propylene oxide. Also, the barrier heights found from data for the ground and excited torsional states agreed quite nicely. Another favorite episode I should mention involves propylene (not the oxide)... My office mate then was Larry Krisher, a guy I liked immensely. He was such a wonderful character, fond of smoking cigars and a terrific trombone player.
One day, opening up the latest issue of the Journal of Chemical Physics, I saw an article by David Lide reporting the microwave spectrum of propylene. That was the first measurement of it. I said to Larry, “Gee whiz, if we could put a deuterium on the methyl group, we could find out whether a methyl group is oriented to eclipse or stagger the double bond in propylene.” Another of Wilson’s students, Rob Kilb, had done the analogous thing for acetaldehyde. So we asked Tom Katz, then a grad student with R.B. Woodward (and later professor at Columbia), how we could synthesize the deuterated molecule. Tom kindly looked into the literature and reported “first spend two weeks purifying dioxane solvent, etc.” We weren’t about to do anything so complicated, so that night we went up to a little lab outside Bright’s office, which had a vacuum line. We just dripped D2O onto allyl bromide with a little zinc dust as catalyst and distilled the product over on the vacuum line. It took us half an hour.
You had your propylene with a deuterium atom on the methyl group.
Yes! Of course, we didn’t need a pure sample because the resolution needed was easily provided by our microwave spectrometer. We had calculated where lines for the deuterium-labeled propylene should come. Lide’s data made that easy to do, so we knew where to find the lines that would reveal whether the methyl group was oriented one way or the other. It took only about another half hour or so to measure several lines that provided a definitive answer; a C-H bond of the methyl group eclipses the double bond. The next morning we told Tom Katz and of course he was amazed that we could find out so quickly. Thirty years later, I was astonished when I heard Yoshito Kishi, my distinguished colleague in organic chemistry, cite what we had learned about the methyl group orientation in propylene as a key to achieving his synthesis of palytoxin. It’s a huge molecule, with more than 400 atoms, and a long, snake-like structure with many twists and turns. At 72 places there is a twofold choice in constructing the structure. Thus there are an enormous number of structural isomers, namely two raised to the 72nd power, which is about 5 × 10²¹ (5 times a billion trillion). Only one of the isomers is the natural, biologically active product. Kishi was able to synthesize exactly that one, by getting the spatial configuration of the atoms correct at each of the 72 branch points. How did he do it? Well, the first thing he mentioned was knowing that one hydrogen of the methyl group in propylene eclipses the double bond. Therefore the other two are sticking above and below the plane that contains all the other atoms in propylene. So, if in his synthesis Kishi temporarily inserts a double bond with a large group on one side, he can then add a carbon atom to form a new bond and know that it has to attach on the other side. He made use of this strategy over and over. It’s simple in principle but requires a lot of creative architectural work.
It’s tricky to design a multistep synthesis of a big molecule so that a series of reagents attack just where you want them to and nowhere else. You have to put on and take off a lot of scaffolding. Kishi’s synthesis of palytoxin has been hailed as “a masterpiece, seminal for the field of stereospecific synthesis.”
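The isomer count quoted above (72 independent twofold choices) is quick to check:

```python
# 72 independent twofold choices give 2**72 possible isomers.
n = 2 ** 72
print(n)            # 4722366482869645213696
print(f"{n:.1e}")   # 4.7e+21, i.e. roughly 5 x 10^21
```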
To me it’s a beautiful story showing how what chemical physicists learn about small molecules can help real chemists do incredible things with big molecules. Bright Wilson liked such stories. He sometimes remarked, a bit wistfully, that chemical physicists tend to be regarded somewhat scornfully by real chemists who work with big molecules, and also by physicists who pursue things much smaller than molecules. But it all fits into the grand tapestry of science.
That’s a good story. Let’s move to your Society of Fellows, just for a bit. How were you selected? Who interviewed you? Who were some of your contemporaries?
I’ll tell you how that happened. A nice tradition in Wilson’s group was that when you thought it was time for you to finish and get your Ph.D., that year you wore a tie. Nobody would wear a tie until then. During that year you would also interview for jobs. I interviewed with a fellow from Bell Labs. He was Bill Slichter, Charlie Slichter’s brother. That resulted in a visit to Bell Labs. I was very impressed and pleased as well as startled that they offered me a job. It sounded terrific, so I came back and told Bright, and Bright said, “Well, Bell Labs is a fine place, but you know I’ve just nominated you for the Society of Fellows.” Of course, I didn’t know he’d done that. I’m not sure I had heard of the Society of Fellows before. He told me a little about it, and he’d been a junior fellow, and so had Woodward and Bob Pound and lots of very outstanding people. Well, soon after I went up to Canada for a few weeks and then came back on the train to be interviewed at the Society of Fellows. I had worked very hard and so I was very groggy going to my interview with the Senior Fellows, who chose the Junior Fellows. Purcell was a senior fellow. Another was Fred Hisaw, a biology professor I later enjoyed a lot for his storytelling. Another was Arthur Darby Nock, who was a divinity or religion professor, a legendary character at Harvard. I remember Nock, who had a strong accent, asking me something toward the end of my interview that I couldn’t make out. I had to say, “Excuse me?” and someone translated. He was asking, “Any hobbies?” After the interview I went to dinner, which was the usual thing. The Society had dinners on Monday nights in a room in Eliot House with the Junior Fellows, Senior Fellows, and any distinguished guests that might be there. Well, everybody to my left was speaking German and everybody to the right was speaking French. At the end of the dinner they rolled around this little cart with Madeira and Port, and out came cigars.
I thought, “Oh, this is really interesting to witness,” but I didn’t have any inkling that I would be invited to join. I thought this was another world in which I’d be out of place. But when invited, I decided to join and it was a great experience. I was in the Society of Fellows for two years, 1957-59. It was customary for people who were at Harvard to be elected before they finished their Ph.D. The Society was originally formed by A. Lawrence Lowell because he thought there should be another path to an academic career without necessarily having a Ph.D., but it turned out in practice people usually did. It was a year before I turned in my thesis, and then I stayed one year after, before going to Berkeley. It was a wonderful experience because there were eight junior fellows in each class, so there was a total of 24 at any one time. We were from all sorts of fields. Besides the Monday night dinners, twice a week the Junior Fellows met for lunch. We had far ranging discussions at the lunches. At the dinners, I often sat near Purcell; I especially remember that he often spoke with such pleasure of basic physics concepts and remarkable work by other scientists. He hugely admired Fermi. Many times I left the dinners so excited I had trouble getting to sleep. My two years as a Junior Fellow were a wonderful experience.
Isn’t it normally a three-year appointment?
It’s normally a three-year appointment.
Why did you take two?
Actually, even before I turned in my thesis, I got a letter from Linus Pauling out of the blue inviting me to come and give two talks at Caltech: one on what I’d done in my research up to then and one on what I wanted to do. He didn’t say anything about a job possibility, so I asked Bright. He said, “Yes, yes, they’re looking for a theoretical chemist and I recommended you.” Well, I hadn’t even turned in my thesis. I went there in February of 1958 and met Pauling and gave the talks. Then I went up to Berkeley to visit Harold Johnston, who’d since moved to Berkeley. Since I was coming, he said, “Why don’t you give a seminar there?” so I did. The next day the chairman came around and offered me a job. Caltech had also offered me a job, and then Stanford phoned up. Everybody was offering me jobs. It was really embarrassing. I eventually decided to accept the Berkeley offer, but I said I’d like to stay another year in the Society of Fellows. I didn’t feel I could ask Berkeley to wait another two years. Also, I wanted to get started with experiments. There wasn’t such a good way to do that as a Junior Fellow, at least not then, so that’s why I stayed only one year.
Wait a minute. You said you wanted to do experiments as a Junior Fellow?
I was still working in Bright’s lab then, but I wanted to start the molecular beam experiments.
But you wanted to do your calculations to see if this was all — Didn’t you do a lot of calculations?
Yes, I did a lot of calculations. I’ve always done calculations of one kind or another.
From my reading of your stuff, it was during this period as a Junior Fellow that you really convinced yourself that you could do this crossbeam work.
That’s right. I did lots of calculations and designed apparatus, but then I was ready to go. I didn’t think I needed more time to do that. In fact, in the talk I gave at Caltech I said I wanted to do molecular beams. I remember mentioning Otto Stern, and I referred to him as a physicist. Pauling gleefully burst out, “Otto Stern was a chemist.” Of course, Pauling was right. Stern’s Ph.D. was on osmotic pressure of solutions of carbon dioxide in various solvents. As I like to say, generalized soda pop! So that’s why I left early. I should mention something that is one of my favorites among things I’ve ever done in science. I mentioned visiting Harold Johnston after the Caltech visit. Hal at that time wanted to extend something I’d done with him in my Master’s thesis. His idea was to make use of a method Ken Pitzer had developed for estimating thermodynamic properties of hydrocarbons, especially chain hydrocarbons. Pitzer’s method allowed you to approximate the contributions to entropy and free energy as a sum of those from the constituent atoms or groups. Hal thought that could simplify application of transition state theory to chemical kinetics, because often terms from atoms well removed from the reaction site would cancel out between the reactants and the transition state. It’s easy to do for chain structures, but for branched or more complicated structures, it’s not so obvious. On the plane back I figured out how to generalize Pitzer’s method, because I was familiar with Bright Wilson’s techniques for treating molecular vibrations. Bright had something he called s vectors that you use to set up the kinetic energy in the vibrational problem. I compared the conventional way, using partition functions for the overall translation, vibration, and rotation of a molecule, with the Pitzer form expressed in terms of individual atoms. That showed that the key thing to evaluate was a Jacobian factor, a volume element.
Happily, I found that could be obtained as a triple product of Wilson’s s vectors for each atom, whereas Wilson’s use of them involved just the scalar product. It’s very pretty differential geometry. That gave a general way to write the partition function as a product of factors, one for each atom. That’s actually the result for classical statistical mechanics. It has to be multiplied by a quantum correction, but that’s easy because the quantum correction usually comes in only for vibrations. For that it’s ordinarily sufficient to just use the ratio of Einstein’s quantum harmonic oscillator partition function to the classical limit, which is just one over the frequency. Thus you get a neat formulation. In calculating an equilibrium constant for a chemical reaction, you have the same atoms on both the reactant and product sides, so a lot cancels out. It reduces the problem down to a ratio of effective volumes, one for each atom. Often just the atoms that are transferred in the reaction contribute significantly. The conventional way of calculating molecular partition functions is so engrained in the literature that few people know of this “local properties” method. Of course, my students know about it because I’ve long preached about its virtues. Hal Johnston made much use of it in his fine book on gas phase reaction theory; it deserves to appear in textbooks because it makes it easy to visualize the partition function in terms of effective volumes for each atom that are defined by the vibrational motions of its neighbors. The conventional form, which involves the mass and moments of inertia of the reactant and product molecules, hides the atom-by-atom cancellations that are made apparent in the local properties form. So both for calculations and heuristic insight, the local properties method is very satisfying.
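The structure of this “local properties” formulation can be sketched schematically (the notation here is mine, not from the interview): the classical partition function factors into one term per atom, with an effective volume obtained from the triple product of Wilson’s s vectors, and the quantum correction enters as a ratio of Einstein-oscillator to classical vibrational partition functions:

```latex
Q_{\mathrm{cl}} \approx \prod_{i\,\in\,\text{atoms}}
  \left(\frac{2\pi m_i k T}{h^2}\right)^{3/2} V_i ,
\qquad
Q \approx Q_{\mathrm{cl}} \prod_{j\,\in\,\text{modes}} \Gamma_j ,
\qquad
\Gamma_j = \frac{q_j^{\mathrm{qm}}}{q_j^{\mathrm{cl}}}
         = \frac{u_j\, e^{-u_j/2}}{1 - e^{-u_j}},
\quad u_j = \frac{h\nu_j}{kT}.
```

In an equilibrium constant the same atoms appear on both sides, so the thermal-wavelength factors cancel and only ratios of the effective volumes survive, which is the atom-by-atom cancellation described above.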
That’s nice. I tell you what I’d like to do. We will turn this tape over. Okay, we’re starting another tape. I want to move now into your research. You have said that your interest in chemical kinetics dates from your Stanford undergraduate days with Johnston, that you learned of molecular beams there at Stanford and learned about Otto Stern, and that your microwave days didn’t enhance that much, except later it did, for sure.
Yes, I think it enhanced — Well, it did just what I’d hoped because it gave me a good grounding in the mechanics of molecules: vibrations, rotations, and all.
Okay, and you recognized at Stanford that you needed that?
And you put your excitement for chemical kinetics and the beam work on hold to come east and work with Bright Wilson?
Yes. Actually, when I got to Harvard and had Ramsey’s course, I discovered a paper published in 1955 by Ellison Taylor and Sheldon Datz, using crossed beams of potassium and hydrogen bromide. They used a very simple surface ionization detector with two filaments: one of tungsten (also used by Otto Stern) that detected both K atoms non-reactively scattered and the KBr reaction product, the other a platinum alloy that they found responded to K but not KBr. The difference between the filaments’ readings enabled them to obtain an angular distribution of KBr. Because I’d learned from Schiff’s quantum mechanics book about the transformation between lab and center-of-mass frames involved in collisions, I understood right away that they’d chosen an inappropriate reaction. What matters in collisions is the relative motion as viewed from the center of mass. In the K + HBr reaction to form KBr + H, the detected product, KBr, essentially coincides with the center of mass because it is so much heavier than the H atom. The KBr can only go where the center of mass goes, so you don’t learn anything about the relative motion of the reaction products. That’s what you need in order to learn about the forces involved in the chemical reaction. I remembered a high school physics problem, perhaps posed by Mr. Drummond: What happens to the center of mass when a shell explodes in mid-flight? Of course, it doesn’t affect it at all; momentum conservation requires the center of mass of the fragments to be the same as for an unexploded shell. The same thing applies to reactive collisions of molecules. Actually, right after I turned in my Ph.D. thesis, in May 1958, I visited Taylor and Datz in Oak Ridge, and discovered they really didn’t understand about transforming from the lab to the center-of-mass reference frame. One of the first things I did as a Junior Fellow was work on elementary kinematics, to see what you could learn just from conservation laws.
Kinematics is independent of forces; it just depends on masses and velocities. I came up with a simple diagram to exhibit the constraints imposed by conservation of energy and linear momentum. I called it a Newton diagram just for fun, because physicists talked so much about Feynman diagrams. Constructing a Newton diagram requires only Newtonian mechanics. At a glance, it shows the range of angles and velocities with which the products of a reactive collision of two beams can emerge. If a product appeared uniformly in angle and energy, it would spread over the whole allowed region. If the reaction dynamics caused the product to come out in a preferred direction or with preferred translational or internal energy, it would appear in corresponding regions of the diagram. As it was like mapping out a spectrum, we called it collisional or translational spectroscopy.
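The conservation-law constraints behind a Newton diagram are easy to sketch numerically. The following is a minimal illustration with rounded masses, illustrative beam speeds, and a function name of my own choosing, not code from the original work:

```python
import math

AMU = 1.66053906660e-27  # kg per atomic mass unit

def newton_diagram(m1, v1, m2, v2, m_prod, m_partner, E_release):
    """Kinematic limits for beams crossed at 90 degrees (beam 1 along x, beam 2 along y).

    Masses in amu, speeds in m/s, E_release in joules per collision.
    Returns the center-of-mass velocity vector and the maximum speed the
    detected product can have relative to the center of mass.
    """
    M = m1 + m2
    v_cm = (m1 * v1 / M, m2 * v2 / M)            # momentum conservation
    v_rel = math.hypot(v1, v2)                   # relative speed for a 90-deg crossing
    mu = m1 * m2 / M * AMU                       # reduced mass of reactants (kg)
    E_avail = 0.5 * mu * v_rel**2 + E_release    # energy conservation
    mu_p = m_prod * m_partner / M * AMU          # reduced mass of products (kg)
    u_rel = math.sqrt(2.0 * E_avail / mu_p)      # relative speed of separating products
    u_prod = (m_partner / M) * u_rel             # detected product's share (momentum)
    return v_cm, u_prod

# K (39 amu) + CH3I (142 amu) -> KI (166 amu) + CH3 (15 amu), with
# thermal beam speeds and an illustrative ~1 eV energy release
v_cm, u_max = newton_diagram(39, 800.0, 142, 400.0, 166, 15, 1.6e-19)
```

In the lab frame the product then appears within a circle of radius `u_max` centered on the tip of `v_cm`; that circle is the allowed region the diagram displays at a glance.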
That happened during your junior fellow year?
Yes, I had figured out all of that. That helped in designing the first experiments.
But the critical question is, and was, intensity. And your friend Johnston and Kistiakowsky, when you left for Berkeley, sort of scoffed at your idea and the chemists quite generally were very skeptical about your getting any results from cross-beams. What made you…?
Kistiakowsky said it very well. He said, “Well, there are no collisions in a molecular beam, so when you cross another beam with no collisions, there are still no collisions.”
Yes, you have quoted that.
Yes, I’ve quoted it, and it was true in a first approximation.
What gave you confidence that they were wrong?
Again, experience with microwave spectroscopy helped a lot. Most of the molecules chemists want to study are asymmetric tops with thousands of lines in the microwave spectrum. That dismays physicists. But you learn that you have to calculate beforehand where the lines that you can identify from their Stark effect are likely to be found. Starting with an approximate structure, you calculate and then search. Once you get assignments for three lines, you can evaluate in the rigid rotor approximation the three moments of inertia and from them predict all the others. At the next stage, you take account of centrifugal distortion, with more calculations and eventually can check everything nicely because you have so many lines to test the assignments. With that background, when you go into a new field you calculate everything you can. Otto Stern advocated that approach too. As I learned much later, Stern was oriented as a theorist because he worked with Einstein as a post-doc, and he loved to calculate all he could pertaining to his experiments.
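The three-line bootstrap described above can be illustrated in the rigid-rotor approximation (a sketch of the idea, not Wilson-group code): for an asymmetric top, the three J = 0 → 1 transitions occur at B + C, A + C, and A + B, so three assigned lines fix the rotational constants, which then predict the rest of the spectrum. The example frequencies are invented:

```python
def rotational_constants(nu_101, nu_111, nu_110):
    """Rigid-rotor constants (same units as the input frequencies) from the
    three J = 0 -> 1 transitions of an asymmetric top:
        nu(1_01 <- 0_00) = B + C
        nu(1_11 <- 0_00) = A + C
        nu(1_10 <- 0_00) = A + B
    """
    A = (nu_111 + nu_110 - nu_101) / 2.0
    B = (nu_101 + nu_110 - nu_111) / 2.0
    C = (nu_101 + nu_111 - nu_110) / 2.0
    return A, B, C

# invented example: A = 10, B = 6, C = 4 (GHz) gives lines at 10, 14, 16 GHz
A, B, C = rotational_constants(10.0, 14.0, 16.0)  # -> (10.0, 6.0, 4.0)
```

With A, B, C in hand, every other rigid-rotor line position follows, which is exactly the calculate-then-search strategy described.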
[phone ringing] We’ll stop a minute. Okay, we’re back on. Go ahead.
So I guess I should be arriving at Berkeley right about now.
Yes, and your first experiment at Berkeley in 1960 was potassium and methyl iodide.
Was that choice an experimental choice due to detection issues or was it a calculation that shows that particular —
Kinematic aspects determined the choice. Nobody knew yet how the products would come out in a crossed beam experiment. If the products came out randomly, uniformly scattered with respect to the center of mass, then of course the intensity would be diluted a great deal. So I chose K plus methyl iodide because the methyl group is heavy enough that the potassium iodide formed can get far enough away from the center of mass for us to tell whether it had a preferred direction with respect to the center of mass, but not so far away that the intensity would be spread so wide as to make detection difficult. How far away a product can get is determined by the ratio of its mass to that of its partner product. By picking a methyl group as partner product, I knew the KI product should be confined to a specific region allowed by the conservation laws. Otherwise, if the partner product were too heavy, the detected product could emerge into 4 pi steradians, and the intensity would get too low to detect. If the partner were too light, the case with K + HBr, detecting the heavy product would not give any dynamical information. So K + CH3I was a “Goldilocks” choice. Of course, we started with an alkali atom reaction in order to use the simple two-filament surface ionization device of Taylor and Datz. A hot tungsten wire, as shown in Otto Stern’s lab, is a very sensitive detector. It just turns every alkali atom that hits it into an ion. Of course, you have to detect the difference between an elastically scattered potassium atom and a reactively scattered potassium salt. They both give potassium ions. And the key thing that Taylor and Datz had found was that if they used a platinum filament, for mysterious reasons only later clarified, it didn’t ionize the salt but still ionized the atom, so you had to take the difference between readings on the tungsten and platinum filaments.
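The mass-ratio argument is easy to make quantitative. In the center-of-mass frame, momentum conservation gives the detected product a fraction m_partner / (m_partner + m_product) of the products’ relative speed. A sketch with rounded atomic masses (the function name is mine):

```python
def recoil_fraction(m_partner, m_product):
    """Fraction of the products' relative speed carried by the detected
    product, from momentum conservation in the center-of-mass frame."""
    return m_partner / (m_partner + m_product)

# K + HBr -> KBr + H: a 1-amu partner pins KBr to the center of mass
f_kbr = recoil_fraction(1, 119)    # about 1/120
# K + CH3I -> KI + CH3: a 15-amu partner lets KI recoil roughly 10x farther
f_ki = recoil_fraction(15, 166)    # about 15/181
```

The roughly tenfold difference between the two fractions is the “Goldilocks” point: KI recoils far enough from the center of mass to reveal a preferred direction, while KBr from HBr cannot.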
That’s how Taylor and Datz got the KBr distribution, but it was a small difference between monotonic curves, just a little dinky thing. I remember showing this to Kistiakowsky. And they had reasoned in some funny way that led them to conclude that their data said (which it didn’t) that the H end of HBr was more reactive than the bromine end. That didn’t make sense chemically. It really was a misinterpretation of their data. Kistiakowsky pointed to that, as well as the small difference between much bigger signals, as a reason not to believe their experimental results. By picking a methyl group instead of a hydrogen atom in our experiment, it meant that if the reaction turned out to be as it did — the so-called rebound reaction, where the reactants come together and then there’s a strong force that throws the products apart — the KI should be out in a region where it would appear as a bump on the tungsten readings even without subtracting the platinum readings. That’s exactly what we found. The location of that bump immediately told us that KI was recoiling into the backward hemisphere with respect to the direction of approach of the reactants. So we learned immediately the thing we most wanted to know just from the location of that bump. Now, of course, I’d give talks in the early days about this. Audiences of chemists were used to spectroscopy or thermodynamics; they were naturally dubious about velocity vector diagrams and kinematics. Newton diagrams looked abstruse to them, which was really ridiculous, but that’s the way it was. They’d see this little bump and this guy so excited trying to explain what could be learned from it. Many of them must have wondered what the fuss was all about. But as results soon followed for other reactions and young theorists got interested…
But that bump came right away.
But the bump came right away, and I think it surprised lots of people. First of all, I didn’t really realize when I went to Berkeley that, like Caltech, they were looking for theorists, and they thought I was a theorist. They thought they were sort of humoring me when I said I wanted to do these beam experiments. So they actually gave me what we would call startup funds; a modest amount of money. It was actually, I found out later, a third of what they’d given to another assistant professor who came to do experiments. It was still adequate. At any rate, I think when it all worked so nicely, they were very surprised. That fall — I’d been there a year — the physics department was the first to invite me to give a talk. Of course, I’d been interacting with Bill Nierenberg’s group. So I gave this talk in physics, and, of course, I began by talking about Otto Stern. I wrote his name on the board and I noticed a couple of faculty members in the front row saying something to each other, and afterwards one of them asked me, “Did you know Otto Stern was in the audience?” Of course, I was dumbfounded. I knew about his 1920 experiment on velocity distribution, and the 1922 Stern and Gerlach experiment and the rest, but I didn’t have any idea he was still alive, much less at Berkeley. He had retired to Berkeley because a niece of his was married to David Templeton, a faculty member in chemistry. So Howard Shugart, who was then an assistant professor in physics, kindly set up a little tea party so we could meet Stern. I was there with my graduate students. Stern at first was very shy, but the more tea he drank the more stories came out, and I’ve enjoyed so much passing them on. I could tell you some now or we could come back to it, if you’d like to hear some of Otto Stern’s stories. I only wish that I had been more aggressive and tracked him down to learn more.
You’d have had more stories.
Yes, I would have gotten some more stories.
Let me ask this. Your first equipment, you fixed the detector and rotated the incoming beam. That seems so crazy. Why did you not fix the ingoing beams and rotate the detector?
Well, in this first apparatus, the beams are very short. Again, intensity. One beam was a little more than a centimeter long, the other one about six or seven centimeters. The detector was about ten centimeters away. It was just the pair of surface ionization filaments. By having the detector stationary, I had in mind that we would want to go on and do things like putting a Stern-Gerlach magnet in, if it turned out that for other reactions this differential surface detector didn’t work. And in fact that’s what happened. We found when we went on to reactions with bromine or iodine, the surface ionization detector was poisoned so badly it was almost useless. It took us a long time, in fact about two years, before we figured out how to cope with that. In the meantime, we installed a Stern-Gerlach magnet to tell the difference between alkali halide molecules, which are essentially nonmagnetic, so don’t deflect, and alkali atoms, which deflect like mad. I always loved to do the deflection experiments, incidentally, because you could just turn a knob and wham! You’d see a big effect right there; instant gratification. And later we installed an electric deflection field ahead of the detector. I had wanted to measure the deflection in electric fields, the electric analog of the Stern-Gerlach experiment, to get information about the rotational energy of the alkali halide product molecules and even the direction of the rotational angular momentum vector. So we did all of that. It’s much easier if your detector is fixed. So we had the beam sources mounted on a rotating lid to scan the product angular distribution. That was a prime feature of that first apparatus. We called it Big Bertha, because when we got to Berkeley we’d found that they called leak detectors Annie, so we chose Big Bertha, and then as we added apparatus, assigned names alphabetically. Charity was the next apparatus. Dodo — appropriately named, as it turned out — was the next one, and so forth.
When the molecules collide, energy has various channels it can go into, but to what extent do these other channels in the frame of the molecule get shielded by the electrons on the outside? Is it the electronic excitations that drive this whole thing, mostly?
Of course, all of chemistry is governed by electronic interactions and rearrangements that determine the chemical bonding. But at the energies of the collisions we’re speaking of now — we later did study higher energy collisions — electronic excitation doesn’t occur. The beams come together with just thermal energy, I guess physicists would say, typically only a tenth of an electron volt collision energy. Of course, chemists would say a couple of kilocalories per mole of translational energy. Those are weak energies compared with the strength of chemical bonds. At first, we had to study reactions that did not have any appreciable activation energy barriers to rearranging the bonds. On a potential energy surface, the reaction would just go downhill. That’s one reason we started with alkali atom reactions, which typically are not inhibited by barriers. But the most important reason, of course, was that for alkalies we had the wonderfully sensitive surface ionization detector. People called this early work “the Lunatic Fringe of Chemical Kinetics,” to our delight. Because alkali atoms are exceptionally reactive with halogens, the reactions we could study in crossed beams were thought to be idiosyncratic, unrepresentative of chemistry in general. Yet the alkali reactions were a blessing. At the beginning they were the only reactions we could do from a practical experimental point of view, but they were also ideal from a theoretical point of view. You have an atom glad to donate its extra valence electron (potassium, sodium…) attacking a molecule (methyl iodide, bromine…) that wants to accept an electron. The reaction forms an alkali halide, which is really ionic, an alkali cation combined with a halogen anion. So a covalent bond in the target molecule is converted to an ionic bond. The ionic bond formed is usually considerably stronger than the reactant covalent bond, resulting in what chemists call an exothermic reaction.
A great advantage provided by using molecular beams is that you can observe the products freshly formed, as under beam conditions they do not suffer further collisions on the way to the detector. A basic question that then becomes accessible is energy disposal. How is the exothermicity, the extra chemical energy released because you’d formed a stronger bond than you’d started with, distributed among translational motion, vibrational motion, and rotational motion in the reaction products? We first looked at a different question, to find out if there are preferred directions in which the product emerges. That was the question we could get to most simply, most primitively. Then we added (again, it was an advantage to have a stationary detector) velocity analysis of the products, to get information about the energy released into translational motion. Then, as mentioned already, we used electric fields to get information about the rotational energy disposal. The product alkali halides are very polar, so they interact strongly with an electric field. Then, knowing the energy supplied by the reactants and the bond energies of the reactants and products, we could derive what went into vibrational excitation of the products. Later, laser spectroscopy provided more directly much more detailed information about the vibrational states of the product molecules and also further information about rotation, including the spatial orientation of product rotation. Even with deflection experiments we got some nice information about orientation of the rotational angular momentum. That was a special project I started on my own. I drew all the plans and personally assembled the fields, in a configuration that enabled the orientation of the deflecting gradient to be rotated with respect to the scattering plane. Thereby we could study how the angular momentum of product molecules is oriented in space.
A lot of the background for this I’d gotten from working on microwave spectroscopy with Bright Wilson. Between Hal Johnston and Bright Wilson, I was very well prepared.
It was early when you built the second chamber where you took the product of one collision —
Oh, yes, our triple-beam experiment; that’s one of my favorites.
Didn’t that compound the intensity issue?
Ah, but it was quite feasible because what we needed to detect, in order to answer the question we were after, was emission from an electronically excited potassium atom. That emission is very strong and the background very low. The triple beam experiment was undertaken to clear up questions posed in the 1920s about conversion of vibrational energy into electronic transitions. More than 40 years later, we were able to do it. We had found that for quite a few reactions — potassium plus Br2 was one — the product, in that case potassium bromide, was very highly vibrationally excited. Most of the difference in energy between forming the new bond and breaking the old one went into vibrational excitation of the new bond. In fact, about two electron volts worth or 45 kilocalories per mole.
Wait just a minute. The energy into that new bond came from what?
When potassium reacts with bromine, the bromine bond strength in chemists’ units is about 45 kilocalories per mole, about two electron volts. Then you form KBr, which has something close to 92 kilocalories per mole, or about four electron volts. It’s a much stronger bond, so the difference is released into modes of motion of the products: translation, vibration, rotation. Well, in that case, it turns out it’s mostly in vibration. You can give theoretical reasons why you might expect that. In any case, it meant we had a way of making highly vibrationally excited product molecules. That harks back to Michael Polanyi’s experiments in the 1920s on alkali atom reactions. He had a tube about a meter long filled with argon. He would diffuse sodium atoms in one end, so get a concentration gradient decreasing as you went down the tube. From the other end he’d diffuse some halogen species, like bromine, and thus get a concentration gradient of the other reactant, in the opposite direction. In the middle, where the joint probability of sodium and bromine was highest, of course you get a deposit of sodium bromide as a salt. From the width of the salt deposit Polanyi actually could infer, since he knew the diffusion rate, how fast the reaction went. He used this elegantly simple method to do the first systematic survey of how reaction rates varied with changes in the structure of the reactant molecule. His method was called diffusion flames. Why? Because, actually, there was bright light emitted, the familiar yellow light from excited sodium atoms. He was able to show that the emission involves sodium dimers, which are present at about one percent. The dimers react with bromine atoms that are released in the primary reaction. That secondary reaction produces an excited sodium atom that promptly emits the yellow light. The dimer bond is much weaker than the sodium bromide bond, so the reaction releases enough energy to excite sodium.
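The energy bookkeeping here is simple to check. A sketch with rounded bond strengths (values approximate, conversion factor about 23.06 kcal/mol per electron volt):

```python
KCAL_PER_EV = 23.06          # kcal/mol corresponding to 1 eV per molecule

D_Br2 = 46.0                 # Br-Br bond strength, kcal/mol (approximate)
D_KBr = 91.0                 # K-Br bond strength, kcal/mol (approximate)

released = D_KBr - D_Br2               # ~45 kcal/mol freed by forming the stronger bond
released_eV = released / KCAL_PER_EV   # ~2 eV, mostly ending up as KBr vibration
```

The roughly two electron volts of release matches the figure quoted for the vibrational excitation of the KBr product.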
Eventually we did experiments with alkali dimer molecules reacting with halogens that confirmed Michael Polanyi’s work; even with crossed beams, far weaker than his diffusion scheme, we could see the emission with our “naked eyes.” Now, in the 1920s there was much speculation whether the emission could come from vibrational excitation of the alkali halide molecule being transferred in a collision to the sodium atom. Michael Polanyi thought he had settled that by adding nitrogen to his diffusion flame experiment. Comparison with other work on quenching of sodium emission by nitrogen collisions indicated that in his flames it didn’t come from vibrational transfer. This work in the 1920s was coming out about the same time as the Born-Oppenheimer approximation for separating electronic and nuclear motions and the Franck-Condon Principle, both of which recognize that electrons move much faster than the nuclei can rearrange. So that entered into the discussions of whether the nuclear motion in vibrations could be efficiently transferred to excite electronic motion. Eugene Wigner, then a grad student with Michael Polanyi, took part in the debates. We had the opportunity to test those long unresolved speculations, since we could make beams of highly vibrationally excited molecules traveling in a vacuum, free of collisions with background gas. If crossed with a beam of atoms, you might get vibrational to electronic transfer, and because the excited atom would emit a photon, you could detect that with high sensitivity. There’s no other way to make the photon, so even in a triple beam experiment you should be able to do it. Actually, it proved to be an easy experiment. It came about the following way. My first woman graduate student — her name was then Margaret Moulton and it later became Margaret Clark — was an orphan. She’d been working with a junior faculty member for three years when he left, and her experiment wasn’t panning out. So she came to me for a project for her Ph.D.
I said, “If you do something really novel and it works, you get a Ph.D. in less than a year. If you want to do something that’s surefire, it’s probably going to take you longer. You’ll still get a Ph.D. for sure, but it may take longer. There’s this notion. I think it should work.” I showed her the numbers. Margaret, in six months, did the experiment, wrote it up, and turned in her thesis. She was an excellent experimentalist. She found the vibrational energy was indeed transferred to electronic excitation very efficiently in reactive collisions but less efficiently in nonreactive collisions. I remember this experiment very much impressed Christoph Schlier, a physicist from Freiburg. I had talked about Margaret’s experiment at a meeting, and afterwards Christoph wanted to visit our lab. Well, you know how Germans are. The apparatus they build is gorgeous. I believed in over-designing the engineering part, the pumping system and all of that for the apparatus; you don’t want to have to keep fooling around with it. But under-designing the parts that you’re changing all the time from one experiment to another. Christoph wanted to see all the details. So we took off the flange and cold shields and he looked inside. To keep straight the beam flag used to interrupt a beam from time to time, Margaret had used a bobby pin from which hung a heavy nut. It was ugly but worked fine. Christoph peered intently at everything, ovens, etc., but clearly was intrigued by Margaret’s beam flag. Then he said, “I have learned something from this. You don’t need to have a beautiful apparatus to do a beautiful experiment.”
All right, we’re back from lunch. We’ve taken a break and now we’re ready to start again. It’s still the 21st. Let me ask you a general question and then I’m going to talk to you starting about the second machine. The first question is a more general one. When reactants come together, there is a moment, a fleeting moment, when they have not lost their identity and there are products that have not really gotten their identity, and for this moment they kind of dance together. What have you learned about that dance? What have you learned about this in-between moment?
Of course, that’s a domain that chemists refer to as the activated complex or transition state, and it’s thought to be this fleeting thing as it goes over typically a barrier between reactants and products. The transit takes of the order of 10^-13 seconds, typically. That’s the kind of picture chemists have. What we observed, of course, in the kind of experiments we were doing, is what you observe in the final asymptotic state of motion of the products as they emerge from this collision. So we prepare the reactants coming together in these colliding beams and then we measure these asymptotic properties after they’ve flown far enough apart so they’re just molecules traveling through the vacuum. So we really measure things that connect the initial to final state. There is a lot of information there because there are no collisions other than the one that creates the products. That’s the key thing. It’s a single-collision phenomenon. Soon Martin Karplus, Don Bunker, John Polanyi and others carried out classical trajectory calculations to evaluate the asymptotic properties for comparison with the beam experiments. Later, as computers developed further, quantum scattering calculations became feasible too, for three- or four-atom systems. Following an approach pioneered in the 1930s by Henry Eyring and Michael Polanyi, a potential surface of qualitatively realistic form could be constructed. Then the dynamical properties calculated for the surface, such as the asymptotic energy disposal and product angular distributions, were compared with experimental results. Later, a priori electronic structure calculations improved greatly, providing potential surfaces derived from first-principles theory rather than empirical data and correlations. That remains feasible chiefly just for three-atom systems with not too many electrons. Such surfaces of course give particularly welcome information about the transition state region you ask about.
In practice, the reliability of the potential surface can be assessed by comparing how well it predicts the experimentally observed asymptotic properties; if the agreement is good, we have some confidence that properties predicted for the transition state region are realistic. For some of the asymptotic properties, the connection with major features of the potential surface is pretty well understood qualitatively. For instance, some reactions release most of the exoergicity into relative translation as the products separate. There are others, and I mentioned an example earlier, where most of the energy went into vibration. For such cases, the energy disposal correlates well with location of the transition state early or late in the entrance valley of the potential surface. John Polanyi documented such correlations very well. Yet colliding beam experiments ordinarily observe just the asymptotic properties. To directly observe the transition state region requires means to look very quickly! Zewail’s work with femtosecond lasers has achieved that, as in effect he gets snapshots on the time scale of the transition state duration.
This is the guy at Caltech?
Caltech, that’s right. Ahmed Zewail. And of course that came years later and had to await the development of femtosecond lasers.
In conceptual terms, do you think of there being something like kind of quasi-bonds between the combination of reactants and products?
That’s right. You’re melting the old bonds and forming the new ones. As formulated by Henry Eyring, transition state theory deals with partial bonds or fractional bonds. My Master’s thesis, done with Harold Johnston, dealt with applying Eyring’s theory using data from spectroscopy and thermodynamics to in effect interpolate properties of the partial bonds. Subsequently, Hal actually worked out a tidy procedure using empirical correlations between bond strengths and bond lengths. He called it the bond energy bond order method (BEBO). He postulated that the sum of bond orders would be maintained all the way through the reaction. So in an atom transfer reaction, A + BC to form AB + C, if B-C and A-B have single bonds, the transition state A-B-C has fractional bonds that change in the course of the atom transfer, but their sum remains one. That’s the sort of picture chemists have long had and a fair description of what actually does happen according to more sophisticated theories.
All right, in March of 1967, what happened? Do you remember? March of 1967; your new machine. You began to construct the new machine.
We began drawing plans in March, yes.
Here’s my question. I am thinking now about science in more general terms. Here you are initiating the construction of new apparatus. You were guided by your hopes as to what this new machine would do, the kind of data you were hoping to collect and understand and achieve and so forth. How did your anticipation as you constructed this compare with the reality a year later or whatever?
Well, I should say we had by then a lot of experience, but there were new challenges in going beyond what I like to call the alkali age of molecular beam chemistry. Yet the new apparatus, which we named Hope, was completed in only nine months. It worked perfectly in the first experimental run, done on 22 December, 1967. That was a doubly exciting day because my wife Georgene gave birth a few hours earlier to our second daughter, Brenda.
And there was a transition between that old machine and this one out of the alkali age, in a sense.
Yes, the significance of this new machine was to take us beyond the alkali age, greatly enlarging our chemical scope. We were not limited anymore to that eccentric family of reactions where the promiscuous valence electrons of alkali atoms hop onto unusually electrophilic molecules such as halogens. As I mentioned earlier, up till then we were really always studying conversions of covalent to ionic bonds. We found a lot of variety that we could relate to the nature of the orbitals in the target molecule. We learned a great deal from that, and it’s lucky, from a theoretical point of view, that we studied so thoroughly the alkalis first. People had become convinced that it was really impossible to go beyond that. The alkalis have a special virtue: they trap very easily. The apparatus we’d built originally was essentially a big nitrogen trap, a copper box cooled by liquid nitrogen, so you condensed everything, both the alkali and the halogens depositing on the walls. That augmented the pumping enormously. We wanted to go beyond alkalis and study, in particular, reactions of hydrogen atoms. You have a special friendship with hydrogen and so do I. One of the first we studied with Hope was the reaction of H atoms with chlorine. It was at the center historically of a lot of developments in chemical kinetics. In crossed beams, only a small fraction of the reactants form products in the intersection zone of the beams, so reactants that don’t trap easily bounce around the apparatus. Molecules being what they are, they react in the background to form more of the product than that produced in the intersection zone. So you have first the problem to reduce the background very far down, and then you have the problem of detecting well enough. The surface ionization detector, which turns alkali species into ions with nearly 100% efficiency, doesn’t work for other species because they have much higher ionization potentials.
The best we could do was use an electron bombardment ionizer, which, for a typical molecule such as HCl, can ionize only about one in 10,000. That’s even true for laser ionization, which offers some special advantages but only came into use some years later. With Hope, it had to be electron bombardment. So it was very important to do design calculations carefully to see what you needed to do. I had a pretty good idea of what we needed to do, but it makes a big difference who is on the scene to build it. Yuan Lee had come to Berkeley in 1962 to be a graduate student, and he’d talked with me, because as I later learned, he wanted to work with me. That was at a time when we were having a lot of trouble with poisoning of the surface ionization detector, as I mentioned earlier. So, I told Yuan that I wasn’t absolutely sure we were going to be able to go much further with these reactions we were doing then with the alkalis. But I had designed the original apparatus so that, in case studying reactions fizzled out, we could do spectroscopy. Actually, from the start that was in mind, because I tried to look a little way ahead. Also, as a research mentor, I have always been reluctant to tell people what to do. I just never liked to be told what to do myself, and so I would typically say, “You might want to consider…” Yuan’s English wasn’t so good yet, and he construed that to mean that I really didn’t want him in my group. Students had to join a research group very early at Berkeley. So Yuan joined Bruce Mahan’s group and had a great experience, quickly completing his Ph.D. with an important study of alkali dimer molecules. Meanwhile, in 1963 my group had moved to Harvard. But then Yuan still wanted to work with me, so he applied to come as a post-doc. Of course, I knew he was a good experimentalist because Bruce was an old friend, so I was happy to accept Yuan.
He had built an ion molecule beam machine for Bruce, so I knew he was really good at that, but nobody could have known how extraordinary he really was until he was actually on the scene. But he was fantastic. I like to call Yuan the Mozart of chemical physics. Musicians will tell you the thing they most admire about Mozart is what he did with minimal material. He could get so much out of absolutely minimal musical material. No other composer was quite as outstanding in that regard. I could talk for a long time about Yuan. I hope I sent you my article about Yuan and his 60th birthday.
No, I don’t have that one.
It’s in J. Phys. Chem., 1997. A wonderful story. He was inspired to become a scientist by Eve Curie’s biography of her mother, which I read fairly early, too. I proposed to Yuan that it was time to build the new apparatus. He was the perfect guy to undertake it. Two first-year graduate students, Doug McDonald and Pierre LeBreton, joined the project. Meanwhile, Yuan had another completely separate project. It actually had roots in nuclear physics, because I was impressed at how much physicists did with the deuteron. Because it is a weakly bound proton plus neutron, when you collided it with a target nucleus it was much better than a proton, because you could get all kinds of reactions by breaking up the deuteron. Either the proton or neutron would stick and the other fly off. Michael Polanyi’s alkali dimers are likewise relatively weakly bound, so we could expect something analogous would happen in chemical reactions. We could sort out with our Stern-Gerlach magnet the dimers from the atoms and carry out a lot of nice reactions. In fact, we checked up on some things that Polanyi had inferred from his diffusion flame work. When we studied chlorine atoms reacting with alkali dimers, we got chemiluminescence so bright we could see it with bare eyeballs, even in a beam experiment. That was a very interesting sideline. Yuan was teamed with a grad student, Robert Gordon, working on reactions of alkali dimers with hydrogen atoms. That came out very well; it’s a very interesting story but I’ll leave it aside. Just wanted to mention that went on while we started designing Hope. Yuan taught himself how to draw plans for machinists when he was at Berkeley. So we would talk about things and Yuan would start drawing them up. We went to George Pisiello, foreman of the chemistry machine shop. We wanted to get going right away because Yuan wasn’t going to be here more than two years. George had never been asked to build a major machine like that.
Bill Klemperer had built a sophisticated beam apparatus to do electric resonance spectroscopy. It’s known as the Wharton Machine, designed by a remarkable grad student, Len Wharton. It proved very important, opening up a wonderfully fruitful field for molecular beam spectroscopy. But Bill and Len opted to commission an outside company to build this complicated apparatus. I preferred to have the local guy see what he could do. Well, it was a perfect match again because George Pisiello was a very proud man and an extremely good machinist, and he had some good colleagues in his shop. George was overjoyed to tackle such a project. I can’t stress too much how valuable George was. For example, we needed a pretty big vacuum chamber, too big for the ordinary milling machine. He phoned around and found that some outfit was getting a brand new milling machine — usually there’s a long queue for these large-sized machines — and he got first in line by talking to somebody, or maybe bribing a bit. George had great admiration for Yuan and it was mutual. With their leadership and enthusiastic work by all hands, Hope was built in less than nine months from the first drawings. As mentioned already, Hope worked perfectly and the very first experiment gave just what it should have. When Yuan later went to Chicago and to Berkeley, where they have tremendous machine shop facilities, it still took him twice as long to build what were more or less replicas of Hope with a few improvements. You see, it was the capability and enthusiasm of the people that were so special. Another special aspect was I had applied in vain to NSF for money to build an apparatus. In those days the idea of building your own apparatus of this magnitude to do chemistry was just completely beyond the pale. They wouldn’t give you money for that. If you wanted to buy one, yes, but not to build one. So basically we borrowed the money from Ron Vanelli, Director of the Harvard Chemistry Laboratories.
Ron likes to tell the story of how every once in a while he would come down to the lab to check up and ask how this project was going and said, “If this doesn’t work, you’re going to have to shovel an awful lot of snow.” It wound up costing $80,000 to build Hope, the whole works; electronics, the machining, everything. That was about a year of my NSF grant at that time, to run my whole research group. Salaries are always most of the grant, so it took me several years to pay Vanelli back. Nowadays you couldn’t do it because people are so uptight about accountability. But because of Ron Vanelli and George Pisiello, we were able to do it at Harvard. You couldn’t do it at Berkeley, for example, as I know from my time there. They had so much administration that we would have had to do everything via the proper channels, have plans drawn by a professional, etc. That goes much more slowly. It’s an impedance matching question.
Does that machine still exist?
Yes, it’s in Taiwan. It’s in a museum in Taiwan. I thought it was appropriate to be in Yuan’s country, and when I learned they would like to have it I said, “Great.” So that’s where it is. Nobody around here wanted it.
Did that make you feel bad to see that go out of here?
Well, I asked around, and as expected, nobody was interested. You know, I visited Columbia once to give a seminar and I asked to see Rabi’s apparatus. Somebody said, “Oh, it’s around here somewhere.” They found it in a corner. It was all dusty. I didn’t expect people would revere our Hope anymore at its birthplace. Anyway, I thought it was appropriate to send Hope to Taiwan because that was Yuan’s homeland. My role in the building of Hope was chiefly cheerleading. Yuan knew every nuance of machining. He’d done a lot of it himself at Berkeley. He would draw these plans and George would look them over, often suggesting improvements. Sometimes the machinist would make a mistake and then we’d just modify the next part of the apparatus to fit the mistake. It was really fun. The project was done on the fly, building it while it was being designed and all. It was really great fun. You can imagine our exhilaration when Hope worked perfectly, giving lovely results for the first reaction we tried, chlorine atoms plus bromine molecules. I thought it was the optimum choice for experimental reasons. But the literature said the reaction cross-section was ten times smaller than I thought it probably was. The published value was extracted from photochemical data, requiring interpretation in terms of a multi-step mechanism. I was convinced it couldn’t be that small, but I didn’t tell my students, because if it were that small our first experiment would have been more difficult. As it was, we found out immediately that the cross section was 20 or 30 times bigger than reported in the literature. Later the photochemical studies were repeated and got agreement with our results. The product angular distribution we observed was particularly interesting, showing clear evidence for short-range attractive interaction of Cl and Br2. In heuristic chemical terms, that was akin to the known stability of some trihalide complexes.
Tell me what the significance was of the supersonic nozzles.
That played a very important role.
But it sounds counterintuitive. It sounds like your beam is coming out of there at supersonic speeds, high speeds.
Yes, but the high speed is not all that high. It’s just faster than the speed of sound in the static gas, so it’s at most the square root of three higher than an ordinary thermal velocity at whatever temperature.
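[The modest speed-up described here can be checked against the standard ideal-gas result for the terminal speed of a supersonic expansion. This is only an illustrative sketch added in editing; the formula used, the choice of heat capacity ratio γ, and the helium example are assumptions, not figures from the interview.]

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def terminal_speed(gamma, T0, m):
    """Terminal flow speed of an ideal-gas supersonic expansion from a
    source at temperature T0: the thermal enthalpy goes into directed flow."""
    return math.sqrt(2.0 * gamma / (gamma - 1.0) * k_B * T0 / m)

def most_probable_speed(T0, m):
    """Most probable speed in a Maxwell-Boltzmann gas at temperature T0."""
    return math.sqrt(2.0 * k_B * T0 / m)

# Helium (monatomic, gamma = 5/3) expanding from a 300 K source:
m_He = 4.0026 * 1.6605e-27  # kg per atom
ratio = terminal_speed(5.0 / 3.0, 300.0, m_He) / most_probable_speed(300.0, m_He)
print(round(ratio, 2))  # faster than thermal, but only by a modest factor
```

[The ratio reduces to √(γ/(γ−1)), independent of gas and temperature, so the beam is indeed faster than thermal motion only by a small factor of order √3, as stated.]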
So why were these important? Why was this important?
Well, the way I like to say it is this. If you read the physics literature, going back to Otto Stern, it emphasizes having a low enough pressure inside the beam source so that the mean free path, the distance between collisions within the source, is large compared with the size of the orifice from which the molecules emerge to form the beam. That way you get a sample of the gas within the source, undistorted by collisions in the exit orifice. But of course as chemists, from the very beginning we had to push hard for intensity. So we always had our pressure up to the point where the mean free path was comparable to or even much less than the dimensions of the exit orifice. Chemical engineers had gotten quite interested in these supersonic beams, or free jets as they’re called, in which the pressure’s very high inside the source. The source is just a pinhole, and there are lots of collisions as the molecules come out. That not only enhances intensity because the pressure’s so high inside the source; the collisions in effect organize the beam. As the colliding molecules flow through the pinhole orifice, their velocity spread is narrowed and the velocity shifted upwards. The collisions act to channel the random translational motions that otherwise go in other directions into the forward direction.
What’s the dimension of the channel, of the orifice? Is it a millimeter long?
The depth can be a few mm, but is usually less; the depth for our purposes is not very important. What’s more important is the diameter of the nozzle, which is usually just a few microns.
But the depth would channel it.
Channeling can help. We used channeling a lot in our early work. We’d build these things we’d call Gatling gun sources, where we have wrinkled foils. So you had a whole series of little orifices to get maximum output. But the supersonic beam is operated a little differently. I like to say it this way. In Boston there’s a department store, Filene’s, that’s famous for Saturday morning sales. A big crowd gathers outside as they’re anticipating sales and are very excited. When the doors are thrown open — that’s like opening the orifice — the crowd rushes in. People collide. Everybody flows in the same direction and about the same velocity whether they want to or not. If there’s somebody in the crowd excited at the prospect of a bargain, they might be turning handstands, like rotating molecules, or waving their arms, like vibrating molecules. Such people suffer more collisions, maybe even black eyes and bloody noses. So, most everybody gets calmed down to the average energy of the whole flow. That’s what happens in the supersonic expansion.
That’s a good analogy.
A bit corny, but I’ve said it many times. Another advantage is if you want to accelerate the molecules, you can have heavy molecules seeded, as we say, in a great excess of light molecules. Then the light molecules collide with the heavy ones and bring them up to about the same velocity, so of course the heavy ones then have much more kinetic energy. Or if you want to slow light molecules you can seed them in an excess of heavy molecules. Both of these seeding techniques have been widely used. As I like to say, in molecular biology, the great lesson was to let the bugs do the work, then you pick up the pieces to find out how they snipped up DNA. Here it’s the same principle; the molecules can via collisions produce a beam collimated in velocity and direction better than we can by mechanical means. That and the enhanced intensity were blessings. The supersonic beam was a very important part of the success of Hope; people liked to call it a super-machine, though we never called it that. The other key things were the differential pumping to reduce the background and a “pass-through” design of the detector region. I teased Yuan a bit, saying “Maybe it was destined thousands of years ago in China that you would build this apparatus, because your ancestors liked to carve puzzles that had balls within balls within balls.” When product molecules enter the electron bombardment region, only one in 10,000 is going to be ionized. If there’s something that interferes or blocks the other 9,999, they’re going to bounce around and create a horrible background right in the inner sanctum region of the detector. That’s what happened in some other intended super-machines, because the designers were focused on the signal they wanted to generate. It’s much more important to think of the noise. The signal can take care of itself if you take care of the noise. What you want to do is make sure those 9,999 out of every 10,000 molecules that don’t get ionized just leave the scene.
The best way is to take advantage of Newton’s first law. Hope had differential pumping in three stages on the way in to reduce the background in the electron bombardment region, by a million times compared with what it is in the collision region of the apparatus where the beams are crossing. Hope had the same thing on the way out. So the electron bombardment region was nested within three differentially pumped auxiliary regions, like the nested Chinese puzzles. That was one of the key things that made Hope work so well. We wrote an article right away for the Review of Scientific Instruments, because other people had tried and some had become convinced it was hopeless. Other labs that I won’t mention by name spent a lot of effort and money trying to build super-machines. They put in nice features, but missed key things. In this game, if you get the key things right then you can use hairpins to hang beam flags and such.
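[The million-fold figure follows from the fact that the pressure reductions of nested, differentially pumped stages multiply. A minimal sketch added in editing; the per-stage factor of about 100 is an illustrative assumption, not a figure from the interview.]

```python
from math import prod

def net_attenuation(stage_factors):
    """Net background reduction from nested differentially pumped stages:
    each stage's pressure-reduction factor multiplies the others."""
    return prod(stage_factors)

# Three hypothetical stages of ~100x each reproduce the ~10^6 overall
# background reduction described for the detector region of Hope.
print(net_attenuation([100.0, 100.0, 100.0]))  # 1000000.0
```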
Other things are forgiving.
You had to do the calculations to convince yourself what’s important and what isn’t. Yuan has gone on to build many other beautiful instruments. He’s a truly great experimentalist. It was just incredibly lucky that he came to our lab at just the right time. Yuan would say after we had one of our planning sessions, “Should be all right.” After 18 months at Harvard, he went on, and did marvelous work, first at Chicago and then Berkeley. After he left, we put on Hope a photo of Yuan, with a little balloon, like a cartoon, saying “Should be all right.” Doug McDonald and Pierre LeBreton carried on, followed by several generations of students who carried out studies of a whole series of basic reactions. Extending molecular beam collision experiments beyond the alkali age advanced considerably the basic aim, to understand how electronic structure governs reaction dynamics. The ability to study reactions in single collisions gave access to dynamical properties that revealed a lot about the forces involved in breaking and making chemical bonds. This linked to the saga led by Pauling in molecular structure. Geometric structure, bond lengths, angles, and all of that are so important to chemical thinking. Dynamics is much tougher because bonds are changing, so you want to see how the changes are related to electronic structure. Well we had really reached a satisfying understanding of the alkali atom reactions. We could often predict from the electronic spectra of the reactant molecule what would happen. The key thing to know was the nature of the molecular orbital that received the transferred electron. If that orbital had a node between the originally bonded atoms, its location often had a major role in the dynamics. When we gained the ability to study a number of prototype reactions that involved only covalent bonds, the electron transfers were more subtle yet things learned from the eccentric alkali family turned out to be more pertinent than expected.
Okay, we were interrupted. Dudley was writing his tombstone. Go ahead.
I just said if there’s going to be something stuck on my tombstone I would like it to be what came to be called a Newton Diagram, which is a plot of the product distribution in angle and velocity of the first reaction we studied, potassium plus methyl iodide. The potassium iodide distribution, displayed as a contour map, looks like a crescent moon. The KI recoils backwards with respect to the center of mass and the direction of the relative velocity. If you trace the contours you see that the velocity distribution is bell shaped and about the same in all recoil directions. The peak velocity corresponds to a large release of energy into product translation. We have lots of maps like this for different reactions and they contain key information about the nature of the forces. Another favorite reaction, among the first we studied with Hope, was hydrogen atom plus chlorine molecule. The contour map for the HCl product is virtually identical to that for KI from potassium plus methyl iodide, except for the velocity scale, which is entirely due to mass factors and the energy release arising from differences in bond strengths. The H + Cl2 reaction was the prototype case for John Polanyi’s development of his infrared chemiluminescence method. It seems uncanny that its velocity-angle contour map for forming HCl is essentially the same as that for forming KI from K + CH3I. The recoil of the product KI so prominent in the map is understood in terms of the “harpooning” mechanism proposed by Michael Polanyi. The electron transferred from the potassium atom lands in a strongly anti-bonding molecular orbital of methyl iodide. It has a node between the carbon and iodine atoms so that gives strong repulsion. That’s like photodissociation, where an electron is transferred from a lower lying orbital up to the same anti-bonding orbital that receives the harpooning electron. That’s why for many alkali atom reactions the energy release could be related to the spectrum of the target molecule.
Now the H + Cl2 reaction certainly does not involve harpooning. The H atom is far less willing to transfer an electron than is an alkali atom. Yet, examination of the molecular orbitals for H-Cl-Cl shows that the frontier orbital into which the H atom electron enters is indeed anti-bonding with respect to the Cl atoms and has a node about halfway between them. So that is the heuristic explanation of why the recoil map for HCl is so similar, despite the lack of harpooning, to the map for KI from the methyl iodide reaction. This view is supported nicely by comparison with a contour map for photodissociation of chlorine, which shows recoil energy release very similar to that in the H + Cl2 reaction. Moreover, simple molecular orbital calculations predict that in the analogous reactions of H atoms with bromine and iodine, the repulsion would lessen and the HX angular distribution shift to sideways rather than backwards. That’s just what was found in the experiments.
So what’s on your tombstone?
The tombstone could have the contour maps for the H + Cl2 and K + CH3I reactions. I’ve called them “kissing cousins” because of their “first-born” roles in John Polanyi’s work and in ours. The congruence between the maps shows that the 19th century notation we still use to write down chemical reactions is very misleading. The maps say the reaction dynamics of these cousins is actually the same, even though one involves just covalent bonds, the other transition from covalent to ionic. So this gives us a new perspective: we need a notation that helps us to understand and characterize dynamics. Then we could recognize cousins among nominally very different reactions. The contour maps for the cousins have appeared in several review articles. The one I’m showing you now came from a talk I gave at the 90th birthday party for Pauling, in 1992. I focused it on things we’d learned from Pauling. First came a picture depicting the development of physical chemistry in the 20th century. This was from my Nobel Prize talk and emphasized cultural changes. The broad foundational era of thermodynamics concerned with macroscopic phenomena remained dominant until the 1920s. Then came the era of molecular structure that prepared the way for the era of molecular level dynamics. I went on to illustrate links to three of Pauling’s favorite themes in electronic structure: electronegativity, hybridization, and resonance. Electronegativity is not rigorously defined but very valuable in heuristic chemical thinking. It is manifested in a simple way in many of the reactions we studied. Favorite examples were reactions of hydrogen atoms, halogen atoms, oxygen atoms, or methyl radicals attacking I-Cl. In each case, the attacking group bonds predominantly with the I atom, although that bond is much less strong than the bond to Cl would be. The large electronegativity difference between iodine and chlorine provides a neat explanation of that and other features of the dynamics. 
That difference makes the uppermost molecular orbitals, which are anti-bonding, predominantly I atom orbitals, as pointed out in the 1930s by Robert Mulliken in his interpretation of fine structure in spectra of I-Cl. Hybridization, another concept very familiar to chemists in the context of bonding, had a key role in creating spatial orientation of molecules for collision experiments, a topic I expect we’ll come to later. Resonance, as developed by Pauling, involved a particular form of approximate wave function. Critics had long objected, “Well, if you use a better wave function, you don’t see anything that looks like resonance.” It turned out that our dimensional scaling approach to electronic structure, another topic we’ll discuss later, actually shows you get resonance phenomena like Pauling’s in the large-D limit quite generally, without invoking any particular wave function form. So these things added up to a nice 90th birthday talk for Pauling. He was my scientific grandfather since he was Bright Wilson’s Ph.D. mentor.
That’s right. You said your goal was in all of this crossbeam dynamics to understand how chemical kinetics is governed by electronic structure. Have you achieved that goal, do you think?
Yes, I think that’s basically what these experiments did demonstrate. Although the interpretations of observed dynamics in terms of electronic structure were chiefly qualitative, such insights are fundamental. Of course, now electronic structure calculations have become much more incisive, largely due to computers getting more powerful. So for three atom systems without too many electrons, such as F + H2, pretty accurate potential surfaces can be calculated. Likewise, full-scale quantum scattering can be calculated using such surfaces. So in such cases the computational chemists can predict what the dynamics must be and how it stems from the electronic structure. None of these things can be done as tidily or for as many systems as we’d like, but all the pieces are there.
Do you think you’re closer, because of your work, to being able to take an unknown reactant A, and an unknown reactant B, and predict what kind of products will come out of that?
Well, we’d have to know about the electronic structures of A and B. The insights gained from results of single-collision studies of reactions have considerable scope, a lot wider than we otherwise would have. Yet, what we understand now about reaction dynamics certainly can’t be claimed to be universal. There is much more to be learned. I hope deeper and broader understanding will continue to develop at the qualitative Pauling-like level, taking advantage of the empirical correlations emerging from experiments as well as electronic structure calculations. What we did has historical value but much of it should become obsolete as better experimental and theoretical tools become available. However, the chief qualitative concepts recognized or reinforced by the dynamics experiments may long endure. Even if we ultimately have an all-powerful computer, so you just push a button and it’ll give you the answer to any question, you’d still want to understand the computer answers in terms of simplified models and heuristic thinking. Pauling’s ideas about structure are still very valuable because they changed how chemists thought about molecular structure. Likewise, the experiments and theory on single-collision dynamics did the same for molecular reaction dynamics. Many people contributed to this, bringing chemical kinetics to a fundamentally new level, moving beyond phenomenology to deal with the actual molecular interactions as governed by electronic structure. From the beginning, I thought of this as part of a historical imperative, the development of tools to enable more direct focus on chemistry at the molecular level. In this, it was wonderful fun to be among the Pied Pipers. I was especially an enthusiast for heuristic electronic structure interpretations.
People still remind me that in a talk I gave at a 1970s Gordon Conference I said, “Let us go through Robert Mulliken’s papers with notebook and harpoon.” I loved showing how much you could understand using just rudimentary molecular orbital theory. During that talk and others like it, there were always guys sitting in the front row and frowning deeply. Like me, they were attracted to molecular beams as a way to get at the physics of chemistry. You could study single collisions and learn about what’s really happening in terms of forces. They were annoyed that I was talking in terms of ugly, old MO theory; the sort of low-brow stuff they felt gave chemists a bad name. But to me such qualitative insights, familiar ideas in interpreting molecular spectra and chemical bonding, were exactly what we wanted to have for reaction dynamics. It would show the real chemists, who had to use that way of thinking, how you could learn something from what we were doing that didn’t depend on arcane theory and elaborate calculations. So we should try to extract ideas in chemical language. I’ve preached a lot about that. Some of my students pursued that theme, especially Roger Grice. But there are still quite a few people in the field who aren’t comfortable with it. Many times I’ve asked eminent chemical physicists simple questions like, “Can you predict so-and-so?” and found they were stymied because they don’t think in Pauling-like terms. Rather they say something like, “Well, if I knew the potential surface, or if I can calculate…” But in chemistry often you still can’t reduce matters to first principles. You need to call on a tool kit of heuristic thinking. That’s the analogy to impressionistic painting that I spoke of earlier.
So you don’t have the Maxwell equations yet?
Well, the Maxwell equations of chemistry, as far as we know so far, are the Schroedinger equations. As Dirac said long ago, it’s a matter of just applied mathematics. But even if you had that perfectly you’d still need qualitative, heuristic thinking. I think you need that in physics, too, in many domains.
Yes, but I do think it’s more important in chemistry.
Yes, by the nature of chemistry; it’s more like impressionistic painting.
Okay, we may come back to some of this, but I want to move on now to two steps that were taken. One, you were able to fix the dihedral angle of your reactants.
Yes, I liked that.
What came out of that?
Let me tell the story this way. I call this forbidden fruit. Everybody likes to pursue forbidden fruit! According to the Bible, Eve was the first human experimentalist. As a youngster, I read the whole Bible twice, because I’d heard a lot about the Bible and thought it must be important. The story about Eve and the apple struck me as really peculiar. I thought God was unkind to say the Tree of Knowledge was off limits for the creatures he’d just created, as he must have known that would make them all the more curious. So Eve seemed admirable, while Adam was a wimp. The forbidden fruit involving dihedral angles in collisions has to do with averaging over random initial conditions. Discussions of scattering theory, in any book I’ve ever encountered, always mention that scattering is cylindrically symmetric around the initial relative velocity vector. Consider the approach of two particles, say A and B, from a reference frame traveling with their center of mass. We can’t control whether they collide head-on or with what we call an impact parameter. That’s defined as the distance of their closest approach if you switched off the interaction, so they’d just pass by, like two cars headed in opposite directions on parallel lanes of a highway. Of course, it’s the interaction that perturbs the trajectory so A and B actually collide. But you can still describe the initial conditions in terms of whether they’re aimed directly for a head-on collision or have an impact parameter of, say, three angstroms, with a dihedral angle phi around the relative velocity vector of, say, 37 degrees. Now, in crossing beams we don’t shoot a particular particle A at a particular particle B; we shoot a whole horde of A at another horde of B. The beams are so dilute that any one particle A has virtually no chance to encounter more than one of the B particles. 
Most of them miss altogether but when an A and B do collide the impact parameter and dihedral angle are randomly distributed because of our inability to specify them in the initial conditions. So you have to average over the random initial conditions, which in particular impose cylindrical symmetry around the relative velocity vector, i.e., a uniform distribution of the azimuthal angle. That had long been done in both classical trajectory calculations and quantum scattering theory for comparison with experimental angular distributions. Eventually, I realized that there was a way to undo that cylindrical average, a perfectly simple way. The textbooks assume the usual situation, namely that you specify the initial relative velocity and observe the final relative velocity vector. So you can’t possibly observe anything except the polar angle between those two vectors. You can’t know anything about the azimuthal or dihedral angle. What you need to do is simply observe another final state vector simultaneously with the final relative velocity vector. Then you could determine the dihedral angle between two planes, one defined by the initial and final velocity vectors, the second plane defined by the other final state vector and the initial relative velocity vector. The azimuthal angles around each of the final vectors would be cylindrically symmetric because of the random initial conditions, but in general the dihedral angle would not be uniform if there is any dynamical correlation between the two final vectors. A chemical analogy may make this clearer. The helium atom has, in the ground state, two s electrons. If you look at one of them and ignore the other, you’ll see on the average it’s spherically symmetric and vice versa. 
But if you observe the two electrons at once, or measure a property that depends on the position of both electrons at once, you’ll find they’re highly correlated because of electron repulsion; they tend to dance around on the opposite sides of the helium nucleus. In the same way, if you have an initial and final relative velocity vector and then you measure simultaneously another one, like the vector for rotation of a product, now you have three vectors and you have three polar angles and a dihedral angle. Even if each of the two final state vectors by itself has cylindrical symmetry around the relative velocity, in general you should expect a correlation. We did an experiment demonstrating that and worked up theory to show what you can learn from it. I had a delightful time talking about this at a Faraday Society meeting in 1988. I’m very fond of Faraday Society meetings; I’ve taken part in eight or so of them and somehow always had especially interesting material to present. This one happened to be held on the centennial of the publication of the first Sherlock Holmes story. So I wrote the paper as a Sherlock Holmes problem. It was based on trajectory calculations done by Seong Kim, then a grad student from Korea (and now a distinguished professor at the Univ. of Seoul). The results appeared very confusing, indeed contradictory, when you looked at various properties separately, as people were accustomed to do: properties such as the product angular distribution and the orientation of the rotational angular momentum vector. But if you looked at the dihedral angle it became perfectly clear what was going on. But everyone who had been doing trajectory calculations had been throwing away the dihedral angle information for 25 years by then, because they didn’t realize that it was something observable, both experimentally and in the calculations. Another wrinkle that emerged had to do with a misunderstanding about quantized vectors. 
We all learned in quantum mechanics that you can only measure the projection on some axis and its magnitude. You can’t also know its azimuthal orientation. But it turns out, again, the forbidden fruit is really accessible after all. If you have some distribution, even of quantum mechanical vectors such as rotational angular momentum vectors, you can arrange to make measurements with different axes of quantization, say by changing the polarization of your laser. Then if you think you have a steady state distribution of the vectors, you can still get the azimuthal information. Quantum mechanics doesn’t prevent it. You can get the information just as if the vectors were classical; you just have to combine results of measurements using several different axes of quantization. So that’s another favorite little theme about forbidden fruit and the Garden of Eden.
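[Editor’s note: the dihedral-angle construction described above can be made concrete with a small numerical sketch. This is an illustration only; the function name and example vectors are hypothetical, not from the interview. Given the initial relative velocity k, the final relative velocity k′, and a second final-state vector j, the dihedral angle is the angle between the (k, k′) plane and the (k, j) plane.]

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def dihedral_angle(k, kp, j):
    """Angle (degrees) between the plane spanned by (k, kp) and the
    plane spanned by (k, j); k is the initial relative velocity."""
    n1 = cross(k, kp)  # normal to the (k, k') plane
    n2 = cross(k, j)   # normal to the (k, j) plane
    c = dot(n1, n2) / (norm(n1) * norm(n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Example: j rotated 90 degrees about k relative to k'
print(dihedral_angle((0, 0, 1), (1, 0, 1), (0, 1, 1)))  # 90.0
```

[A uniform distribution of this angle over many collisions would indicate no dynamical correlation between the two final-state vectors; any structure in its distribution is the “forbidden fruit” discussed above.]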
It sounds like you’re violating quantum mechanics.
Yes, at first glance, but you’re not, of course. I would never do such a thing even if I knew how! Here’s a sports analogy. In baseball, a major league pitcher is supposed to be able to throw a ball in the lower inside corner of the zone or the high outside corner at will. (That can’t be entirely true because sometimes batters hit home runs, but at any rate, the pitcher is supposed to be able to do that.) That would be like our being able to control the impact parameter when particles A and B collide here or there, instead of having to accept a cylindrical average over azimuthal angles. We can’t do that, but we can in effect do what the umpire does — we can call where the pitch went by using the other final state vector to find out. So in essence we can undo the azimuthal averaging by being an umpire instead of a pitcher.
You can after the fact say where it was.
Yes, although the analogy may be a bit misleading, because the other state vector is observed simultaneously, so perhaps I should say the umpire’s call is instantaneous rather than after the fact.
Now, do you do that routinely? Did you do that routinely at that point?
We did it enough to demonstrate how it works, both theoretically and experimentally, yes. I preached about it a lot, but most of what we did with it experimentally was in the alkali age using just electric fields. But now it’s being used more and more because often people use two or three lasers per experiment. It’s not quite routine yet but ought to become so.
All right, one other nice thing you did was find a way to essentially control the rotational orientation of molecules. You called it putting molecules into pendular states. You did this, as I understand it, by getting these molecules in the lowest rotational state, essentially. Talk a bit about that.
Actually, it has to do with hybridizing rotational states by an external field, thereby suppressing tumbling and making the molecules oscillate about the field direction like tiny pendulums. Back in the 1960s, Dick Bernstein, who was then at Wisconsin, and Phil Brooks at Rice (one of my former students at Berkeley) independently undertook to do experiments with oriented molecules. They both made the wise choice to use symmetric top molecules. Symmetric top molecules have a threefold or higher axis of symmetry which guarantees that the moments of inertia about the two axes perpendicular to the symmetry axis are equal. As a consequence, in most of its rotational states the molecule precesses instead of tumbling. In contrast, a diatomic molecule or a linear molecule or an asymmetric top tumbles. This has a huge effect when you interact a molecule with an electric field. The symmetric top has its dipole moment along the symmetry axis, and because it precesses, that dipole moment will have a constant projection on the field direction. That produces what we call a first-order Stark effect. But if a molecule is tumbling, the interaction with the field averages out in first order because there’s no fixed projection, so you get a second-order Stark effect; much, much weaker. It was thought that the only way you could do collision experiments with oriented molecules was to use a symmetric top and send it into a focusing field. Each precessing molecule is oriented with respect to its local field. So by rotating the field gradually you could get a beam where most of the symmetric tops were oriented the same way, more or less. As the molecules are precessing, their symmetry axes aren’t perfectly oriented in any given rotational state; the spread is determined really by the Uncertainty Principle. Both Bernstein and Brooks did some nice work with oriented symmetric tops. 
But up until the early 1990s, it was thought not to be feasible to orient molecules that were not symmetric tops. Many review articles contained statements saying that. I suspect it goes back to a classic 1955 book by [Charles] Townes and [Arthur] Schawlow; in the chapter on the Stark effect, they emphasize that the directional average of the second-order Stark effect vanishes. At ordinary temperatures, diatomic molecules typically rotate with peripheral velocities comparable to rifle bullets. Even for molecules with unusually large dipole moments, the field required to impede such tumbling is very large, so large that you’d surely have electric breakdown producing lightning bolts within your apparatus. But suppose you slow down the tumbling. Producing a beam by a strong supersonic expansion can easily cool down the rotational energy to the equivalent of one degree Kelvin. Then it becomes feasible to use a strong field (but not so strong as to make lightning) to arrest the rotation, and thereby convert pin-wheeling into pendular oscillations about the external field direction. In the quantum mechanical description, as mentioned already, the pendular states are coherent superpositions of the field-free rotational states. Thus it’s like hybridization of electron orbitals in atoms, very familiar to chemists. The electric fields of interacting atoms induce hybridization of s, p, d, f orbitals, thereby sharpening the directional geometry of chemical bonds. In the same way, the interaction of the molecular dipole with the external field hybridizes the rotational states, producing spatial orientation of the molecular dipole moment. Recognizing this opportunity led to an extensive series of studies, again experimental and theoretical, carried on at Harvard in collaboration with Bretislav Friedrich, a Research Fellow. Independently, Hansjurgen Loesch at Freiburg also did lovely work exploiting this technique. Loesch called it “brute-force” orientation. 
We preferred to speak of pendular states. In practice, in favorable cases we could hybridize 20 rotational states, hence getting a quite narrow angular range of pendular oscillations. For a diatomic molecule, the interaction between its dipole moment and the electric field goes like the cosine of the angle between the dipole and field. Thus, the correspondence with the familiar clock pendulum is precise. Of course, to make use of pendular orientation you have to do experiments in the presence of the field to maintain the hybridization; if the field is turned off the molecules start tumbling again and the orientation vanishes. Another couple of pendular orientation episodes should be mentioned. More forbidden fruit that Bretislav and I enjoyed. How about a molecule that doesn’t have a dipole moment? An induced dipole moment can be created by a strong field acting on the polarizability of the molecule. However, the magnitudes of polarizability and feasible field strengths are such that with a static field you can only induce a very weak dipole that way, too weak to do much good. But in the induced dipole case the interaction is proportional to the polarizability times the square of the field strength. Since it’s squared it won’t matter if the field is oscillating in direction; you’ll still have the force in the same direction. That suggests using an off-resonance laser field. It turned out that acting on the polarizability anisotropy can indeed induce a dipole moment big enough to really get good pendular alignment. Okay, in the induced dipole case, we speak of alignment rather than orientation. That’s because for such molecules, e.g. chlorine, the two ends are the same, so we can’t point one end north, say, without the other pointing south. Back in my grad student days working with Mathieu functions, I learned how to solve the cosine squared case, which is what you get into for an induced dipole moment, instead of the cosine theta that occurs for a permanent dipole. 
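[Editor’s note: the hybridization of rotational states described above can be illustrated with a toy calculation. This sketch is not from the interview; it uses a planar rotor rather than a real three-dimensional linear molecule, and the parameter values are made up. The Hamiltonian H = m² − ω cos θ (in units of the rotational constant B, with ω = μE/B) is diagonalized in the free-rotor basis, where cos θ couples states with Δm = ±1.]

```python
import numpy as np

def pendular_levels(omega, mmax=25):
    """Eigenvalues of H = m^2 - omega*cos(theta) for a planar rotor
    (energies in units of the rotational constant B; omega = mu*E/B).
    In the basis exp(i*m*theta), <m|cos(theta)|m'> = 1/2 for |m - m'| = 1."""
    m = np.arange(-mmax, mmax + 1).astype(float)
    H = np.diag(m**2)
    H += np.diag(-0.5 * omega * np.ones(2 * mmax), 1)
    H += np.diag(-0.5 * omega * np.ones(2 * mmax), -1)
    return np.linalg.eigvalsh(H)

free = pendular_levels(0.0)    # free rotor: 0, 1, 1, 4, 4, ...
hyb = pendular_levels(50.0)    # strong field: librating pendular states
print(free[:3], hyb[:3])       # the field pulls the lowest levels far below zero
```

[In the strong-field limit the low-lying levels approach those of a harmonic librator oscillating about the field direction, which is the pendular regime described in the interview.]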
The cosine squared theta problem can be solved exactly and appears in some acoustics phenomena, which is where I came across it. So Bretislav and I wrote a paper for Phys Rev Letters in 1995 pointing out how to do pendular alignment via the induced dipole. Since then lots of people have made use of it because lasers are so ubiquitous now. Here’s my favorite use of the induced dipole. The cosine squared is a symmetric double well potential, so you have a minimum for both ends of the molecule. The height of the potential barrier for flipping the ends depends on the strength of the laser, the polarizability of the molecule, and its moment of inertia. For a strong enough laser, the lowest energy levels in the double well potential will almost coincide in energy, just slightly split by tunneling between the minima. Those levels have opposite parity, so if the molecule is polar and you add a static electric field, even a very weak one, those almost degenerate levels will split very strongly in first order. So you can induce a first-order Stark effect by having both laser and static fields acting simultaneously on the molecule. You can make it act as if it were a symmetric top molecule even if it’s really quite asymmetric. That’s delightful. Knowing or estimating the polarizability, dipole moment and moment of inertia, you can choose fields that turn on the first-order Stark effect. So you can pluck a molecule right out of a crowd of others very specifically. [Tape cut for phone] At the start of the saga of orienting molecules, it looked as if you could only do a symmetric top, and now we’ve reached a point where we can make almost any molecule act like a symmetric top. Andy Warhol supposedly said everybody’s entitled to 15 minutes of fame. Almost any molecule is entitled to act like a symmetric top, but only for about 15 nanoseconds. That’s because you need to use a pulsed laser to get enough intensity. Of course, future lasers might allow more time. 
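[Editor’s note: the combined-fields trick just described can also be sketched numerically. Again this is my own planar-rotor simplification with made-up parameter values, not the actual calculation: a strong cos²θ well alone leaves the lowest doublet nearly degenerate, split only by tunneling, while adding a weak cos θ term splits it strongly, mimicking a first-order Stark effect.]

```python
import numpy as np

def combined_levels(omega1, omega2, mmax=30):
    """Planar-rotor sketch: H = m^2 - omega1*cos(theta) - omega2*cos(theta)^2.
    cos^2(theta) = (1 + cos(2*theta))/2 couples m with m +/- 2 (elements 1/4)
    and shifts the diagonal by 1/2; cos(theta) couples m with m +/- 1."""
    m = np.arange(-mmax, mmax + 1).astype(float)
    H = np.diag(m**2 - 0.5 * omega2)
    H += np.diag(-0.5 * omega1 * np.ones(2 * mmax), 1)
    H += np.diag(-0.5 * omega1 * np.ones(2 * mmax), -1)
    H += np.diag(-0.25 * omega2 * np.ones(2 * mmax - 1), 2)
    H += np.diag(-0.25 * omega2 * np.ones(2 * mmax - 1), -2)
    return np.linalg.eigvalsh(H)

e0 = combined_levels(0.0, 200.0)   # strong "laser" field only
e1 = combined_levels(1.0, 200.0)   # add a weak "static" field
tunneling_split = e0[1] - e0[0]    # tiny: opposite-parity tunneling doublet
stark_split = e1[1] - e1[0]        # large: pseudo-first-order Stark splitting
print(tunneling_split, stark_split)
```

[The weak field splits the near-degenerate doublet by roughly twice its strength, far exceeding the tunneling splitting, which is the pseudo-first-order Stark effect described above.]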
Anyhow, 15 nanoseconds is considered a long time now. Nowadays, I’ve been told it’s embarrassing to do an experiment without using a femtosecond laser. Incidentally, we tried to get a modest grant from NSF to do some applications demonstrating the efficacy of our laser + static field technique. Of six reviewers, five of them said “excellent,” one of them said “good.” The reviewer who said “good” commented “I see no reason to do this experiment because they’ve done the calculations. Surely it must work.” Then he added that the experiment we proposed was also too hard to do! On the one hand it’s too easy, and on the other it’s too hard. NSF didn’t give us the grant.
That’s awful. I’ll come back to that.
That’s one of the disappointments about getting a Nobel Prize. Every grant I applied for from any agency before, I had 100% success, exactly what I asked for; never less. Emulating Bright Wilson, I never asked for more than I thought we needed. Since receiving the Nobel Prize, about a third of the proposals I’ve submitted have been turned down and those that were funded have always been cut.
How do you explain that?
I suppose they think, “Since this guy has a Nobel Prize, surely people will give him money, so we don’t have to give him money.” I don’t think my standards have changed. If you have this awesome experience of having your name on a list with people like Einstein, Pauling, and all the rest, you feel you have to try to live up to it. That’s another story. To conclude the induced dipole episode: Bretislav went to Gottingen to take part in an experiment that offered an opportunity to apply the laser + static field technique. In the Max Planck Institute there, lovely work was being done with big clusters of xenon formed in strong supersonic expansions. They worked with clusters of a thousand xenon atoms. The cooling that occurs in supersonic expansions makes that possible. At ordinary temperatures such clusters wouldn’t form because they’re prohibited by the loss of entropy. But since in the free energy the entropy change is multiplied by temperature, the severe cooling in a supersonic expansion quenches the entropy term. The experiment I want to mention is quite remarkable. Just one or two hydrogen iodide molecules were seeded into each big xenon cluster. Photochemical excitation induces a reaction between HI and Xe that produces HXeI molecules. These are exotic species. Electronic structure calculations indicate the bonding is largely ionic, roughly (HXe)+I-. So they are pseudo-alkali halides, isoelectronic with Cs+I-. Accordingly, HXeI has a big dipole moment and also a big polarizability anisotropy. That made it possible to attain very pronounced orientation of the HXeI molecules, by using the combined fields to turn on the first-order Stark effect. To be able to create and orient HXeI within a cluster of a thousand Xe atoms was dramatic.
It’s such a nice story.
Let’s move on because you have one more area of your research. You’ve got so many other interests, Dudley, that I really want to spend some time on those too.
This is your latest thing, which you’ve also had trouble getting money for, which is using space —
It’s too crazy. Actually it’s not quite the latest. This biochemistry stuff was very interesting; perhaps we’ll talk about it later.
All right. Tell me a little bit about this. You take atomic and molecular properties and you go to smaller dimensions or larger dimensions, which are easier than three dimensions, and you learn about things. Describe this.
Yes, I’m fond of this Dimensional Scaling escapade too, and it’s another one of these lunatic fringe things. I’ve always had the attitude that science is wonderful. I just love it as a fan as well as a scientist. But I feel that I should, because it’s part of my temperament, work on things that other people aren’t doing. Schwinger and other very competent people are working on the hard problems, so I should move to lunatic fringe areas because they’re under-populated. Here’s how it came about. I was reading Physics Today, which is my favorite journal. I love it because it has so many wonderful tutorial articles and wide scope. It was actually a 1980 issue, about a year old by the time I saw it because, as you’ve seen my office, it’s cluttered up. There was an article by Ed Witten, who was a Junior Fellow when he wrote it, but he was no longer at Harvard. It was on quarks and gluons and quantum chromodynamics. Ordinarily I probably wouldn’t have looked too closely at such an article, feeling it was beyond me. But it had a subheading which would have attracted your attention. It said, “The Hydrogen Atom.”
Of course, as a chemical physicist I loved the hydrogen atom, so I read Witten’s article. What he was saying was this problem of quantum chromodynamics is tough because you can show that, just as in the hydrogen atom, all the physical variables scale out, so you have a pure mathematics problem to solve. They don’t know what the equations are in the case of the quark business, but they can show the physical parameters all scale out. He said in that case what you do is take something that is ordinarily thought to be a fixed parameter, like the dimension of space, or the number of colors of the quarks in his case, and you vary it. You go to the infinite limit, in our case dimensional, or infinite number of colors in his, and then you can develop a perturbation expansion around that. As an example, he showed how it would work if you did the helium atom that way. He got a crummy result; it was off by 40% for the ground state energy of helium. That’s ridiculous. As you know, the first order of perturbation theory taking the electron repulsion as the whole perturbation gives you 5%, and Hartree-Fock gives you 1.5%. So, 40% is pitiful. He said, “Of course this kind of thing is of no pragmatic use except qualitatively.” But I was teaching a quantum mechanics course then. I’m always on the lookout for something particularly interesting for students, so I thought I’d work up a homework problem for them. When I sat down to do it I decided to scale out the hydrogenic part of the helium problem first and then do the infinite-D limit. And then I remembered helium could be solved in the one-dimensional limit too. So what happens if I interpolate between the two limits? Well, I didn’t know, of course, how you should interpolate, so I just tried a geometric series. That needs only two parameters and I had two, the D = 1 and D goes to infinity limits. 
When I set up the series and put in D = 3, the result for the He ground state was good to something like one part in ten to the five or ten to the six. Either that was just a quirk or a fluke, or it was meaningful. Just then appears an innocent young graduate student, John Loeser. I’ve always been blessed with students, good ones appearing at just the right time. So I tell John about the He result. He undertook to do highly accurate calculations on helium as a function of dimension by generalizing the Pekeris treatment, one of the very best for D = 3. On a visit to the Weizmann Institute, I got to meet Pekeris. He was a geophysicist, but in the 1950s he was given the responsibility of overseeing a computer built at the Weizmann. He wanted to show how good the computer was by doing a calculation recognized as difficult. He picked helium to see if he could do better than the epic, well-known calculation done by Hylleraas. The program Pekeris devised used a simple form of variation function but needed a great deal of storage capacity, which he welcomed because the Weizmann computer was designed to have exceptionally large storage. He got excellent results, much better than anyone else had at the time. I remember hearing Bright Wilson praise the strategy Pekeris had devised.
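[Editor’s note: one plausible reading of the two-parameter geometric-series interpolation described above can be sketched as follows. This is a reconstruction under stated assumptions, not necessarily Herschbach’s exact prescription, and the limit values shown are illustrative placeholders, not the actual helium energies. Writing δ = 1/D and E(δ) = E∞(1 + cδ + c²δ² + …) = E∞/(1 − cδ), the two known limits fix the two parameters.]

```python
def geometric_interp(E_D1, E_Dinf, D=3):
    """Geometric-series interpolation in delta = 1/D between the
    D -> infinity limit (delta = 0) and the D = 1 limit (delta = 1):
    E(delta) = E_Dinf / (1 - c*delta), with c fixed by E(1) = E_D1."""
    c = 1.0 - E_Dinf / E_D1      # from E_Dinf / (1 - c) = E_D1
    return E_Dinf / (1.0 - c / D)

# Illustrative placeholder limits (NOT the real helium values):
print(geometric_interp(E_D1=-2.0, E_Dinf=-3.0))  # estimate at D = 3
```

[By construction the formula reproduces both limits exactly; the D = 3 value is then read off with no further input.]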
How do you spell Peckerus?
So we generalized the Pekeris method and studied what happened as a function of dimension. We were especially interested in the correlation energy, which is the error in the Hartree-Fock approximation. As you know, the Hartree-Fock is what’s called a mean field approximation; it replaces interactions between electrons by a self-consistent average. Although the HF approximation had long been a mainstay of electronic structure calculations, its error — the correlation energy — often makes a major contribution to differences in chemical bond strengths and activation energy barriers. In conventional D = 3 calculations, it’s difficult to get accurate results for the correlation energy. We found that for two-electron atoms the correlation energy was almost inversely proportional to the dimension. It was easy to evaluate the infinite dimensional limit and one-dimensional limit, then just draw a straight line between them and read off the interpolated result for D = 3; the resulting correlation energy was accurate to about a part in a million. That remarkable result spurred us on. Over the next few years, other students joined in and we examined a lot of other aspects of what is now termed D-scaling. The whole story up to 1991 is presented in a book of about 500 pages, Dimensional Scaling in Chemical Physics, which grew out of a workshop held in Copenhagen. About 20 “D-scalers” took part; it was a happy gathering, close by the statue of Niels Bohr. As D-scaling is indeed a multidimensional topic, probably I should not go too far into it but just refer you to the book. It really offers a new perspective. Some very interesting things have emerged that convince me much more awaits. For instance, I suspect there likely is some connection with the renormalization group method developed by Kenneth Wilson, the eldest of Bright’s four sons. Kenneth himself tried to find a way to apply his renormalization group to electronic structure. 
He gave a charming talk with Bright present, saying how a son naturally likes to solve a difficult problem his father talked about so much. Kenneth hasn’t so far found a way to do it with his renormalization method. We probably won’t be able to either, but I think it’s worth pursuing further. I can’t resist mentioning another example that suggests something of the promise of D-scaling. This is a classic problem involving strongly interacting particles, the virial expansion for the equation of state for a fluid of hard spheres. The first term is just PV/nRT = 1, the ideal gas approximation. The next term, linear in density, has a coefficient called the second virial coefficient. That depends just on the interaction of pairs of particles. Further terms successively involve interactions among 3, 4, 5, … particles; the nth term is proportional to the (n - 1)th power of the density and its coefficient depends on interactions among n particles. For hard spheres, the interaction potential is rather simple, but the problem of calculating the virial coefficient becomes very difficult as n increases. Even big computers have never gotten beyond the seventh virial coefficient. The calculation involves quantities called Mayer cluster diagrams, which involve multiple integrals. By n = 7 there are several hundred diagrams to calculate, each containing 15-fold integrals. By n = 10, there are over a million diagrams, each comprised of 24-fold integrals. Moreover, increasing accuracy is required as n increases because extensive cancellation occurs between positive and negative integrals. Results of the D-scaling approach were uncanny. The cluster diagrams for virial coefficients of any n could be calculated exactly in the infinite dimensional limit and in the D = 1 limit. Also the third virial coefficient can be calculated analytically for any D. To interpolate we drew straight lines between the two limits using nonlinear scaling defined by the D-dependence for n = 3. 
We found that all the points that had been determined by computers up to n = 7 for hard disks, D = 2, and hard spheres, D = 3, actually fell on the interpolated lines, within the accuracy of the computations. As I like to say, this shows that for hard disks and spheres three might not be a crowd but it’s enough to give you the dimension dependence needed to enable accurate interpolation between the limits for infinite hyperspheres and hard points, D = 1. With some further maneuvers, we were also able to obtain estimates of the virial coefficients for any D up to n = 10. Despite the continued increase in computing power, it will surely be many years before direct computations will be able to test our D-scaling results up as high as n = 10, because the number and complexity of diagrams and the accuracy required increase so rapidly as n increases. For this notoriously formidable problem, D-scaling provides a tremendous short-cut, made possible by an unexpectedly generic D-dependence. The calculations actually become quite simple and are largely analytical.
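[Editor’s note: the straight-line interpolation in 1/D mentioned above, used for the two-electron correlation energy and, with a further nonlinear scaling, for the virial coefficients, reduces in its simplest form to the following minimal sketch. The values are hypothetical and the nonlinear-scaling step is omitted.]

```python
def linear_in_inverse_D(value_D1, value_Dinf, D=3):
    """Linear interpolation in delta = 1/D between the D -> infinity
    limit (delta = 0) and the D = 1 limit (delta = 1)."""
    delta = 1.0 / D
    return value_Dinf + (value_D1 - value_Dinf) * delta

# At D = 3 the estimate is 2/3 of the infinite-D value plus 1/3 of the D = 1 value:
print(linear_in_inverse_D(value_D1=0.9, value_Dinf=0.3))
```

[Because the real world sits at 1/D = 1/3, the interpolated value leans twice as heavily on the easily computed infinite-dimensional limit as on the D = 1 limit.]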
So do you see it as a calculational device, or do you see physical insights coming out of it?
Some of both. For example, here’s a quite general heuristic insight. All dynamical problems we have considered are much easier to solve in the infinite dimensional limit than the one-dimensional limit. Moreover, as energies generally go as 1/D, it is natural to take the infinite dimension limit as the origin (1/D → 0). Then our real world is at 1/D = 1/3, which is closer to the origin than the one-dimension limit at 1/D = 1.
So infinite is easier?
Yes, as well as closer to our “real world” at D = 3. Remarkably, results at the infinite dimensional limit, which are often analytical and rather simple, usually exhibit features familiar at D = 3, whereas at D = 1 there is often a qualitative difference from the D = 3 case. So it really should be common practice to start at the infinite dimension limit. I was going to say something about a favorite molecule of yours, diatomic hydrogen. It’s discussed in the Pauling 90th birthday article to show that his resonance phenomenon was more rigorous than he could have known. When you work out the potential surfaces for H2 in the infinite dimensional limit, you’ll find that as you change the inter-nuclear distance the surfaces develop multiple minima. Tunneling between the minima corresponds to Pauling’s resonance. There’s a nice figure in the article that shows how the geometry of H2 changes with inter-nuclear distance in the infinite dimension limit. It exhibits the change in the dihedral angle between the planes containing the molecular axis and the two electrons. In the infinite dimensional limit, the electrons take up fixed positions in a D-scaled space. I’d better explain that later, but for now all you need to know is that in the D-scaled space the infinite-D limit is classical: in effect quantum mechanics becomes classical there. So in the figure you see how the classical H2 structure changes with the distance between the two protons. When you go to zero inter-nuclear distance — the united atom limit, where H2 becomes the helium atom — you’ll notice the dihedral angle between the two electrons is 95.3º. You say, “Wait a minute, what’s that mean?” You might have expected it would be 180º due to the electron repulsion. We’ll have to come back to that; it’s another heuristic insight that emerges in the infinite-D limit. For now, what I’d like you to see is just that as you separate the protons, a kind of phase transition occurs in the large-D limit. 
As long as the protons are less than roughly one Bohr radius apart, the two electrons reside in a plane that bisects the inter-nuclear axis. When the protons get farther apart than that critical distance, the electrons shift to form what we might call different electronic isomers. The lower energy isomer has an electron on each proton; the higher energy isomer has both electrons on one proton (or on the other proton). These classical structures in the infinite-D limit correspond to Pauling’s valence bond structures at D = 3, and tunneling between them corresponds to his resonance. You might ask how we can speak of tunneling in the infinite-D limit, when the electrons are in fixed positions there. Well, they are only fixed in the D-scaled space, so they happily tunnel in actual space. This H2 example shows nicely how the classical electronic structures found in the infinite-D limit actually do resemble what exists at D = 3. It also shows that resonance is really an intrinsic, generic phenomenon. When you take good wave functions for D = 3, and look at them from the infinite-D perspective, you will actually see evidence of the classical electronic isomers. Without that perspective, they may not be noticed; so this illustrates a heuristic benefit of D-scaling. Let’s go back to the helium case to explain the basics of D-scaling and show how easy it is to carry out. First, I have to note that “dimension” is an overworked word; it’s used for a variety of different things. What we mean by dimension D is the number of Cartesian components in a vector. So we set up the Hamiltonian exactly the usual way, keeping the form of the potential energy the same as in D = 3 — there are good reasons for doing that — but we have to change the Laplacian because a bunch of derivatives go into it.
The correct dimensionality.
Yes, the dimensionality affects the number and form of the terms in the Laplacian. The Jacobian volume element also depends on dimension. The role of the Jacobian is very important, and deserves more attention than it usually gets. Often people say, “The probability density is the square of the wave function,” but fail to mention that to calculate any observable quantity, that density must be multiplied by the Jacobian and the product integrated over the relevant range. If you are dealing with Cartesian coordinates, it doesn’t matter, because then the Jacobian is unity. But in most problems, the potential is more naturally expressed in some sort of curvilinear coordinates. Then the Jacobian factor matters a lot; sometimes it’s more important than the wave function itself. Since the Laplacian, the Jacobian, and the wave function all depend on D, when you want to vary D you are likely to find things get unwieldy. A nice simplifying procedure is to define a new probability amplitude as the product of the square root of the Jacobian times the wave function. Then, by construction, the new probability density will have a Jacobian just equal to unity, so it’s independent of dimension. The corresponding Laplacian takes a much simpler form. The terms that contain derivatives will look pseudo-Cartesian. In addition, the Laplacian has scalar terms that contain all the D-dependence. Those represent a centrifugal potential; its D-dependence is always simply quadratic. Thus, this transformation shows that dimension is intimately related to angular momentum. For the hydrogen atom, or any central force problem, in this formulation the Schroedinger equation for general D has just the same form familiar for D = 3, except that the orbital angular momentum is augmented by (D – 3)/2. For the helium atom or any other problem involving noncentral forces, the general form is the same, although the relation of angular momentum to D is less simple.
D-scaling is quite easy to implement for electronic structure. The Coulombic interactions all go like one over distance, and the kinetic energy, both the derivative terms and the centrifugal potential, always goes like one over distance squared. So if you introduce a distance scale that depends quadratically on D, and then go to the large-D limit, you’ll find that the derivative terms are quenched, whereas the D-dependence of the centrifugal potential cancels out. So you’re left with just an effective potential that consists of the scaled centrifugal potential plus the Coulombic terms. All you have to do is find the minimum of the effective potential to get the energy in the large-D limit. At that minimum, as mentioned already, the electrons are sitting still—in the D-scaled space. In the large-D limit, the Schroedinger differential equation has morphed into a Newtonian algebraic equation. Electrons sitting still at fixed locations might seem to violate the uncertainty principle. However, using a distance scale that expands as D increases implies the conjugate momenta shrink, so the scaling preserves the uncertainty principle. It just shows that in the D-scaled space, quantum fluctuations diminish with increasing D and vanish in the limit, giving way to classical mechanics. The distance scale is also proportional to Planck’s constant squared over the electronic mass. So going to the large-D limit corresponds to letting Planck’s constant go to zero or the electronic mass go to infinity. That is the usual way to get to the classical limit of quantum mechanics. However, the D-scaling way is distinctive, particularly because its effective potential includes the centrifugal contribution. That’s a major reason why the large-D limit resembles so much the D = 3 world.
A favorite heuristic aspect that startles both physicists and chemists: as mentioned earlier, in the large-D limit the ground state of helium is “bent.” The interelectron angle is 95.3 degrees, as a result of competition between the centrifugal interaction, which favors 90 degrees, and electron repulsion, which favors 180 degrees. Moreover, high quality wave functions show that at D = 3 helium really is bent, with an interelectron angle of about 100 degrees, only a few percent larger than the large-D limit. The general presumption that it would be 180 degrees comes from a static view of the atom.
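The minimization just described is simple enough to carry out on a laptop. Here is an illustrative Python sketch; the specific form of the symmetric two-electron effective potential and its unit convention below are taken as an assumption from the D-scaling literature, not from the interview. It recovers the 95.3-degree interelectron angle:

```python
import math

# Large-D helium: symmetric configuration r1 = r2 = r, interelectron angle theta.
# Assumed effective potential (one common D-scaled unit convention):
#   V(r, theta) = (1/r^2)/sin^2(theta) - 2*Z/r + 1/(r*sqrt(2 - 2*cos(theta)))
# With u = 1/r this is A*u^2 - B*u, so the minimum over r is V* = -B^2/(4A).

Z = 2.0  # nuclear charge of helium

def min_energy_at_angle(theta):
    """Energy after minimizing analytically over the radius, at fixed angle."""
    A = 1.0 / math.sin(theta) ** 2
    B = 2.0 * Z - 1.0 / math.sqrt(2.0 - 2.0 * math.cos(theta))
    return -B * B / (4.0 * A)

# Scan angles from 80 to 120 degrees in 0.01-degree steps; keep the minimum.
best = min((math.radians(d / 100.0) for d in range(8000, 12001)),
           key=min_energy_at_angle)
print(f"interelectron angle = {math.degrees(best):.1f} degrees")  # 95.3
```

Note how the competition plays out exactly as described: the centrifugal factor 1/sin²θ alone would put the minimum at 90 degrees, and the repulsion term pushes it out a little past that.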
Is this work going to have legs?
I don’t know. The electronic structure professionals have so far totally ignored D-scaling. In this regard, there may be an analogy between quantum mechanics and automotive mechanics. It’s said that if things started right now, nobody would use the internal combustion engine in automobiles. But with about a century invested in developing engineering and infrastructure, internal combustion has a huge edge over upstart competitors. I hope that D-scaling might produce something good enough to enhance the conventional methods. At least it offers a fresh perspective in teaching quantum mechanics.
Doesn’t this work strike you as really important?
Well, I hope so, but then I’m captivated by it. Whether it has legs remains to be seen. So far we’ve just done rather elementary things with D-scaling. These encourage optimism about the possibility of finding a way to significantly improve electronic structure calculations. But like other things I’ve done, I just fell in love with it, so I had to pursue it. That’s already been rewarding aesthetically.
Yes, it is. It’s very interesting and it’s great. Let me ask you a question of a different sort. You take the hydrogen molecule; you’ve worked with that. What you’re doing now is at a high level of abstraction. It’s amazing that you’re able to get from it information that hangs together the way it does. Do you think that the hydrogen molecule has two electrons whose behavior is independent of your perception of it?
Now we get into serious philosophy!
Well, are you a realist?
I’ve been accused of being a realist by a postmodern guy, and I guess he would know better than I would, but that’s another story. My naïve view is that the hydrogen molecule really does exist and existed way before I came on the scene and will way after I depart. If that’s the definition of a realist, yes.
Doesn’t it strike you as certainly interesting, and maybe even as a bit odd, that we can put together these extremely interesting schemes, mathematical formalism and so forth?
Oh, I couldn’t agree more.
…and come out with —
Wigner talked about the unreasonable effectiveness of mathematics. I agree it’s one of the great wonders. Yes, I totally agree.
I guess the question was have you ever wondered whether we’re all deceiving ourselves in some way, somehow.
I have. There are moments when you wonder about all these marvels we witness and whether this is all some kind of dream. Actually, somebody just sent me a curious request: to give him the first word that came to my mind. As a result, I wound up writing a little poem last night about the word I gave him, which was “Wonder.” The first stanza — I can’t recite it right now; I can show you if you want, I have it on my computer — expressed joy and exhilaration in discovering or experiencing something wonderful in science or in art or spiritually. But the other stanza was about wondering all too often about witnessing lies and crimes and misery. In an awful way that other side of the coin makes me think we aren’t living a dream. But I really do think there’s a universe out there and we are able to learn more and more about it—thereby appreciating how much more there is that we don’t understand. Perhaps I haven’t convinced you yet of why it should be standard practice to inspect the D-dependence, especially the infinite dimensional limit. Let me just describe one more example because it’s relevant to something else I want to mention. Consider the classic random walk problem. It’s often discussed for D = 1, 2, or 3. In the 1920s, Polya discovered an important theorem. For D = 2 or lower, you will always return with 100% probability to every point in the random walk if you walk long enough. For D = 3 you only have about a one-third probability of returning. For D = 4, it’s a little less than one-fifth, and it keeps shrinking as D grows. So you can get lost in three dimensions forever, but not in two. That is a major factor in why surfaces are so important in catalysis. In D = 3, even a random walk has a shape. If you want to characterize it by the radii of gyration, which are like moments of inertia, it’s a big calculation for a long walk. That’s because there are so many permutations in the order and numbers of steps taken in the x or y or z directions.
Such calculations have been done, because random walks enter into a lot of phenomena. But consider what happens in infinite dimension. There you can do the whole calculation analytically. As soon as you take one step in some direction, you realize you won’t take another in that direction, because there are infinitely many other directions to step in. So the random walk will consist of one and only one step in each direction. You can label the axes in the order you stepped on them, thus write down the coordinates and calculate everything analytically. At the infinite-D limit the walk is really not random anymore! If you want to get a first-order correction, [tape cut out for phone] just back off a little bit and consider large but finite dimension. Then you realize you can take two steps in each direction, but not more, as the probability for three steps would be comparatively negligible. For two steps you can work out the result analytically too. It turns out it’s equivalent to a harmonic oscillator. You get a one over D term as the first-order correction. That’s very general. The same happens when you consider electronic structure and other problems. You always get a 1/D term as the first correction to the infinite-D limit. For the random walk, with just those two terms the approximation you get for D = 3 agrees within a couple of percent with the results of a huge computer calculation. And you get it all analytically. It’s fair to conclude, “Holy smoke, this is the way we should do it.”
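Polya’s return probabilities are easy to check empirically. The following is a minimal Monte Carlo sketch in Python; the walk length, number of trials, and random seed are arbitrary illustrative choices, not anything from the interview:

```python
import random

def returns_to_origin(dim, n_steps, rng):
    """Simulate a nearest-neighbor random walk on the integer lattice Z^dim.

    Returns True if the walker revisits the origin within n_steps."""
    pos = [0] * dim
    for _ in range(n_steps):
        axis = rng.randrange(dim)          # pick a random coordinate direction
        pos[axis] += rng.choice((-1, 1))   # step +1 or -1 along it
        if not any(pos):                   # back at the origin?
            return True
    return False

rng = random.Random(0)
trials = 2000
frac = sum(returns_to_origin(3, 4000, rng) for _ in range(trials)) / trials
print(f"estimated return probability in D = 3: {frac:.2f}")
# Polya's exact value for D = 3 is about 0.3405; for D = 1 and 2 it is 1.
```

Truncating the walk at a finite length slightly undercounts returns, but in three dimensions almost all returns happen early, so the estimate lands near the exact 0.3405.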
Have people given you that nod and acknowledged that?
No, I’m a bit surprised that D-scaling hasn’t caught on much yet. We’ll keep plugging away anyhow.
Since your honors, although I’m sure you’ve gotten honors associated with teaching, let’s talk about honors next and then I want to get into —
We can dispose of that pretty quickly.
Well you’ve said some things that are interesting. You’ve gotten a lot of honors.
Yes, I certainly have. Goodness.
Is there one that you prize the most?
Well, of course, I have to mention the Nobel Prize, because that’s so sobering. When it came I thought, “Oh, my gosh.” People seem to value it so highly. I’ve always told my students that the best prizes are given by Nature. The prizes really should go to atoms, molecules, and ideas, but we don’t know how to do that, so give them to people instead. As you know, we have the privilege of working on things that we get excited about and the joy of working with other people who share the excitement, our students and colleagues. I can’t think of any prize that can compare with that. I think of Nature as an angel up there. She sees these eager, curious mortals down below fumbling around. Now and then she says, “Well there’s one who seems very earnest and hardworking, so I’ll drop him a plum.” By plum I mean a happy insight. I’ve gotten my share of plums.
You’ve got a lot of plums.
Yes, I’ve got a lot of plums. As an old prune picker, I appreciate that.
How did the Nobel Prize change your life?
Of course, I’ve been asked that same question many times and I say, “Well, nobody wanted to interview me before that.” Many people invite you to give talks, serve on boards or committees, foster programs, endorse petitions or book jackets, write letters of recommendation and such. It adds up to a lot. Much you have to turn down, as either not worthwhile or not feasible, but many opportunities are things you feel glad to be able to do. I especially welcome opportunities to talk about science education to all sorts of audiences and take part in science fairs and other activities with high school and middle school kids. You can’t help but be aware that the Nobel Prize changes the way people act toward you. I’ve never liked that. Beforehand, I’d noticed it and wondered, for instance, “What is it like for Purcell?” Knowing and loving Purcell, I didn’t think he wanted that. Of course, there are times when it’s quite delicious, even absurd. One of the most satisfying things was getting letters and calls from former students and friends who were so happy about it. The press conference and reception in the Chem Dept. were the same way. People were really excited, so much so that I automatically behaved more calmly. One call was especially striking. It was from Gary McClelland. Perhaps you know him. He’d been a graduate student with me and quite a good one. He said when he heard the news he jumped up and down shouting for 45 minutes, until he was hoarse. That’s off-scale, even for Gary, a habitually cheerful fellow. A few days later, I bumped into a good friend, a psychology professor at Harvard. I asked him, “Have you psychologists figured out this phenomenon? Something like this makes people get so unreasonably happy.” He said, “No, we psychologists don’t study happiness, only unhappiness.”
Did you anticipate the prize?
No, I absolutely didn’t. Actually, several people had thought fit, even people I didn’t know, to send me letters years before saying, “I’ve nominated you for the Nobel Prize.” I’d always written back, “I’m grateful, and pleased that you think so highly of my work.” But I really didn’t think that the sort of thing we were doing, as much as I loved it, was likely to bring a Nobel Prize. So I did not expect it at all. When it did happen it was awkward in one way. I felt sorry for a couple of other people who were outstanding in the field and clearly were hoping to get a Nobel Prize. Knowing that, I actually phoned them. One fellow said that his colleagues all seemed to be avoiding him and he thought it was because they’d expected he would have gotten the prize. As you know, in a given year no more than three people can share the prize. The way I learned about it was odd. I was just getting ready for my class and my secretary called, saying “Somebody wants to speak to you about the Nobel Peace Prize.” I thought that was odd. A few times I’d gotten phone calls from reporters after the Nobel Chemistry Prize was announced. If it went to somebody I knew, sometimes the reporter would say they’d suggested getting “a quote from Herschbach.” For example, that happened with Roald Hoffmann.” So, I thought this call was something like that. The Associated Press reporter identified himself and said, “I’d like to get your comment on the Nobel Prize in Chemistry.” I replied, “Sure, who received it this year?” because I didn’t know. So that’s how I learned. The pleasure was all the more intense because it was unexpected and because the prize included Yuan Lee and John Polanyi, two fellows I admired immensely. When we were in Stockholm we had a very pleasant time with our families. I remember talking with John about our regret that George Kistiakowsky, who had died a few years before, didn’t have the pleasure of receiving a Nobel Prize. 
He was a truly great experimentalist, a giant in chemical kinetics and also as a scientific statesman. Kisty certainly deserved a Nobel, and would’ve been delighted with the Stockholm festivities. I felt the same way about Bright Wilson. So you’re very aware how capricious awards like that are. You can’t take them too seriously. But you do feel a special responsibility because you’ve been anointed to represent science.
I have two questions. Talk a little bit more. Do you really think that your Nobel Prize has influenced people’s assessment of your proposals to NSF?
Well, I don’t know how else to understand a case like the one I mentioned, where five reviewers said excellent and one said good but contradicted himself, saying the proposed experiment was both too easy and too hard. When it comes to papers and grant proposals, I’m almost absurdly particular about what I write. I really sweat to try to make it good. When I feel ready to write a paper or proposal I usually think I have understood the problem pretty well. But nearly always I learn so much more from the discipline and effort of trying to describe it well. It’s true that I often don’t put everything I could in a research proposal, partly because I’m always trying to think of the reader and not overburden or oversell. But I try to say enough to make the case. So the big difference in my “before and after” experience with proposals is hard to understand. Maybe I’m kidding myself, but I do think a major cause is the view that, “Well, this guy should be able to get support elsewhere.” As I mentioned earlier, when you get something like a Nobel Prize you feel all the more responsibility to be a good citizen on behalf of science. That takes energy and time. When it then turns out to be harder to get funding to continue research, naturally you feel that’s something of a penalty. I’ve heard from a couple of other U.S. laureates that they also had more trouble getting funding afterward. Yet people imagine it’s a snap. Actually, in Europe and Asia it’s the other way around.
Dudley, the last question that’s sort of related to this. We’re going to talk about science in the more general way. Over my life, the number of prizes has escalated. There are so many prizes.
Yes, that’s very noticeable.
I actually wonder whether science has been helped or hurt by this. There’s an industry of prizes now.
Yes. I’m asked to write many nominations or supporting letters for prizes. Leo Szilard wrote an essay about that, foreseeing a limit at which science shuts down because everybody’s busy writing recommendations. You could generalize this danger to not just prizes but research proposals. It’s really frightening now. In a field like chemistry, you almost have to have a grant for each student, and that requires dealing with a lot of granting agencies and a lot of reviewing. As a good citizen, you have to review many proposals from other labs too. Ed Purcell told me he never applied for a grant in his career, and he didn’t think he could have. You have to know Ed to understand what he meant by that second remark. His work was all supported by what we would call a block grant to the physics department, from the Office of Naval Research. The ONR did not ask for proposals, and specified the purpose of the grant in a single sentence: “For the investigation of the properties of atomic nuclei.” Obviously in that golden era it was appreciated that guys like Purcell were going to do the best science they could. You just give them money and they’ll do something special. The attitude has changed completely now. Rather than supporting people, it’s viewed as buying projects. That has many drawbacks. One of the worst is the impact on young faculty. For them, it’s life or death to get a grant to support students doing research. Young faculty have often shown me very discouraging reviewer reports on their rejected proposals. More often than not, the reviewers are at fault. The reviewing process doesn’t make amends for that. Moreover, the process is very slow; for proposals to NSF that are approved, it now usually takes a year between submission and arrival of funds. If a proposal is rejected, the applicant is not notified till a year after submission, then has to rush in a new or revised proposal in hopes of getting funding that, if it comes at all, won’t arrive for another year.
There’s also a bias against truly novel proposals. NSF requires five or six reviewer reports. If a proposal is “far-out” it’s likely that one or two of the reviewers will fail to fully understand it, so the chance gets pretty small of getting nearly unanimous “excellent” scores, as needed for approval. Less novel proposals, closer to the mainstream, are more likely to be approved because the reviewers are doing similar work themselves. In his autobiography, Luis Alvarez strongly criticizes the current peer reviewing system, saying it’s a disaster.
Yes, it is.
Today most graduate students and postdoctoral fellows in science serve as hired hands on a project defined by a research grant. That has at least three bad consequences. (1) It limits their freedom to explore very far from the defined project. (2) It is, I think, a major reason why the time to complete a science Ph.D. in American universities has expanded to a norm of six or seven years. For those hopeful of faculty positions, a postdoctoral stint of two or three years is expected. The age of people getting their first independent NIH grant is now 42! This stretching out of the apprenticeship has occurred, I think, largely because a veteran grad student or postdoc is much more valuable, in producing papers to support renewal of a grant, than a neophyte would be. (3) The funding system has also degraded the quality of a Ph.D. Grad students now tend to take fewer advanced courses outside their research specialty, and faculty teach fewer small advanced courses. That’s because the pressure to feed grant proposals discourages students from devoting time to such courses and faculty from teaching them. If support of grad students, and preferably also postdocs, on grants to individual professors were abolished, the same money could be put into expanding greatly the number of fellowships that students can win for themselves and into block training grants to university science departments. In applying for the training grants, the departments should have to describe a Ph.D. program structured to foster breadth, not just narrow specialization. The huge burden on individual faculty members to raise funds to support students would be removed, so proposals would involve chiefly apparatus and auxiliary items. Students would be able to join a research group without concern about the status of funding for specific projects. They would no longer be hired hands. I’m pretty sure such a liberating system would alleviate the drawbacks I mentioned.
I was a grad student before it became usual to support grad students on research grants. In 1955, when I came to Harvard, 33 of the 35 chem or chem physics grad students entering had their own fellowships, some from NSF, many from private corporations. That certified students as national resources rather than hired hands. It profoundly influenced students’ outlook and approach to grad study, and the time to complete the Ph.D. was usually close to four years, often less. I’d like to see funding agencies adopt a policy that students awarded a fellowship or training grant who complete the Ph.D. in four years be rewarded with a postdoctoral stipend for a year to work at a lab of their choice. I’ve talked with many faculty members concerned about the drawbacks of the current funding system. Unfortunately, most fear that reforms of the sort I’ve suggested might reduce the overall investment in university science.
In addition to your research, which speaks loudly, you’ve been a prominent teacher on this campus, well regarded. You’ve probably advised students and undergraduates.
So let me start by asking this. You said earlier today that at Stanford, as an undergraduate, Western Civilization may have been the most important course you took. When you advise students, chemistry majors probably, do you advise them to take humanities courses and broadly educate themselves?
Yes. I often point out that people think science is essentially a technical subject, but actually frontier science is better considered as architectural. An architect has to understand a lot of technical things, but they’re not the essence of architecture. An architect, at any historical period, has to consider the materials and construction methods available. Those are available to anybody, but the architect sees how to put them together in a way that opens up new possibilities and serves the purpose of the project particularly well. When we encounter that, we recognize it as good architecture. Frontier science is like that. A really fine architect has to have a broad view of human culture, human psychology, and much else that isn’t directly connected with bricks and mortar. Again, I think the same goes for science. I tell students that they will find that in science, as in many other fields, what you will be able to accomplish depends very much on how you interact with other people and how you communicate with them. Many resources that you can draw on are really cultural more than technical. What is called liberal arts education is valuable in science. And vice-versa. Like Rabi, I think science ought to be a central part of a liberal arts education that aims to cultivate the habit of self-generated questioning and critical thinking. “Why should I believe this? What is the evidence?” Those should be habitual questions. If you study history or psychology or philosophy or whatever, there you practice critical thinking of a different sort than in physical chemistry; so much the better. It’s enlarging your scope and capacity to do the essential basic thing, and it’s also making you more aware of human culture. Rabi has a fine statement I’ve quoted several times. It’s in your book. Speaking about humanities and science he says the beauty of them is not in the subject matter alone.
He laments that scientists don’t communicate effectively with nonscientists, and too often “we teach science as if it’s about the geography of a universe uninhabited by mankind.” For me it’s gratifying to help students see that science is very much a human enterprise. Textbooks often make it look as if it proceeds by brilliant ideas and discoveries by Olympian figures. But I like to emphasize that science enjoys a huge advantage over other human enterprises. The goal, understanding nature, waits patiently to be discovered. That’s why ordinary human talent, given sustained effort and freedom in the pursuit, can achieve marvelous advances. So we all can play a role, whatever it’s going to be, and be part of an unfolding saga. We receive a grand legacy from previous generations, try to add what we can to it, and pass it on. Our roles change in different periods of our lives, but even way past our prime we can enjoy gaining new insights. I get much pleasure in seeing the lovely things students of mine have gone on to do.
Go back to liberal education; your comment about questioning is a good one. Another thing you can say is that a liberal education should bring a person to be at home in the world they live in, in the culture they live in, and if you accept that, then in 2003 science should be an important core part of that liberal arts education. Is it here at Harvard?
It’s nowhere near what it should be. Many students come already having been conditioned to think that science is just for a special geek subspecies. They get that message in many ways. That happens in other countries too. Last October I went to Korea to give three lectures. I wound up giving five because my hosts were so concerned that so few young Koreans are going into science.
Okay, you were at a meeting in Korea.
At the Korean Physical Society. The session I took part in had a talk by a Japanese fellow and an English fellow, and both described the same phenomenon. They had data showing that, just as in the U.S., native-born students aren’t going into science, undergraduate or graduate, as much as they used to. It’s worrisome. We recognize some reasons, but basically it’s a paradox. In bookstores, never before has there been anything like the wide range of wonderful science materials now available, accessible to any literate reader. Similarly, there are excellent TV programs, on NOVA and elsewhere, as well as myriad websites. There was nothing like that when I was a kid. Yet it seems like a smaller fraction of the population is drawn toward science now. I wish we could do more about it, but we do what we can. For colleagues discouraged about our future scientists, I urge them to go see the International Science and Engineering Fair, where I was last week. Or the Science Talent Search (formerly sponsored by Westinghouse, now by Intel). Both of these exhibit remarkable things done by high school students. Both are conducted by Science Service, a small non-profit outfit in Washington. I was recruited by Glenn Seaborg to serve on the Board of Trustees. Science Service was launched in 1921 to publish Science News, which they’re still doing. It’s a superb magazine presenting science in the National Geographic style. When you talk with students at these fairs, it’s pretty inspiring to see the high quality of their projects and their enthusiasm. At ISEF there are 1,000 students, all winners at qualifying fairs, and 900 volunteer judges. Leon Lederman and several other Nobel Laureates regularly attend ISEF and take part in a special program responding to questions from the students, and another meeting with the judges.
You teach a freshman chemistry course that’s called Chem Zen.
Yes, Chem Zen is the nickname the students gave it. Over a span of 20 years, I taught four versions of general chemistry for freshmen. Chem Zen is the final version, given for the past 8 years. This fall was my last go at it, as I’ve now become Emeritus. I tried to teach it as a humanities course, or what I imagine would be a humanities course. It covers a lot in just one semester, dealing with three main areas: chemical kinetics, thermodynamics, and molecular and electronic structure. Each week or so, I’d present what I called a Molecular Parable, a story pertinent to the science topic at hand but meant to put it in a broader context. For instance, I’ll describe a favorite parable, prologue for our discussion of the gas laws. It’s titled “How Aristotle and Galileo Were Stumped by the Water Pump.” The idea came from a 1950 book by James B. Conant. As president of Harvard, he led admirable efforts to improve science education, which included launching the careers of Len Nash and Gerry Holton. The students have all heard about the ideal gas law, “PV = nRT.” I tell them that to really appreciate how that tidy equation emerged, and the impact of underlying concepts, we need to start with the water pump. Commonly called a “suction pump,” it has been used for thousands of years and is still used by billions of people today. (Of course, I always ask how many in the class have used such a pump and mention I did on our farm.) Aristotle said it worked because Nature abhors a vacuum. But even in his day, it was known that you could not pump water more than 34 feet, a limit Aristotle apparently ignored. (I ask the students whether they were aware of the 34 ft. limit; usually only 2 or 3 hands go up. I suggest they might want to think about an explanation for a bit.) Meanwhile, we’ll jump ahead 2000 years to Galileo. In his Two New Sciences, published in 1638 (anybody know when Harvard was founded? — 1636), Galileo discussed the 34 ft. problem, which had been called to his attention.
He suggested the limit might occur because a column of water higher than 34 ft. would break of its own weight. (Anybody seen such a column?) But that’s not right. (Any suggestions?) Who did solve the age-old puzzle? It was a student of Galileo, named Torricelli. (I hope some of you students will solve problems your professors could not!) He did it by taking seriously something Galileo had said but had failed to recognize was related to the 34 ft. limit. Galileo had speculated that tiny, invisible corpuscles might make up the air and weigh something. He even tried to weigh air, but his experiment wasn’t good enough. He filled a leather bag with air, weighed it, then flattened the bag and weighed it again. He found no discernible difference. (Oops, why a poor experiment?) Torricelli must have realized that the tiny air corpuscles, if they existed and had weight, possibly extended far up in the atmosphere, so could be pressing down firmly enough to push water up 34 ft. He devised a better experiment, and consequently invented the barometer. Instead of water, Torricelli used mercury, 14 times denser, so he only needed a tube somewhat longer than 30 inches, much easier to handle. Next I note the controversy over whether the “Torricellian vacuum” apparent at the top of his barometer tube really existed, debated in many scholarly papers. Then describe subsequent experiments using vacuum pumps, showing sound won’t transmit through vacuum, mice die without air… and air obeys (roughly) the ideal gas law, etc. Demos done with student participation include collapsing a 55-gallon drum. Follow that by inviting the smallest member of the class to outdo the TV ads showing football players crunching beer cans, by crushing a can without squeezing it at all. Set out a soda can, put in a little water, pop it on a Bunsen burner, let it fill with steam. Have a student grab it in an asbestos mitt and quickly dip it, top down, in cold water.
Then have students estimate the weight of the earth’s atmosphere, using Torricelli’s data for the pressure it exerts at the earth’s surface. Have them calculate how that compares with the pressure they exert on the earth when standing and when stretched out flat. Have them estimate the pressure differential needed to lift a 727 airplane into the sky, compared with a seagull. Finally, estimate the order of magnitude of the number of nitrogen molecules in the atmosphere (~10^44). Do the same for a breath (~10^22). So, one molecule is the same fraction of a breath as one breath is of the atmosphere. As nitrogen is pretty inert under atmospheric conditions, we can infer that most of it is quite ancient, so if spread uniformly, on average you’re likely to inhale a molecule exhaled by Aristotle or Galileo or by Cleopatra, or… Finally, ask this question: “What if Hercules, after his famous twelve labors, was required as a 13th labor to weigh the Earth’s atmosphere?” Do you think he would’ve come up with Torricelli’s answer? If not, would his failure prove to be a blessing? Among the heritage of Western civilization we’d have a story about a hero with great strength and courage who fails for lack of an intellectual insight. Ladies and gentlemen, such failures often happen; are those failures abetted perhaps by the many tales we have of heroes winning just by strength and courage? Well, that’s a sample parable, serving to generate questions, some serious, some fun, some both. For instance, here’s another: “Does anybody know Italian? What’s Torricelli mean?”
I don’t know.
It means “Little Tower.” Isn’t that uncanny? Could his family name have predisposed him to think about columns of air and mercury and so make an epic discovery? He actually also developed a pneumatic pump. Sometimes I roll in one like it from the historical collection; it’s a beautiful instrument. When the Royal Society of London was founded in 1662, King Charles the Second, who was the patron, soon became very upset because “These gentlemen are just discussing nothing.” The reason was that vacuum was a huge deal then. The long-standing authority of Aristotle was being questioned, confronted by the experiments of Torricelli and others. It was a serious debate. Transmission of light through a vacuum was presumed impossible (as it should require a material medium, as with sound). Thus, it was argued that the space at the top of Torricelli’s barometer could not really be a vacuum, because it allowed light to pass; so there must be invisible threads holding up the mercury column. Parables like this, I hope, help students appreciate science from a humanistic perspective, as urged by Rabi. Of course, many of the students think such a parable is just a curious digression that won’t help them on exams. But to me it’s essential to understanding science as a liberal art. Of course, it is important to help the students learn nuts-and-bolts things, but there too I try to broaden their perspective. For instance, take PV = nRT. It’s easy enough for students to understand that given values for three variables, you can calculate the fourth. But I say to them: “There is a practical problem when using this neat equation; what is it?” A student will answer: “You have to know R.” I reply: “Right! How do you find out R?” Usual response: “Look it up in a book or on the web.” My reply: “You can do that, but actually you already know what R is. You just have to think a little about what you know.” Pause.
Then I hold up the traditional box (shown earlier), labeled 22.4 liters: “What’s this?” Response: “Oh, that’s one mole at STP.” My reply: “So you know R. How come?” Pause. “Well, there are four variables and you know one situation in which you know all four values. That gives you R = PV/nT, which will hold for any other values of the four variables. So by just remembering the black box, you never again have to look up the value of R. If you need it in different units, it’s easy to convert the units of the four variables.” I emphasize that students will meet many such formulas and recommend they just make for each their own box for a reference set of the variables. As well as making constants like R friendlier, that fosters thinking about what the variables really represent. At the first meeting of the course, I explained that my aim was to teach in a liberal arts mode, in accord with the definition “Education is what’s left after all you’ve learned has been forgotten.” That’s attributed to James Bryant Conant. It can be restated as “Education is what’s left that you are unable to forget.” I like both versions because they convey that the aim is for students to take ownership of the subject. Parables, almost by definition, are hard to forget. So are well-designed labs or thought experiments like representing R as a box. I found freshman chemistry especially challenging to teach because so many of the students arrived having been conditioned to cope with science courses just by memorizing what seemed likely to appear on tests. I tried to get them beyond that.
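[Editor's note: the back-of-the-envelope arithmetic in this parable can be sketched in a few lines of Python. This is only an illustration; the round inputs (a 0.5-liter breath, the mercury/water density ratio taken as 13.6 rather than the rounded 14, Earth's radius) are assumed textbook values, not figures from the interview.]

```python
import math

# 1) The "22.4-liter box": one mole of ideal gas at STP gives R = PV/nT.
P_atm, V_L, n_mol, T_K = 1.0, 22.4, 1.0, 273.15
R = P_atm * V_L / (n_mol * T_K)
print(f"R = {R:.3f} L atm/(mol K)")          # ~0.082

# 2) Torricelli's barometer: mercury is ~13.6x denser than water,
#    so the 34-ft water limit shrinks to a column of ~30 inches.
mercury_in = 34.0 * 12 / 13.6
print(f"mercury column = {mercury_in:.0f} inches")

# 3) Order-of-magnitude N2 counts. Atmosphere mass ~ (surface pressure
#    x Earth's surface area) / g; take N2 as roughly 3/4 of that mass.
P_pa, g, R_earth = 1.013e5, 9.81, 6.371e6    # SI units
m_atm = P_pa * 4 * math.pi * R_earth**2 / g  # ~5e18 kg
mol_N2 = m_atm * 0.75 * 1000 / 28            # grams / (28 g/mol)
print(f"N2 in atmosphere ~10^{math.log10(mol_N2 * 6.022e23):.0f}")

# One ~0.5 L breath at STP, with air ~78% N2 by moles:
mol_breath = 0.5 / 22.4 * 0.78
print(f"N2 in a breath ~10^{math.log10(mol_breath * 6.022e23):.0f}")
```

The exponents come out to ~10^44 and ~10^22, matching the classroom estimates, so one molecule is indeed the same fraction of a breath as one breath is of the atmosphere.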
Are you still teaching this course?
Not now that I’m emeritus. I’m teaching a freshman seminar.
This was a course for non-majors?
Not only non-majors, majors also took Chem Zen.
Chemistry Zen was for majors?
Yes, not only majors but everybody, many pre-meds. Typically enrollment was about 250 students. Only about 10% of them would be likely chemistry majors.
Oh, that is very, very nice.
There’s an aspect related to such melding that deserves mention. Many people think that students who want to be scientists should go to MIT or Caltech, where most of the students are hotshot prospective scientists, rather than a place like Harvard, where most of the student body is not. Thirty years ago or so, a chemistry professor at Caltech compiled data on where distinguished American chemists had been undergraduates. He set up a number of criteria for distinction, such as election to the National Academy, publications, and prizes. He had expected Caltech undergrads would rank near the top. To his surprise, he found that the most distinguished subspecies of American chemists had been undergrads at Harvard, followed by undergrads from other Ivy schools and Stanford, my alma mater. Caltech was about eighth on the list, MIT not much higher. Ten years or so ago, the survey was repeated and got essentially the same results. In fact, I was at Caltech on sabbatical in 1976, about the time of the original survey. Faculty there were dismayed that every year some of their most promising undergrads would leave. They’d just give up. Faculty also told stories of kids studying in closets because they didn’t want others to know how hard they were working. The Caltech faculty thought that the undergrads were under too much pressure, because each class is small, composed of the crème de la crème of very good high school kids. But compressed together at Caltech they’re all good in the same way, so they damage each other’s self-esteem. I read something by Ed Land; I think it was a speech he gave at MIT, where he said, “Kids have to be able to think of themselves as special.” If they’re selected to be all good in exactly the same way, it’s hard for everybody to think they’re special. When kids enter Harvard, they soon realize they’re all special but in many different ways. So they can still think of themselves as having a special destiny.
Their roommates and many of their friends are likely not to be science students, for the most part. So they have to interact a lot with people who have a different background and outlook. That is educationally healthy for everybody, and for science students surely broadens perspectives. It seems likely to contribute significantly to the later distinction in chemistry of Harvard undergrads found in the Caltech survey. In short, such an educational environment is better suited to producing architects. As you have got me into preaching about science education, I would like to unload a few further comments. Why does so much good frontier research in science, opening up new vistas, come out of universities? The people doing it are mainly amateurs and dilettantes. In saying that, I mean to be sympathetic, not unkind. The students and postdocs are apprentices, not paid as professionals, so can be fairly termed amateurs. The professors, devoted as most are to research, have so many other things to deal with that they usually have to function as dilettantes. Contrast this with a corporate or government lab. There the research is conducted by professionals who can focus more fully on it, often with considerable advantages in instrumentation and funding as well. How come so many major advances actually emerge from universities despite what might seem substantial handicaps? I think the chief reason is the teaching environment. The steady input of new students keeps both the amateurs and dilettantes going back to fundamentals. That is very important for frontier research. This view is reinforced by experiences I’ve had as a consultant or member of visiting committees for government or corporate labs. I haven’t done a lot of that, but my observations resemble those I’ve heard from colleagues. Typically, the visitor hears the researchers describe what they’ve been doing. Usually you’ve never heard about it before so you have to ask very elementary questions. 
You soon recognize that the researchers are indeed experts. But they aren’t used to thinking anymore at the freshman chemistry or freshman physics level. They’re way beyond that. On several occasions, progress on the project under discussion had been stymied for quite some time. If the researchers had the habit of going back to the basics, they would’ve seen what the roadblock was. Yet a visitor did see it, with the advantage of that habit, imposed by teaching students. I think that teaching and research have a tremendous symbiosis. I’m sad to see that this is generally not recognized. The way university research is funded gives the impression that at a place like Harvard, research is all that matters. So it seems obvious that devoting time to teaching subtracts from research. NSF really wants to encourage teaching and outreach along with supporting research. As I mentioned already, the current funding system makes it crucial for faculty to get research grants to support grad students and postdocs. That’s in fact most of the budget of science grants. If, as I advocate, students were supported independently, grants would be much different in character and would not distort so much what should be a natural mutual reinforcement of teaching and research. Another thing I’d like to see could be much more easily implemented. Speakers introduced at departmental seminars or at meetings are most often professors. We always hear about their academic pedigree and awards, but almost never is it mentioned that they actually teach. Why not make it customary to at least tell what they teach? Over time I think that would make a difference. For years I’ve asked people who introduce me to do that, and when I’m doing the introducing, I find out about and mention the speaker’s teaching. I’ve also long thought it’s peculiar that a Ph.D. is considered prerequisite to teach at a university, yet the thesis presents only research. Most grad students teach.
Some are outstanding teachers and many devote a great deal of effort to it. I’d like to see it become customary to have two special chapters in a Ph.D. thesis. One would describe teaching experience and perhaps innovations made by the candidate. The other would present an overview of the research, in a broad way accessible to nonscientists. That would signal that a Ph.D. goes beyond being a technician and should strive to become an architect.
Have you asked your students to do that?
I have suggested it but not insisted. Some have responded in that direction, but there are two practical impediments. They know perfectly well that such chapters would “not count.” Also, most often the candidate is toiling down to the wire to get the thesis in before some deadline. So in most cases it’s not realistic to ask for such “extras.” That would not be so if it became customary. To encourage it, I think the best approach would be to provide sizable prizes for such chapters. Each department would then need a committee to select the prizewinners, so it would affect the attitudes of the faculty as well as the students.
What do your colleagues think of your course, taking this approach?
Well, Harvard tends to be rather insular. Everybody is so busy with their own activities and typically has a lot of demands or involvements in the outside world. For the most part not much attention is paid to what colleagues are doing in the teaching arena. Of course, there are exceptions, but generally each faculty member deals with teaching in their own way. Even when nominally two faculty share a course, usually they don’t attend each other’s lectures; each just treats certain topics without much coordination with the other. It may be different in some departments or fields. I was happy to hear from colleagues who taught organic chemistry that students who had Chem Zen did very well in their courses, by virtue of a “conspicuously” good grasp of the basics of chemical kinetics, thermodynamics, and chemical bonding, especially molecular orbitals. So the liberal arts approach of Chem Zen didn’t seem to do harm, although of course we can’t conclude it did good either. I suspect that if my faculty colleagues were to visit Chem Zen, at least some of them would regard my parables as “digressions.” About 20% of the evaluation forms filled out by students said “He digresses a lot.” Actually, I worked harder preparing the parables and related demos than the traditional material. It could be that I think they are important just because at root I’m a preacher. That suspicion goes back to my sophomore year at Stanford. I took a psychology course that required students to take various tests, including a vocational interest test. It had been given to a lot of professionals in various fields. Your interests were assessed by comparing the profile of your answers with that calibration. For me, it showed zero overlap with real estate salesmen but huge overlap with preachers. It also indicated that I resented authority enormously. That was a surprise to me because I thought I was just a sweet fellow who naturally didn’t like to bother people.
But then I realized the reason was that I didn’t want people telling me what to do. Anyhow, I’ve wondered whether that’s why, even when I’m excited about some notion or opinion, I’m usually reluctant to push it too strongly, either on students or faculty colleagues.
When you teach more advanced courses, like graduate courses, do you take a standard approach or do you just try to leaven that a bit with —
I taught quite a few different courses and always told stories, mostly of the kind I think of as parables. As a student, I felt I got valuable insights from such stories. And it did enliven things a lot to learn something about the people and circumstances that led to discoveries. I’ve often compared becoming a scientist to becoming a musician. Even if you have talent, you have to practice a lot to master your instrument, read the literature, and understand the culture. Science is easier, because you will surely play a lot of wrong notes, yet when you get one right you’ll be appropriately applauded. In any case, it’s not just a matter of learning to solve equations or to get apparatus to work. In teaching quantum mechanics and other courses that involve lots of equations, I always had nice notes all written up and handed them out. I had the students work out a lot of homework problems. Then in class meetings I had them volunteer, usually in pairs, to lead discussion of parts of the notes and problems. I would interject stories and comments as we went along. Like Debye, I wanted to be sure the students appreciated the cultural and historic context. Also the connections between the material at hand and other things. Rather than the usual sort of tests, in graduate courses I had the students write papers on topics of their choice. Again, the aim was to help them take ownership of the subject or some part of it.
Let me ask you about Currier House.
Yes, that was a wonderful experience.
Why did Dudley Herschbach, with your plate as full as it was — you probably had a lot of graduate students. You came to Harvard in 1966?
What were the years you were in Currier House?
So you were full head of steam at that point.
Yes, although I only had 5 or 6 grad students then. As a matter of fact, I had served from 1977-1980 as Department Chairman. That was a strenuous time. Then I had a year on sabbatical, before being invited to take on Currier House.
Was Bright alive then?
Did he advise you against it?
No, I don’t think I asked his advice. He probably would have advised against it. That’s probably why I didn’t ask him. Here’s what happened. Two of the tutors at Currier House were my graduate students and urged me to consider it. So Georgene and I went to visit. We’d never been there before. It’s up in what’s called the Radcliffe Quad, about a mile from the main part of the campus. Currier was built when Polly Bunting was president of Radcliffe. She felt Radcliffe had to institute a house system because otherwise women would always be second-class citizens. Radcliffe had just a bunch of dorms, while Harvard had since the 1930s these fine houses. So she raised the money and built Currier during the early 1970s. Back then it was called the Harvard Hilton, although it belonged to Radcliffe, not Harvard. Currier had special things, including a dance studio, an art studio, a wood shop, music practice rooms, and even an attached child care center. Later in the 1970s, a great and abrupt revolution occurred. What had previously been considered unthinkable happened in the space of a year: women and men began living in the same dorms. Currier House opened just after another abrupt change came: Black students were for the first time recruited in significant numbers. Before the assassination of Martin Luther King, Jr., in 1968, Harvard, Stanford, the Ivies, all those schools never had more than 1% Black students. For the next incoming class after his assassination, all those schools invited 8% Black students. By the 1980s, Currier was described in the Harvard Crimson, the student newspaper, as “where the third world meets the nerd world.” In part, that referred to the concentration of most of the Black students there. At that time, and for more than a decade after, sophomores were assigned to Harvard houses by a lottery system designed to satisfy the preferences of students as well as possible.
The original houses, nearest Harvard Square and the river, were most popular; those up in the Radcliffe Quad, including Currier, much less so. For most Black students Currier was their first choice, so the lottery assigned them there. In contrast, few of the “River houses” had even a single Black student; most had none. This happened, I think (there’s been no study of it as far as I know), because when numbers of Black students arrived, they saw the River houses, and the “Finals clubs” around them, as monuments to a society that had excluded their ancestors. That aura did not cling to Currier House, new and up in the Radcliffe Quad. Nowadays the house assignments are done randomly, so all twelve houses have similarly diverse populations. On our first visit to Currier, we learned about serious racial tension. The previous “Masters” (terminology inherited from the British Oxbridge colleges) had to leave after only two years. They were nice people but got into a terrible situation. The house superintendent, who was Irish, disliked Black students. When an elegant chair disappeared from the master’s quarters, the superintendent took the wife of the master around looking for it, but only looked in Black students’ rooms. Other episodes made things worse. The previous year, about 20 of the freshman football players had “blocked” in the lottery, asking to go to the same house together. Of course, that negated their chances of getting into a River house, and they wound up in Currier House. For years, the Black students had comfortably run student affairs at Currier via the elected House committee. Arrival of the block of 20 football guys created much tension, particularly since several were genuine rednecks. Most had failed to make the varsity team, had little interest in studying, but did a lot of loud late-night partying, drinking gallons of beer. We heard about these problems, and worse, from the Senior Tutor, a very earnest, likeable young woman. We were startled.
If Black and White students can’t learn to get along at a place like Harvard, what chance do they have in the wider society? We were optimistic about improving things, having met a fine group of Currier tutors and students. So we felt the Currier situation was something we should take on. Our daughters were nearing high school age and Georgene liked the idea of getting experience in administration. Indeed, she got a lot of it in our five years there. It actually was a particularly good training ground for college administration. The system puts the Houses at the intersection of the administrative systems of the undergrad college, the grad school of arts and sciences, buildings and grounds, food services, the office of the arts, the athletic department, and more. Various activities involved special interactions with the medical school, law school, music department, etc. Further instructive complexity entered because at that time Currier was owned by Radcliffe but operated by Harvard. (It was not until some years later that Radcliffe and Harvard finally merged administrations.) Georgene, with help from an assistant, handled most of the multifaceted administrative tasks and communications. After our “graduation” from Currier, she went on to become Assistant Dean, then Registrar, and now Associate Dean of Harvard College. During our first few weeks at Currier, in the fall of 1981, many of the students were a bit leery of us. As we were told later, they were suspicious that University Hall had put us there to deal with the problems by clamping down pretty harshly. Of course, we had no such notion. Our motto was “no rules, only principles.” We wanted to help them to work things out on their own. The ugly racial tensions faded away rather quickly, because student leaders emerged who dealt with them. We’ve always emphasized that they deserved the credit, not us.
For example, previously there had been conflict over parties; the Black students wanted relatively quiet parties for conversation and dancing, with no alcohol; the White students favored very loud music and lots of alcohol. We just appointed a committee of students and tutors, which found it was easy to agree on principles, and they devised arrangements that worked nicely. Most remarkable during our first year was an unprecedented success in the intramural sports competition among the Houses. The intramural program, overseen by the Athletic Department, was quite extensive. It included about 40 sports, among them tackle football (with pads and helmets), soccer, a two-mile run, wrestling, fencing, basketball, volleyball, ultimate frisbee, and even crew races on the Charles River (alumni had provided shells and oars for each House!). In many of the sports, both men’s and women’s teams could enter, and mixed-gender teams in some cases. There was an elaborate point system, and the House that got the most points was awarded a venerable trophy, the Straus Cup. Currier had never before come in higher than 11th, but in 1981-82 it actually won the Straus Cup. How it happened showed that sometimes virtue really is rewarded. At the start of the year, we met with the students who had volunteered to be in charge of our Athletic committee and suggested two principles: (1) we’d enter all events; (2) anybody who showed up would get to play, regardless of ability. We just wanted to maximize participation, with no ambition to win. We didn’t anticipate that other Houses, eager to win, gave only their best players much chance to play. Soon students who didn’t get to play much would stop showing up for games, so further attrition due to exams or other activities often resulted in forfeits for lack of the minimum number of players. Currier never had to forfeit, so we picked up a lot of easy points. Midway in the fall semester, Currier already had more points than ever before in a full year.
That spurred participation still more; e.g., students who had never done fencing or wrestling did so. Georgene played center very effectively on the women’s basketball team and we both took part in running events. The climax came in the crew races, in late spring. Currier didn’t have any students who had done crew before, but a tutor who had rowed for Princeton served as an excellent coach. Eliot House, which had many students from prep schools with crew experience, had long been dominant in the crew races. In a dramatic race, our women’s B team, none of whom had set foot in a shell until 5 weeks before, nosed out Eliot’s team and clinched the Straus Cup. To celebrate the unprecedented victory, we got a permit from the City of Cambridge and carried the Cup up to the Quad in a parade led by bagpipes — also an unprecedented event! Afterwards, I did some research on the Straus Cup competition. The Athletic Department had records going back for decades. Also, I found that the Harvard Admissions procedure included assigning to each student a number between 1 and 7 indicating an estimate of athletic ability. From that data, I calculated an athletic index for each House, and found a very strong correlation with the intramural points. The correlation showed why the Straus Cup was nearly always won by one of the three River Houses preferred by athletes. It also predicted that Currier should have again ranked 11th, with only 25 to 30% of the points it got in 1981-82. Thus, apparently about 70% can be attributed to consequences of principles (1) and (2). Moreover, Currier House remained near the top of the Straus Cup competition for years afterwards. Unexpectedly doing so well in sports carried over to much else. For instance, a few students, entirely on their own initiative, led the way to a greatly expanded public service program. Essentially, Currier adopted a neighborhood in North Cambridge.
Our students taught adults there English as a second language, tutored kids and did much else. To raise funds to support the service activities, not only at Currier but also at other Houses, the students launched an annual dance marathon. There’s much more I could tell you, just wanted to mention enough to indicate why we cherish memories of our experience there. Old-time Housemasters at Harvard used to serve for decades, but in our day it was a five-year appointment, although renewable. At the outset, we decided to do it just for five but went at it full tilt during those years. As we liked to say, during that time we were reincarnated as undergraduates. Some of my colleagues presumed that being a Housemaster is something of a lark. We did have a lot of fun, but also felt we were doing serious work of abiding value. In 1986 we “graduated” from Currier, and three months later the Nobel Prize was announced. I happened to be scheduled to give a talk at the Science Table the next night at Currier House, so I did and of course, everybody was very happy about this coincidence. You asked me earlier about awards I especially liked. I should mention a joint award to Georgene and me from Currier. There was a situation, I’ll not go into specifics, where we had to take a very firm stand against something a group of Black students wanted to do that we felt was unacceptable. We explained why, and there was angry reaction from some of them. But they changed their mind about it. So when the Black Student Association gave us an award, it meant a lot.
Sure. What was your schedule like during that period? Did you live there? You lived there.
Oh yes, we lived there. I continued to teach a regular schedule. Housemasters were entitled to reduce teaching to half-time, but I didn’t take that option. While I was Department Chair, I had discovered that things had evolved such that no senior faculty member was teaching any of the first two years of chemistry; only junior faculty were. I jawboned a lot, with the result that two years later all those courses were taught by senior faculty. While jawboning, I’d said that after my chairmanship and a sabbatical year, I’d teach freshman chemistry. Of course I didn’t anticipate going to Currier House. That first year was strenuous, but in the Spring I started teaching freshman chemistry because I didn’t want to cop out on my promise to do so. It was really hard because it took me about seven hours to prepare each lecture. You have to plan very carefully what you say. The course I taught then was for less-well-prepared students, so you could easily confuse 300 students in a twinkling. I had heard Purcell give a lecture about the virtues of teaching elementary courses with viewgraphs. So I made up viewgraphs, which took a good part of the seven hours per lecture. I also got up early to take our daughter Brenda to the train because she was commuting to Milton Academy. Often she had to take her cello along. At any rate, I didn’t get a lot of sleep, especially that spring term. Students didn’t sleep a lot either. Between socializing and study, they usually wouldn’t get to bed until 2:00 or 3:00 in the morning. I’d teach a 9:00 a.m. class, so I’d get up very early to work more on my lecture. It was a strenuous but fulfilling time.
Did you get any research done?
Well, my output of papers decreased a lot, in fact by more than 50%. Yet, it was during the Currier years that I got started on dimensional scaling. And afterwards, my research output actually increased markedly.
This is Currier House? [Looking at bound volumes of papers; that for the Currier period much thinner than volumes before and after.]
Yes, I had much less time to write papers when we were at Currier House. Also, few graduate students joined my group during that period; others probably thought I’d lost interest in research. It was a while before my group built up again. But at any rate, I’m very glad we had the Currier years; it was a fine experience.
What influence did it have on you as you look back?
Well, I think it helped my teaching to live among the students. I came to understand many things that were very different from my student days.
Were you essentially a tutor in the house in Currier?
Not really a tutor, but you’re there most all the time and you eat most of your meals with the students, so you have many conversations.
Did they come to you with problems?
Oh yes, sure.
The Senior Tutor is a resident dean, and handles many of the routine things. The more serious academic problems do reach the Co-Masters. You’re actually responsible for everything. An especially important task was recruiting tutors each year. Usually there was turnover of a quarter or a third of the tutoring staff, so you’d probably hire five to seven resident tutors a year, grad students or young faculty. In addition to about 20 resident tutors, we also had about 40 nonresident tutors and 30 faculty affiliates who would visit often. There were always many visitors and lots of activities going on, many involving students and tutors together. Music was pervasive. We were only a few blocks from the Longy School of Music and developed a collaboration with them. Among our students many were serious and talented musicians. Also, we recruited as music tutors two professional pianists, a married couple. They and their friends often performed at Currier, to rehearse before a competition or concert. In fact, about 50 concerts were held each year at Currier, ranging from solo performances to small ensembles and bands playing all sorts of music, from rock and jazz to classical chamber pieces. I might add that it was a lot better being Housemaster than department chairman. In both cases you encounter some childish behavior. As Housemaster you deal with people younger than you, and that’s easier. When you’re chairman, the most difficult people are usually older than you. I’ll refrain from telling about some humdinger episodes during my chairmanship. Our time in Currier House was a great experience. Now, years later, we keep in touch with tutors and students that we knew there. We still visit from time to time as members of the so-called Senior Common Room.
Well, that’s wonderful. I’ll tell you what. Why don’t we call it a day for today?
Yes, I’m awfully tired.
We’ll get back together tomorrow afternoon.
Sure, at 1:00.
1:00. All right, this will be it for now.