This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Dudley Herschbach by John Rigden on 2003 May 22,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview Dudley Herschbach discusses topics such as: his childhood and early interests in chemistry and physics; recruitment by Stanford University; decision to do graduate work with E. Bright Wilson at Harvard University; working on molecular beams at University of California, Berkeley; meeting Otto Stern; transition-state theory; Yuan Lee; things learned from Linus Pauling; consequences of winning the Nobel Prize in Chemistry; dimensional scaling; prizes he has won; grant support for research; advising students; biophysics; staying between physics and chemistry and in between theory and experiment; history of science; changes during his career and looking ahead.
This is John Rigden and we are sitting in the office of Dudley Herschbach. It is the 22nd of May and it is the afternoon. And we’re continuing from yesterday. I would like to pick up on a couple of odds and ends from yesterday and I would like to start by asking you about your latest research activities. We did yesterday speak to molecular dynamics, your early cross-beam experiments, and your first and second generation machines. We talked about the forbidden fruits. And we talked about the scaling, which I feel has real fundamental importance. The scaling brings you right up to now, because you’re still doing some of this. But you’re also doing work related to DNA, correct?
Biophysics. DNA. Right. It’s theoretical work, and I got into it because I had an orphan graduate student, Anita Goel. She’d been dismissed by a previous advisor. I’ve always felt that a faculty member ought to be first and foremost a public servant. So, as in some other cases, I agreed to take on a student reckless enough (or desperate enough) to want to have me as her advisor. What Anita had been working on before had to do with enzymes interacting with DNA. Of course I knew very little about this. But I felt that for her to complete her Ph.D. in a reasonable time, she shouldn’t start all over again. The relevant experiments, done by other people, were very nice in principle. A strand of DNA is stretched by means of what are called optical tweezers. The tension on the strand is varied while an enzyme in solution is interacting with this DNA. The enzyme moves along the strand of DNA as it’s catalyzing the addition of bases to a single-strand portion to convert it to double strand. They can see how that depends on the tension.
Of the double strand.
Well, it’s partly double and partly single strand.
Is there like a force constant?
Yes. The stretching of DNA under tension is measured. That’s done, in the absence of enzyme, for single strand DNA and for double strand DNA, which is much stiffer. Then it’s done with the enzyme present acting on a DNA that’s part single and part double strand. From the way the end-to-end distance of this stretched strand changes as the enzyme adds bases from solution to convert the single strand portion to more double strand, they can tell how fast the enzyme is moving along. For example, at low tension it’s typically 100 bases per second for the particular enzyme we studied.
The enzyme zips right along doing its business. The enzyme people like to call it a molecular motor. But as the tension on the DNA is increased, the motor slows down and stalls and eventually reverses and starts undoing the double strand portion, turning it back to single strand. However, the interpretation given in the literature by the people who did the experiments didn’t seem reasonable to us. There’s nice x-ray structural data on complexes of DNA with the very same enzyme that had been studied in the optical tweezer experiments. And it convinced us that the original interpretation could not be right. So we wound up developing a very simple model, which ironically is connected with the pendular molecule work we discussed yesterday. In that, we were orienting diatomic molecules traveling through a vacuum and subject to an electric field. Here, we’re talking about the orientation of segments of DNA near the active site of the enzyme in an aqueous solution containing bases that will get added to the DNA. It’s somewhat as if a field is imposed by the enzyme. The interaction with the enzyme restricts the volume within which the DNA segment is free to move, and then the enzyme changes its conformation. The enzyme landscape is, roughly speaking, like a right hand. The DNA strand that’s interacting with it comes across the palm between the thumb and the index finger. When the enzyme fingers close, it kinks the DNA a bit and then the incoming base from solution slides in between a couple of the fingers. If it’s the correct kind of base, for example, an adenine approaching a thymine, a Watson-Crick pairing occurs, catalyzed by the enzyme. Then the enzyme hand opens up again, moves one notch up along the DNA and does the same at the next site.
A hundred times a second.
A hundred times a second. But under tension the rate changes, slowing down. It’s obviously not too clear how to interpret such data. The fellows who did the tweezer experiments were physicists, and viewed things something like this: “We can tell something from this slowing down about the work the enzyme has to do against the tension force. We know the magnitude of the force. Work is force times distance. So what must be happening is changing the length of the DNA, because a single strand segment is longer than a double strand segment.” From such reasoning, it was inferred that the distance involved in the work that caused the slowing corresponded to changing two segments from single strand to double strand length. So it was concluded that two bases were being added in the critical configuration, what chemists call the transition state. But the x-ray structure of the enzyme-DNA complex in the closed configuration, considered the likely transition state, showed there wasn’t room for more than one. A similar tweezer study in another laboratory, interpreted the same way, concluded from work with a different enzyme that four bases were added in the transition state. That would be very odd, since each catalytic cycle incorporates just one base. The interpretation derived from inferred distance changes indicates that in the transition state you might have two or even four bases added, so then one or three of them would somehow have to be undone before starting the next cycle. Well, naturally, after so much work on collisions, I tend to think about angles. I thought maybe we could show that angular reorientations of the DNA segments are involved, like pendular molecules in an electric field. If such angular changes can account for the work inferred from the data, then it would not be a matter of changing distance per se, and could be consistent with adding just one base in the transition state. We showed in a model calculation that this seemed a plausible interpretation.
But it was just a model and involved some rough assumptions and approximations. Since Martin Karplus and his group had for many years been doing computer simulations of molecular dynamics for big proteins, we discussed this with him. He kindly invited us to undertake simulation calculations, in collaboration with Ioan Andricioaei, a very capable postdoc (now a professor at U. California, Irvine). It took a couple of years, and we have now just submitted a big paper. It’s the first simulation treating DNA interacting with an enzyme under tension. There are a lot of interesting features. It does turn out that the two segments right near the active site of the enzyme appear to be the most important and can be modeled as little pendular molecules. And the key thing is how, when the enzyme closes its fingers, it changes the angular freedom that the segments have, or the enzyme allows them to have. That accounts nicely for the tension dependence. There’s an aspect that invites a wedding analogy. The incoming base, destined to form a Watson-Crick pair, can be likened to a bride approaching the church. However, in the open configuration of the enzyme-DNA complex, the church doorway is blocked by a tyrosine group of the enzyme. Moreover, the base on the single-stranded segment of DNA that’s to be the groom seems uninterested; the groom is taking a snooze, lying down almost 90 degrees to the direction of the aisle along which the bride will enter. When the enzyme hand closes, the tyrosine door guard is rotated out of the way and the groom is flipped upright, allowing proper nuptial alignment. During Spring Break last year, my wife Georgene and I were in Paris for a week. We went to an exhibit of Chagall paintings and were delighted to see one he’d done in 1917 that depicts nuptial alignment, with the groom perched on the bride’s shoulders. Of course, I’ve shown that in recent talks.
We think that the reversal of DNA replication observed as the tension is increased might result from nuptial misalignment. That would correspond to the groom getting tipsy and sliding off the bride’s shoulders. It’s gratifying to find some connection between treating the orientation of molecular dipoles in an electric field and the interaction of DNA segments with an enzyme while subject to tension. The simulation results suggest that we can take structural data from the protein data bank on the web and predict some properties of other molecular motor systems without further large-scale simulations. We’re exploring that right now, but how well it works remains to be seen. In any case, we’ve had a happy time finding that wiggling DNA segments resemble pendular molecules in significant ways.
A happy occasion.
It made a nice Ph.D. thesis for Anita Goel. She’s in an M.D./Ph.D. program, jointly run by Harvard and MIT. Now she’s about to go off to finish the M.D. part of it.
How do you treat the literature? I mean, you read Physics Today and that’s not really the literature. But you’ve got an idea from the Witten paper for one piece of forbidden fruit.
Again, teaching has led me to look into things that sometimes open vistas for research — and vice-versa. My first reaction to the Witten paper was to use the idea of D-scaling for a homework problem, and it turned out to launch us on an exploration that’s still unfolding.
All right. Then you read somewhere about DNA. Now, that wasn’t in a chemistry journal.
Oh, well. Of course I’d heard a lot about DNA.
I know that.
But this particular problem, it was a personal thing. Anita got me into it. I would not have gotten into that particular problem otherwise. Actually, I think a lot of what I do is motivated by aesthetic or artistic appeal. Or by recognizing aspects that spur thinking of a simpler heuristic view. I don’t scrutinize the research journals in a systematic, regular way. More useful to me are less technical journals, like Physics Today, or books that give a broad, qualitative view. Usually, I don’t dig into research articles until I’ve gotten underway on a project myself. Often, the starting point comes just from a conversation or a seminar. Anita had told me about the optical tweezer work, and soon after we heard a seminar on that by Carlos Bustamante from Berkeley. It was lovely work. But the conclusion that two DNA segments were changed from single-strand to double-strand in the transition state did not seem reasonable. It really depended on a “global” model using stretching data on long single- and double-strands of DNA taken in the absence of the enzyme. But as chemists we naturally thought of a “local” model considering interactions of the DNA segments with the enzyme near its active site. This shifted attention to the x-ray structure data for the DNA-enzyme complex, which gave a very different picture than the global model. Putting emphasis on molecular structure was of course congenial for me, as it would be for you too, since we both worked with Bright Wilson. I’ve found plenty of opportunities to make use of what I learned in his lab.
It is amazing though how you’re able to take an idea and see how that same idea can be applied in a very different context.
Well, yeah. It’s probably because I’m basically very simple minded and I like to think in a pictorial way. So it’s a habit to look for simple-minded models, although of course they’re sometimes misleading. I’ve actually written more theoretical papers than experimental. But if you look at my papers you’ll notice I mostly have only one-line equations. I try to define notation and formulate the equations to exhibit the building-block ideas. That I owe to Pauling, to Debye, and to Polya. I much admire the way Pauling wrote for chemists. A nice example is a paper he wrote about barriers to internal rotation, which I read as a grad student. I remember that he made use of the spherical harmonic addition theorem. He didn’t say so, but clearly he understood it and used it perfectly. Anyone conversant with that theorem would see what he was doing immediately. Typically, less astute authors would want to show off their erudition by trotting out all the mathematics. Pauling knew his audience, so he made the heuristic understanding accessible without distraction. I’ve tried to emulate that, although in writing for chemical physicists, it’s appropriate to use more mathematics. But I try to tidy it up and supply words to keep the qualitative essence in the forefront. Often it’s a matter of defining notation carefully so the modular blocks stand out. Again, teaching freshman chemistry I think has helped me to do that better. It’s like being a journalist. You need to get to the crux of the matter quickly and supply a “hook” to interest the reader. I really find there’s not much difference between teaching freshmen and doing the kind of research I do. The basic approach is the same. Yesterday I mentioned how much I admired Nico Bloembergen because he gave such beautiful qualitative insights when we’d ask him a question in his solid state physics course. So in my courses I strive to provide and emphasize clear qualitative pictures.
Material that involves math I give them in sets of notes, with more motivating words and explanations than equations. Like Debye, I like to bring out connections and historical context before we look into the details or derivations. To gain confidence and take ownership, it’s essential for students to get qualitative, heuristic understanding. That you don’t get so easily from the textbooks.
You don’t get that.
So in all my teaching and lecturing and mentoring, that’s the approach. In research too.
All right. Now, is there any other aspect of your research that you would want to comment upon before we go to the next topic?
Well, there are quite a few other enjoyable episodes. I think I should restrain myself. The Annual Review of Physical Chemistry invites old-timers to write about the first 50 years of their career. My turn came in the 2000 issue, and I described a number of episodes not mentioned here. I’ll say a little about a couple of them. The chemistry behind meteor trails is literally eye-opening! Everybody has seen meteors streak across the sky leaving very long trails behind. Few people, even among scientists, know the light from meteors is primarily the atomic sodium D-line, the same yellow glow familiar in street lamps. It’s a fully allowed electronic transition. So that means it’s emitted in roughly 10⁻⁸ seconds. A meteor travels fast but not very far in 10⁻⁸ seconds. If the light came just from physical excitation, exciting sodium by heating as the meteor entered the mesosphere up there at 90 kilometers, all we’d see is a little glow — Rudolph the Yellow-Nosed Meteor — not a long trail stretching far across the sky. So how come there’s a trail like that? When I ask the audience, usually someone eventually says, “Well, it’s not physics, it’s chemistry.” Another revealing clue is that there’s only 1% sodium in the meteor. There’s much more iron and other metals, but we don’t see them light up. The reason is that sodium reacts with no activation energy with ozone that’s up there, and that produces sodium monoxide, which then reacts in a separate step with oxygen atoms to reform the sodium, some of which is excited. Reaction with other metals is inhibited by activation energy barriers. The two-step process involving sodium, ozone, and oxygen atoms is just like that for chlorine atoms (instead of sodium) reacting with ozone in the stratosphere, much lower down than the mesosphere where the meteors come in. The two-step mechanism was proposed by Sidney Chapman way back in 1939, to explain both meteor trails and sodium nightglow.
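[Editor's note: the timescale argument above can be checked with one line of arithmetic. The 40 km/s entry speed below is an assumed, typical value for illustration, not a figure from the interview.]

```python
# Back-of-the-envelope check of the "Rudolph the Yellow-Nosed Meteor" argument:
# how far does a meteor travel during the ~1e-8 s radiative lifetime of the
# fully allowed sodium D-line transition?
lifetime_s = 1e-8            # allowed electronic transition, roughly 10 ns
meteor_speed_m_per_s = 40e3  # assumed typical entry speed (meteors range ~11-72 km/s)

glow_length_m = meteor_speed_m_per_s * lifetime_s
print(f"Glow length from prompt emission alone: {glow_length_m * 1e3:.2f} mm")
# Prompt emission alone lights up well under a millimeter of path, so a trail
# stretching across the sky must come from chemistry along the wake.
```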
But in the 1980s, lab experiments on the reaction of sodium monoxide plus oxygen atoms didn’t find any light, any sodium atom excitation. In collaboration with Chuck Kolb and colleagues at Aerodyne Corporation, we undertook molecular beam and flow tube experiments that gave enlightenment. It turns out that the reaction of sodium atoms with ozone produces sodium monoxide mostly in an excited electronic state. That excited state does react with oxygen atoms to produce sodium light; under conditions of the previous experiments, only the ground state of sodium monoxide was involved, and its reaction with oxygen doesn’t produce light. The electronic configuration of the excited state of sodium monoxide is very interesting. The bonding is highly ionic, so the molecule is essentially an ion-pair, Na⁺O⁻. The outer valence shell of the oxygen anion resembles that of a halogen atom: it has a “hole” because it lacks the additional p-electron needed to complete a “closed” shell. In the ground state, that hole resides in a p-orbital oriented perpendicular to the bond axis. In the excited state, however, the hole is in an orbital along the bond axis, so the hole is aimed toward the cation! Of course, that seems weird for an ion-pair molecule. But it’s the case, and as a consequence the excited state of sodium monoxide has a key role in producing meteor trails and nightglow. The trails are long and nightglow ubiquitous because there’s a layer of sodium atoms up in the mesosphere, with peak concentration about 3000 atoms/cc. It’s the steady-state result of ablation from meteors. When a new meteor comes in, it increases in its trailing wake the sodium concentration above the steady-state level, so produces the “extra” emission we see. In fact, the sodium layer has proved a boon for astronomy. A technique called LIDAR sends a laser beam up 90 km to excite sodium.
Viewing emission from the excited atoms from earth-bound telescopes provides a way to monitor and correct for the effect of atmospheric fluctuations and thereby greatly enhance the resolution and range of the telescopes. There are many other interesting aspects and it’s a continuing odyssey, one of my favorites. Although study of single collisions, making use of vacuum apparatus, has been my main work, I’ve also enjoyed doing some condensed phase chemical physics, the domain of perpetual collisions, using especially high pressure apparatus. Again, my interest in high pressure research came via teaching. When I taught quantum mechanics, a favorite problem we discussed had a hydrogen atom at the center of a sphere. As the radius of the sphere shrinks, the familiar energy levels produced by the Coulombic potential become less stable and successively rise above the ionization limit. When the sphere becomes small enough, even the ground s-state can’t be bound by the Coulomb force, although of course the electron remains confined within the spherical “box.” For homework, students would figure out various instructive variants, such as what happens when the sphere is replaced by an ellipsoid. Of course, this problem simulates the effect of high pressure, which squeezes neighboring atoms together, so the strong repulsion from overlapping electron clouds in effect acts like a “box.” Nobody had treated the analogous problem of a hydrogen molecule confined to an ellipsoid. A grad student, Rich LeSar, undertook to do it, to model how high pressure would weaken the chemical bond. Incidentally, in our paper, the first reference is to Edgar Allan Poe’s famous horror story, The Pit and the Pendulum. As you may remember, the narrator wakes up to find that he’s strapped to a sled with a sharp blade on a pendulum skimming above him.
And he’s in an ellipsoidal chamber, formed by sheets of heated steel that were steadily sliding together to shrink the chamber and push the sled toward a pit. Our calculations for the boxed hydrogen molecule used a high quality variational function. We put the protons at the foci of the ellipsoid, with the consequence that as the bond distance changed our ellipsoid changed shape in just the same way as Poe’s diabolical shrinking “lozenge” did. Our results provided a toy model for how the bond energy and length, vibrational frequency, and other properties changed with pressure. We also obtained a plot comparing how the energies of the hydrogen molecule, the helium atom, and a pair of electrons unaccompanied by nuclei varied with pressure. At high enough pressure, the curves became the same, since then it didn’t matter whether there were any nuclei in the box, as the repulsive potential of the box became totally dominant. This illustrates nicely, as I like to say, that high pressure provides a universal catalyst, facilitating rearrangement of chemical bonds.
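[Editor's note: a crude version of the confined-atom argument can be sketched by comparing the ground-state energy of a particle in an impenetrable sphere with hydrogen's 0.5 hartree binding energy. This ignores the Coulomb term entirely, so it is only a scaling estimate, not the variational calculation described in the interview.]

```python
import math

# Ground-state energy of a particle in an infinite spherical well of radius R:
#   E_box = (hbar^2 * pi^2) / (2 m R^2)  =  pi^2 / (2 R^2)  in atomic units.
# When E_box exceeds hydrogen's 0.5 hartree binding energy, confinement
# (very roughly) squeezes the level toward and past the ionization limit.
def box_ground_state_hartree(radius_bohr):
    return math.pi**2 / (2.0 * radius_bohr**2)

for R in (10.0, 5.0, 3.0, 2.0, 1.0):
    e_box = box_ground_state_hartree(R)
    status = "above" if e_box > 0.5 else "below"
    print(f"R = {R:4.1f} a0   E_box = {e_box:6.2f} hartree   "
          f"{status} the 0.5 hartree binding energy")
```

The crossover here happens a little above 3 Bohr radii; the full problem with the Coulomb attraction included gives a smaller critical radius, since the nucleus still stabilizes the electron.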
It’s like a degenerate… like a dwarf sort of thing?
Yes. Indeed, high pressure experiments induce all sorts of phase transitions and formation of exotic molecular structures not stable at ordinary conditions. Of course, you know about Wigner’s famous prediction that hydrogen should become a metal at high enough pressure. We looked at that a bit too. Modeling by LeSar and electronic structure calculations by another grad student, David Dixon, indicated that even if you hypothetically compress atomic hydrogen to high pressures, you don’t go directly to the metallic phase, but you go through a molecular H6 phase first. Such a phase hasn’t been discovered yet; it’s still an open question. Right now I’m again involved in some high pressure research, this time experimental. It’s in collaboration with Russ Hemley and Dave Mao at the Geophysical Laboratory of the Carnegie Institution of Washington; I go down there once a month. We want to examine how activation energies for chemical reactions change under high pressure. In the 1980s, I was also involved in high pressure experiments, in a more modest pressure range, in collaboration with Hubert King and others at the Exxon Research Laboratory in Clinton, New Jersey. We investigated how vibrational frequencies of solute molecules in solution changed with pressure, as a means of getting information about the solute-solvent interactions. That turned out pretty interesting too. However, I probably have already unloaded more than you should have to bear, hearing all these stories.
Let’s wind up your research by you trying to give me two sentences that you would use to describe why you were awarded the Nobel Prize.
Of course, that had to do with elucidating the dynamics of elementary chemical reactions by means of molecular beams.
And is that what the Stockholm people said?
Were you asked to approve that statement?
Not beforehand! But it was obvious that was the reason.
And what they said was accurate as far as you’re concerned?
Oh, sure. That’s certainly the work I’m best known for. And it’s been the main path of my work all along.
And it goes through everything, in a way.
Well, a lot is connected to it. Obviously, as I say, even doing something with DNA turns out to have some connection to the beam work. As everybody knows, certain physicians are reputed to always diagnose the same malady. So whatever I meet in my research odyssey, I see it through the eyes of someone who loves to think in terms of simple molecular interactions. That’s why I got so fascinated with beams to begin with. Otto Stern, in the Foreword to an early book about molecular beams, emphasized its “directness and (in principle at least) its primitiveness.” And also “that beauty and peculiar charm which so firmly captivate physicists working in this field.” As a caption for my research, you could use “Primitive simplicity.”
Okay. One more question from yesterday. You talked about, in a very gentle way, the funding issue and the difficulties that you have encountered since your Nobel Prize. What you didn’t say, or even imply, and which you may not believe, is that there are people who are concerned about the peer review process.
Oh, yes. I’m among them, yes.
Are you concerned that, in an era when resources are limited, people will review things in their own self-interest rather than in the interest of science? Do you have a sense of this at all?
Well, yes. It seems to me that even if reviewers are perfectly well meaning, it’s almost unavoidable. Now NSF, which is the prime funder for chemical physics, requires five or more reviews for a grant proposal. If a proposal, particularly from a young scientist, asks support for adventurous work that’s really novel, not being done by anybody else, it’s very unlikely there will be five reviewers who really can appreciate it and rank it highly. But if the proposal is for mainstream work, worthwhile but not novel, related to what the reviewers themselves are doing, the likelihood of a favorable assessment is higher. The reviewers hope to continue getting support for similar work themselves. So there’s a built-in bias. Sometimes I’ve joked, but it’s serious, that NSF should have a special pot of funding for which a proposal is not eligible unless three of the five reviewers say, “It won’t work.” We should invest some amount to encourage such work. Under the conditions prevalent today, I don’t think that molecular beam work would have received NSF support back when beams got started in chemistry. What happened was NSF then had a fellow, Bill Cramer, who had actually done some ion beam work. He became the research monitor at NSF for molecular beams. Back then there were only one or two reviewers per proposal, and also there were many fewer proposals. When you sent a proposal to NSF, if it was approved the money would actually arrive only four months after submission. Now it’s a long time before they even send it out, even by FastLane. There are a lot of hoops to go through before even the reviewers are selected. With five reviewers, it winds up that you rarely get any funding earlier than a year. The upshot is you’ve got to have enough funding to keep your lab going, with some margin to borrow from to explore more adventurous ideas that turn up.
The Research Corporation helped me with a small grant to try a wild experiment I haven’t mentioned to you, but it’s really beginning to look promising and so maybe you’ll hear about it. I wanted to make very slow molecular beams so we might be able to trap them in space as physicists have done so beautifully with atoms. But it’s a much harder problem with molecules, because the laser slowing method that works so well with atoms (at least alkali atoms) doesn’t work with molecules, which have so many vibration-rotation levels. And what we’re doing is using a supersonic source, to exploit its high intensity and narrow velocity spread. This is where you should object: “Wait a minute. That speeds up the molecules.” Yeah, but the source is rotating at 40 thousand RPM in the opposite direction. So we call it a counter-revolutionary source. The net velocity of the molecules emerging is very low. We’ve already been able to make a beam of oxygen slow enough to acquire a de Broglie wavelength of two angstroms. We want to get it to 20 or 30 angstroms. Even if we don’t succeed in trapping molecules, we should be able to study interactions of molecules in a regime where they’re acting like waves instead of particles. And that’s fundamentally what we want to do. It’s not easy to get funding for things like that. And likewise dimensional scaling seems too far afield. We can’t apply it right away to improve electronic structure calculations over what the conventional big computer programs can do. People in that field therefore don’t find it interesting. We’ve had to scrape up support here and there.
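[Editor's note: as a quick numerical check of the slow-beam figures above, the de Broglie relation λ = h/(mv) gives the beam speeds that correspond to the quoted wavelengths for molecular oxygen. This is only a sketch using standard constants, not data from the experiment.]

```python
# de Broglie relation: lambda = h / (m v), so v = h / (m lambda).
h = 6.62607015e-34    # Planck constant, J*s
amu = 1.66053907e-27  # atomic mass unit, kg
m_O2 = 32.0 * amu     # mass of one O2 molecule, kg

def speed_for_wavelength(wavelength_m, mass_kg):
    """Beam speed giving a chosen de Broglie wavelength."""
    return h / (mass_kg * wavelength_m)

for angstroms in (2.0, 20.0, 30.0):
    v = speed_for_wavelength(angstroms * 1e-10, m_O2)
    print(f"lambda = {angstroms:4.1f} A  ->  v = {v:6.1f} m/s")
# A 2 A wavelength corresponds to an O2 beam near 62 m/s; reaching 20-30 A
# means slowing the beam to just a few meters per second.
```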
Are you a member of the Science Board?
No, I’ve not been.
Have you ever been?
But you surely have influence.
I’ve talked with people who are members.
And you’ve expressed these kinds of concerns?
Oh, yes. But I haven’t devoted myself to campaigning. Actually, the past couple of years I’ve campaigned about something else. It too is likely tilting at windmills. I’m much concerned about elections, especially our presidential elections. Years ago I read an article in Scientific American that acquainted me somewhat with the theory of elections. The 2000 election was so dismaying that I wrote to the New York Times. They didn’t publish my letter, but as a result I got acquainted with a political scientist, Steven Brams, at New York University. Colleagues in political science here had told me, “He’s the expert on election theory.” He’s written several books and many articles about it. Shortly before the 2000 election, Brams had sent a letter to the New York Times predicting what would likely happen, but his letter also was not published. Some months after, we managed to publish an editorial in Science about election theory. Here’s a quick sketch. It resembles the three-body problem in physics. Our election system, plurality voting, allows us to vote for only one candidate, no matter how many candidates there are. If there are more than two, plurality voting is about the worst possible way to do it, if we want the winning candidate to be at least acceptable to a majority of the voters. No doubt you recall some three-way races in which a candidate that polls showed was the least appealing to the electorate won a plurality because the other two candidates each got less than a third of the votes. By “least appealing” I mean that candidate would lose, often by a wide margin, in a two-way race against either of the other candidates. Nowadays the early primaries are very important in presidential campaigns. There the plurality problem is of course worse still. For instance, there may be five or more candidates. Then the least appealing candidate (in the sense I just specified) can win with an even smaller plurality.
There is a built-in bias in favor of a candidate at either the left or right end of the political spectrum. Such a candidate will collect all the votes in that part of the spectrum, whereas the rest are spread among the several other candidates. Our plurality system forces any third party to be a spoiler. In 2000, we had a spectacular instance in Florida, where Nader got 94,000 votes and Bush and Gore differed by only 530. So the election was decided by a minor third party, to the benefit of the major party candidate less congenial with the aims of the minor party. That happened in 1992 also, when Ross Perot’s candidacy decided the election in favor of Bill Clinton. Beyond all the other complications of politics, our election system is capricious. I’m convinced that the plurality system would enable someone like Hitler to be elected more easily here than he was in Germany in the 1930s, if we were suffering a similarly dire economic situation. Despite the efforts of Brams and others, it’s very hard to get people to even think about our election system. I’ve tried to get NOVA to do a program. There’s much fascinating material. Many systems other than simple plurality have been tried in practice and analyzed theoretically. Lewis Carroll wrote a lot about election theory, after becoming upset about how Oxford Fellows and British MPs were chosen. Over 50 years ago Kenneth Arrow listed criteria that seemed sensible for elections in a democratic society to fulfill, and then demonstrated that no system could fulfill all of them. But some systems are much worse than others. Simple plurality is about the worst possible. Brams has made a strong case that, from practical as well as theoretical viewpoints, the best way to deal with it is so-called approval voting. It simply permits voters to endorse as many candidates as they would approve of holding the office. Because of a misleading ballot format, in 2000 approval voting occurred inadvertently in Palm Beach County.
There all the ballots marked for two candidates were thrown out. My letter to the New York Times was to point out the irony. Actually, it would not take a Constitutional Amendment to implement approval voting. The Constitution does not prescribe the plurality system or any other system; it just leaves that up to the states. What really should be done is to get some state to adopt approval voting for primaries, on a trial basis. Then people even in other states would become acquainted with it. Both major parties would benefit if approval voting were used. They’d be protected from derailment by minor parties, on either the right or the left, that can change the whole outcome, as happened in 1992 and 2000. A third party with appreciable support would no longer be a spoiler. The approval votes of its adherents could decide the election in favor of whichever major party candidate appealed most to the minor party’s concerns. It’s sad that our nation, which historically has been a mighty force for democracy, actually uses about the worst possible election system.
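[Editor’s note: The contrast Herschbach draws between plurality and approval voting can be sketched in a few lines of Python. The candidates, bloc sizes, and approval sets below are entirely hypothetical, chosen only to reproduce the spoiler effect he describes.]

```python
from collections import Counter

# Hypothetical ballots: each voter lists the candidates they approve of,
# in order of preference (the first entry is their plurality vote).
ballots = (
    [["Left"]] * 40               # a unified bloc votes only for Left
    + [["Center", "Right"]] * 35  # moderates approve of both rivals
    + [["Right"]] * 25
)

# Plurality: count only each voter's single first choice.
plurality = Counter(b[0] for b in ballots)

# Approval: count every candidate each voter approves of.
approval = Counter(c for b in ballots for c in b)

print(plurality.most_common(1))  # Left wins with only a 40% plurality
print(approval.most_common(1))   # Right wins, approved by 60% of voters
```

Under plurality the bloc candidate wins with 40 of 100 votes even though 60 of the voters approve of Right; approval voting lets the divided majority register that, which is the spoiler-proofing Herschbach attributes to Brams’s proposal.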
Do you think you’re ever going to be able to give up DNA and chemistry and devote more time to something like this? Public policy, science policy?
Well, I’m probably not temperamentally suited to such a role. Maybe I will try it if I become aware that my colleagues find me so tiresome they don’t want me around at all. But right now I just can’t give up doing science. I taught a freshman seminar this spring on molecular motors. As mentioned, I only recently began learning about that field myself. I’ve been delighted with the initiative and verve the students have shown in leading discussions and presenting papers on topics of their own choice. They’ve worked up very nicely material on bacterial motion as well as enzyme kinetics and random walks.
Your colleagues. Let me ask a question I hadn’t planned to ask. But one of my blessings in my life has been that I have gotten to know a number of very good people. Very competent people. What it has enabled me to do, Dudley, is that I’ve been able to really recognize my own limitations. There are people who can do in an hour what would take me two weeks. But I’ve competed with them pretty well because I’ve been willing to spend the two weeks while they go out and do other things after they do their hour.
Yes. I’m more or less that way.
Now, wait a minute.
I am pretty slow at a lot of things I do.
And let me go to the top. I knew Feynman a bit, and I will say that Feynman could do in a day what 100 John Rigdens could never do. Never do.
Yes. And the same would go for me.
No. No. Now don’t do that.
But yes. Feynman was at another level altogether.
But you are in a position where when you collaborate, not in papers, but you get to know people, you see their work and so forth. And you’re at the higher echelon now (don’t say no). How do you evaluate yourself vis-a-vis Bright Wilson, a Pauling, and these people that you have known?
Well, I don’t worry about evaluating myself. I think I still benefit from my roots in that I don’t think of intellectual work as so glorious as some people do who grew up in an academic culture. I think of myself in almost biological terms. I acquired a certain set of intellectual genes implanted in me by my teachers and mentors and I’ve been passing them on to my own students. I think more than anything I have a natural capacity for getting excited about things and exciting other people. I’ve published a little paper you might like called “The Quantum Interpretation of the Intelligence Quotient.” It’s in the Annals of Improbable Research, which sponsors the Ig Nobel Prize events. The motto of its founder, Marc Abrahams, is “First we want to make you laugh, and then think!” My QI of IQ paper is for fun but has a serious message. It emulates the sort of thing Ben Franklin often did. I wanted to debunk a book titled The Bell Curve that got a lot of notoriety. My QI of IQ starts by noting that psychologists don’t recognize what this bell curve they talk about really is because they don’t know quantum mechanics. Of course, it’s the ground state of a harmonic oscillator, and must be related to molecular vibrations in our brains. We need to consider excited states too. The vibrational amplitudes correspond to IQ values; 100 is the peak of the bell curve and the standard deviation is about 13 or 14 units. In the ground state of the oscillator, the classical turning point is at one standard deviation. So in the ground state your IQ can only exceed about 115 by quantum mechanical tunneling. But in an excited state, if you get excited enough, you can go way out and become a genius. As genius is usually defined as 150 IQ, you need to get excited to N = 5, where the classically accessible amplitude reaches 150 IQ. There’s an interesting little theorem from photochemistry, which I learned in reading Herzberg’s book long ago.
If you have a harmonic oscillator with a Boltzmann population distribution in its excited states, and you sum up the population weighted squares of the vibrational wave functions, you get again a bell curve. But now the width of the bell curve depends on the temperature that determines the population. That means the intellectual temperature in your environment contributes to your IQ. The original curve for the ground state is your heredity, but QI of IQ makes perfectly clear how environment enters. So if you live in an excited state, you can be very smart. But when you return to the ground state, as you often will, you may not be all that smart. Personally, I’ve observed that in myself countless times (often remarked about by my wife!) and noticed it in others too. QI of IQ also explains the “Mozart effect” reported in a much-cited paper in Nature. Because when we push a pendular swing for a little kid, we find that if we time our pushes right, the amplitude of the swing grows larger and larger. Mozart evidently wrote his music in such a way that it does just that to important vibrating molecules in our brains. Thereby listening to his music can increase your IQ, although afterwards that boost will droop as the molecular vibration amplitude dies down. Finally, the QI of IQ explains something else that the psychologists have shamefully never even addressed. According to our oscillator model, the vibrational amplitude compresses as well as expands. Compressing shrinks the IQ. That explains why smart people so often do dumb things. I almost put an asterisk to that conclusion saying “like writing this article,” but I figured that anyone who could read the article must be smart enough to recognize that already. That’s my whimsical theory. Molecular vibrations affect lots of things. Seriously, I do think excitement makes a huge difference. I’ve lived in an environment that produces plenty of high intellectual temperature interactions with other excitable people. 
That’s a major reason why I’ve been able to do a lot of interesting work. It’s those interactions, not just me.
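[Editor’s note: The “little theorem from photochemistry” Herschbach cites is real: the Boltzmann-weighted sum of harmonic-oscillator probability densities is again a Gaussian, with variance (1/2)coth(β/2) in oscillator units (ħ = m = ω = 1), so the bell curve widens as the temperature rises. The numerical check below is the editor’s own sketch, not anything from the interview.]

```python
import numpy as np
from math import factorial, pi

def psi_sq(n, x):
    # |psi_n(x)|^2 for the harmonic oscillator (hbar = m = omega = 1)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    hermite = np.polynomial.hermite.hermval(x, coeffs)  # physicists' H_n
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return (norm * np.exp(-x**2 / 2) * hermite) ** 2

def thermal_density(x, beta, nmax=60):
    # Boltzmann-weighted sum over states (E_n = n + 1/2; the constant
    # 1/2 cancels when the weights are normalized)
    weights = np.exp(-beta * np.arange(nmax))
    weights /= weights.sum()
    return sum(w * psi_sq(n, x) for n, w in enumerate(weights))

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
variances = {}
for beta in (5.0, 1.0, 0.5):  # decreasing beta = rising temperature
    rho = thermal_density(x, beta)
    variances[beta] = float((rho * x**2).sum() * dx)
# each variance matches the closed form 0.5 / tanh(beta / 2), so the
# thermal "bell curve" really does broaden as the temperature rises
```

So the width of the curve does depend on temperature, exactly as Herschbach uses it in the QI of IQ argument: a hotter intellectual environment gives a broader distribution of amplitudes.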
Let me ask you something very specific. You are able to calculate things in ways that I think is not at all typical in chemistry.
Yes, that’s because I had so much mathematics.
Well, all right. But do you not recognize that you have abilities in that regard that puts you ahead of a lot of your colleagues?
Yes, that’s right. But still, lots of people are better at it than I am. My first grad student, George Kwei, back in Berkeley days, said a nice thing one night when we were working together in our lab. He said, “Dudley, you’re a queer bird. Theorists think you’re an experimentalist and experimentalists think you’re a theorist.” That’s right, and I like it! In that way I’m emulating Enrico Fermi. He did experiments when that was appropriate for the question he was interested in, and did theory if that was needed. To be able to do that is a nice aspect of chemical physics. Like a mom-and-pop shop, whatever needs to be done in the store, you can do it yourself. Even in chemistry, lots of research groups are pretty big these days and rely on division of labor. That’s especially so in organic chemistry, where groups of 30 or more are typical. R.B. Woodward, in conducting his monumental syntheses, organized the work in almost military style. His multistep syntheses of large molecules like vitamin B12 required a great deal of strategic planning and coordination among battle units. I haven’t operated that way. For the most part, my research group has worked more like the 19th Century mode. Usually only 2 or 3 people working on any particular project. I’ve liked that immensely, and I think my students have too.
I will just say personally, Dudley, I’ve known you for a long time, and I’ve known more or less what you’re doing, but in preparing for this I was amazed at the extent of the things you’ve done and the contributions you’ve made. But let’s move on.
Well, I’m happy. But still, the credit belongs to the atoms, molecules, and lots of ideas that we have as a legacy. We enhance them a little and pass them on.
All right. I want to talk about Herschbach, the man between. You’ve already touched on it with George Kwei, but you’ve had your feet in camps that have been very interesting. For example, you are a chemical physicist. And in a very important sense you do things like a physicist, you do things that physicists do; and at the same time you do things that chemists do. Let me just start with a trivial question. What’s the difference between chemical physics and physical chemistry?
People often ask that. Of course, it used to be you would point to comparing the Journal of Physical Chemistry with the Journal of Chemical Physics. Up until something like 1948 or so the Journal of Physical Chemistry wouldn’t publish theoretical articles. That, in fact, was the reason that the Journal of Chemical Physics was launched in 1933.
No, you’re wrong.
Is that not right? This is what I’ve heard.
It’s more than that. Physical Review would not publish papers by Condon, by Slater. These people who were doing some small molecule stuff. Quantum mechanics. And there are letters.
Okay, good. So they’re guilty too. Usually the blame is put on the Journal of Physical Chemistry, which wouldn’t publish theory, as I said.
Well, Phys Rev would not publish?
Phys Rev either.
And Urey and a few others came together to form the Journal of Chemical Physics, where they would have a place to publish their work.
Yes, I knew about that. Ben Bederson asked me to write the article on Chemical Physics for the special issue of Reviews of Modern Physics celebrating the centennial of the American Physical Society. I started out with the story about J Chem Phys and named the founders. Also quoted from Urey’s preface in the first issue of J Chem Phys. But he didn’t say anything unkind about either the J Phys Chem or Phys Rev, so I didn’t know that Phys Rev was also implicated. Recently, two hybrid journals have been launched, one titled ChemPhysChem, the other Phys. Chem. Chem. Phys! At any rate, Chemical Physics is intrinsically interdisciplinary. But my article mentions that the term “chemical physics” goes way back into the 18th Century. A book by Mary Jo Nye, a historian of science, describes this. Old chemistry textbooks had a section titled “chemical physics,” which discussed effects of heat and light. So use of the term “chemical physics” is really quite old. In fact, chemists actually used it before physics was recognized as a separate discipline. But getting back to your question, the modern Journal of Physical Chemistry is indistinguishable from the Journal of Chemical Physics. Now both publish the same kind of papers. The contrast between physical chemistry and chemical physics used to be between a macroscopic and a microscopic focus. Historically, Phys Chem evolved from thermodynamics, electrochemistry and colloid chemistry, whereas modern Chem Phys sprouted from quantum mechanics, statistical mechanics, molecular structure and spectroscopy. Nowadays people often ask about the difference between atomic/molecular physics and physical chemistry/chemical physics. There’s a lot of overlap, but also an interesting cultural aspect that persists. I like to say that physicists strive to develop their understanding of things in a Taylor expansion. That requires getting higher and higher order information near the origin of the expansion.
Choices for the origin might be the hydrogen atom or another prototypical model system. The more higher-order information you can get, the more accurate the predictions you can make. But the accuracy degrades rapidly if you get too far away from the origin. In contrast, physical chemists customarily develop their understanding in a fashion analogous to a Fourier expansion, for which the coefficients are obtained from low order information over a wide interval. That enables predictions to be made far from any prototypical reference case, although with a lower level of detail and only modest accuracy compared with a Taylor expansion. Chemical physicists use both approaches. They’ll use whatever data or calculations might be available for a prototype system, but also invoke empirical correlations and heuristic reasoning that lacks a rigorous theoretical foundation. This melding relates to the analogy to an impressionistic painting, mentioned earlier in describing chemical epistemology.
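[Editor’s note: Herschbach’s Taylor-versus-Fourier epistemology can be made concrete. In this sketch (the editor’s own illustration), e^x stands in for “nature”: a degree-4 Taylor expansion about x = 0 is superb near the origin but degrades badly at the edge of the interval, while a degree-4 least-squares fit over the whole interval — playing the role of the Fourier-style expansion built from low-order information over a wide range — has only modest accuracy but holds it everywhere.]

```python
import math
import numpy as np

x = np.linspace(-3.0, 3.0, 601)
f = np.exp(x)

# Degree-4 Taylor expansion of e^x about the "prototype" point x = 0:
# high-order information at a single origin.
taylor = sum(x**n / math.factorial(n) for n in range(5))

# Degree-4 least-squares fit: low-order information gathered over the
# whole interval, the analogue of a Fourier expansion.
fit = np.polyval(np.polyfit(x, f, 4), x)

err_taylor = np.abs(f - taylor)
err_fit = np.abs(f - fit)

near_origin = np.argmin(np.abs(x - 0.1))
# Taylor: essentially exact near the origin, poor at the interval edge.
# Fit: modest error, but roughly uniform across the whole interval.
```

Near x = 0.1 the Taylor error is below one part in a million, while at x = 3 it exceeds 3; the fitted polynomial never achieves Taylor’s local precision but its worst-case error over the interval is far smaller, which is exactly the trade-off Herschbach describes.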
And in an epistemological sense, do you think Dirac was right when he said that essentially quantum mechanics now solves all of chemistry?
Yes, but only in principle. As many people would point out, if you had an infinitely powerful computer so you could answer any question about chemistry, you would still need to have a chemical kind of theory, to provide a qualitative, heuristic interpretation.
Like what you talked about yesterday.
Yes. We need that. The late Don Bunker, a dear friend who died quite a few years ago, was a good theoretical chemist. He liked to say that the duty of a theoretical chemist is to be wrong, but wrong in an interesting way. Of course, what he meant was, wrong in the sense that you leave out as much as you dare, then see whether your theory manages to describe the essence of the phenomenon. Such a theory is a sporting proposition, seeking to find interesting approximations, postulates or guesses. In a lot of chemical physics, you have to try to get that kind of theory. I often preach to my students, “You have to understand that people use the words ‘theory’ and ‘experiment’ very sloppily.” Often a paper is called theoretical because the paper has a lot of equations. Another paper may not have any equations, so won’t be regarded as theoretical. But actually the second paper may be more theoretical because it introduces a new idea, perhaps in interpreting an experimental observation. And the first paper may not be theoretical at all if it’s just a treatment within an existing framework, with a lot of equations but only old ideas. I think it’s good for students to recognize the difference.
Your comment about “it should be wrong in an interesting way,” you’ve heard the Pauli comment, haven’t you? Pauli was given a manuscript. He said, “It’s not even wrong.” That’s about the most damning thing you could say about it.
Right! I’ve also heard a related story. Pauli dies and goes to Heaven and St. Peter ushers him in to talk with God. Immediately, Pauli asks God about an unsolved physics problem. God says, “Oh, yeah. I have a manuscript on that here.” Pauli looks at it and hands it back, saying “It’s still wrong.” I hope somebody makes a complete collection of Pauli stories someday. I’ve urged Erwin Hahn, a great story teller at Berkeley, to do it. But he hasn’t yet.
Okay. You’ve worked with both chemists and physicists. You’ve worked with both sides. How do you compare them?
Oh, I like them both.
I know that. But they’re different.
Oh, well, yeah. I’ve just been talking about how they’re different.
You have been.
I’m intrigued by the cultural difference. I mentioned the wonderful synthesis that Kishi did, constructing the palytoxin molecule, which required getting 72 two-fold structural choices correct. Kishi once remarked to me that he never was comfortable trying to solve a quadratic equation. Often I point out this sort of thing to students. When they arrive at Harvard, many presume that to be a scientist you’ve got to be good at any kind of science or math. But you don’t actually. I say it’s like being a musician. You don’t need to play all the instruments in the orchestra well. Once I heard Yo-Yo Ma say, in public, “I can’t carry a tune. I can’t sing. Fortunately, I can play the cello!” The comparison with musicians is also useful in talking to students or the general public about another aspect of science that is generally misconstrued. Even if blessed with talent, musicians have to work hard to master their instrument, the literature and culture and other things required to perform well. Science is like that but much less hard in a major respect: the scientist can and likely will play most of the notes wrong, even off key, then finally get one right and be appropriately applauded. That’s a huge advantage that science has over most human enterprises. Their academic experience conditions students to think that science is hard. But from a wider perspective, actually doing science is congenial and rewarding. What you want to find out, call it truth or understanding, waits patiently for you; it doesn’t change. That’s the huge advantage. In business, sports, war, politics, you may make a seemingly smart move, but a little early or a little late or conditions change so it turns out to be a fiasco instead of a triumph. That explains why more or less ordinary human intelligence can accomplish so much in science. You’ve probably heard the response Fermi gave when someone asked whether he thought his fellow Nobel Laureates in physics had anything in common.
Fermi thought a while and said, “No, I can’t think of anything they had in common. Not even intelligence.” I heard a fine talk by Charlie Townes about scientific creativity. He pointed out that there are scientists and other people who are far more productive than average, although they may differ only a little in their IQs. He discussed that in terms of Zipf’s Law. That’s an empirical relation discovered in the 1930s by George Zipf, a professor of linguistics at Harvard. He found that if you ranked the different words in a given text by how often they were used, their frequency was approximately inversely proportional to the rank. Thus, the relative frequencies of the highest-ranking English words (the, of, and, to…) are approximately 1, 1/2, 1/3, 1/4… respectively. Actually, I came across a paper written in 1927 by Ed Condon that evidently anticipated Zipf’s analysis of word frequencies. Zipf went on to find similar correlations for many languages and for much other data, ranging from sizes of populations and economic activities to the length of speeches in plays. A similar phenomenon widely observed in physics is “1/f noise,” also known as “pink noise,” wherein the noise power is inversely proportional to the frequency. Complexity theory indicates that pink noise or Zipf’s Law behavior typically arises for events or processes that require the contribution of many independent variables. If these many variables are each distributed as a normal bell curve, and you want the distribution of the sum of all the variables, you’ll wind up getting Zipf’s Law. You can show that by rolling dice. A heuristic view, natural to a chemist, considers the daunting task of optimizing the yield of a multistep chemical synthesis. To achieve that, you have to get a very good yield in each step, of course hard to do. It’s much more probable that your yield in one step is far from optimum, and then still more probable that your yield is poor in two steps, etc.
That’s how the inverse correlation of Zipf’s Law arises. Townes pointed to Zipf’s Law as an intrinsic aspect of scientific productivity. It applies because many factors must be favorable to get exceptional performance. Accordingly, high-ranking achievements rarely emerge unless sufficient support is forthcoming for the inevitably far more numerous efforts that yield lesser results. This perspective is important for issues of research funding. As Townes emphasized, among the factors required to be favorable for strong performance are the acceptance of long-range prospects, diversity in approaches and institutions, tolerance of failures, and encouragement of trial and error, because it is not possible to plan what scientific research is going to be successful. These are large-scale implications of Zipf’s Law. It should also sharpen awareness that fostering careers depends on a lot of different things. It depends not just on your intellectual acuity, it depends on your education, it depends on the temperature of your intellectual environment, it depends on your personality and how you interact with people. All kinds of things come into it.
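[Editor’s note: Herschbach’s multistep-synthesis heuristic is easy to simulate. In this sketch (the editor’s own, with made-up per-step yields drawn uniformly between 50% and 100%), the overall yield of a ten-step synthesis is the product of the step yields, so top outcomes — where every step goes well — are rare, and the distribution is strongly skewed toward modest results.]

```python
import random

random.seed(42)  # reproducible toy run

def overall_yield(steps=10):
    # product of hypothetical per-step yields, each between 50% and 100%
    y = 1.0
    for _ in range(steps):
        y *= random.uniform(0.5, 1.0)
    return y

yields = sorted(overall_yield() for _ in range(100_000))
median = yields[len(yields) // 2]
top_fraction = sum(y > 0.25 for y in yields) / len(yields)
# the median overall yield is only a few percent, and fewer than 1% of
# runs exceed a 25% overall yield: many modest outcomes for each high one
```

Because every step must go well at once, exceptional overall yields are far rarer than the per-step statistics might suggest, which is the multiplicative mechanism behind the heavy-tailed, Zipf-like distribution of outcomes Townes invoked for scientific productivity.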
Stamina, yes, and health. I remember Bright Wilson remarking that very productive people were also usually exceptionally strong physically. Also, Henry Eyring until well past 60 used to run foot races with his graduate students.
Well, all right. We’ve done chemistry and physics for now. Another thing you’re between is theory and experiment.
Well, I touched on that before too. I love both. I couldn’t give up either one.
And you’ve said that experimentalists think of you as a theorist.
By the way, you can also say that the advantage of being a chemical physicist is when you’re with physicists you can claim to be a chemist, and when you’re with chemists you can claim to be a physicist.
“Oh, I’m sorry. I’m a physicist.”
A chameleon. I don’t really give a darn what I am. I actually think of myself as essentially an amateur scientist. I have a certain level of competence. At a generalized level it allows me to work in different ways. I don’t have profound, far off scale talent or knowledge in any particular area. Of course, that depends on what you compare it with. Anyhow, I am a useful member of the scientific community and feel I’m passing on genes inherited from my teachers and mentors to my students. It’s such a joy and privilege.
How do you think theoretical chemistry and physics have evolved over your active career?
Well, theoretical chemistry and chemical physics have become far more physics-like. Of course a great deal of it is due to the power of computers. We have a young theoretical chemist here who has 40 computers hooked up and running parallel calculations. All of his students are calculating diligently. I joke that we may be evolving a new species, Homo Computus, because so many people now spend a very large fraction of their waking hours hooked up with their computer. We all do that much more than ever before. But the tools now available are powerful. Mathematica probably has at least the equivalent of 10 or 15 years of intense study of advanced mathematics built into it. Anybody who learns the lingo, the way Wolfram set it up, has access to all that. Often I point out to students that it’s natural for the younger generation to look at the older generation and say, “Oh, those guys were so lucky. They just walked through the orchard, shook the trees lightly and the fruit fell in their laps.” And I say, “Well, you should recognize two things. One, those pioneers also had the privilege of making lots of blunders which are so easy to make when no path is clearly marked yet. But more important, you benefit from a legacy. There are all kinds of instruments and concepts and theory that were not available to your predecessors.” Again, it’s like the architect aspect that I referred to earlier. New building materials and building methods enable you to do completely new things. Frank Gehry emphasized that he couldn’t have built the famous Guggenheim Museum in Bilbao and many of his other structures if it weren’t for the computer. All the structural elements in the Bilbao Museum building have different dimensions and these are calculated to a fraction of a millimeter then cut and fit together perfectly. An architect couldn’t even imagine doing that before. The impact of computers in science is immense. Here’s a simple example, familiar to any chemist of my vintage.
When I worked with Harold Johnston, few physical chemists really understood what a normal mode of vibration was. Obviously Bright Wilson and many other spectroscopists did, but it was not part of the common background that all physical chemists had. Now, it is, and has been for quite a while. Anybody can plug in some standard programs, and calculate electronic structure, force constants, and vibrational frequencies. At times, you have to wonder whether these young people may have bypassed solving the most elementary problems, so won’t know as much as you would like them to about what they’re actually calculating or anything about its historical evolution.
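[Editor’s note: The normal-mode calculation that “anybody can plug in” nowadays is, at bottom, a diagonalization of the mass-weighted force-constant matrix. A minimal sketch (the editor’s own toy model, not any standard package): a linear triatomic “molecule” with end masses m, a central mass M, and two springs of force constant k, whose eigenvalues are the squared normal-mode frequencies.]

```python
import numpy as np

m, M, k = 1.0, 2.0, 1.0  # arbitrary illustrative values

# Force-constant (Hessian) matrix for masses m--M--m joined by two springs k
H = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Mass-weight it: D = M^(-1/2) H M^(-1/2); eigenvalues of D are omega^2
inv_sqrt_mass = np.diag(1.0 / np.sqrt([m, M, m]))
D = inv_sqrt_mass @ H @ inv_sqrt_mass
omega_sq = np.linalg.eigvalsh(D)  # ascending: 0, k/m, k/m + 2k/M
```

The zero eigenvalue is overall translation; the other two are the symmetric and antisymmetric stretches. A quantum-chemistry package does the same linear algebra, only with a Hessian computed from the electronic structure rather than assumed springs.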
There’s a question, though, that I’m waiting for you to reflect on. Two of your heroes, Bright Wilson and Rabi, for sure Rabi and I think Wilson, argued that theory should be closely connected with experiment. It should be driven by experiment to some extent. In that sense, theory is moving away. It’s taking on a life of its own. Do you not see this?
What do you think of that? You’re a theorist now. And you’re an experimentalist. Your style would be that you want your ideas connected with the laboratory.
Most of the theory I’ve done has been of that kind. It’s naturally prompted by experimental questions. On the other hand, the dimensional scaling is certainly not. But I don’t find it alarming that theory is developing on its own, because of course my early immersion in mathematics made me appreciate how beautiful it is as a pure intellectual adventure. Moreover, it’s uncanny that so often scientists find phenomena for which appropriate mathematics is ready and waiting. So some theory that seems disembodied now may be redeemed that way. Some won’t. I don’t see that as doing great harm. We have lots of scientists now eager to do theory. It’s probably good to let them explore all kinds of things. Perhaps if theorists were in short supply, we’d need to nudge more to do work that aids design and interpretation of experiments. I think it’s best to arrange things so experimentalists and theorists mingle, so they learn to communicate and share perspectives. Then collaborations will naturally emerge.
But if you take multiple universes, where in principle there’s never going to be a way to check the validity of that idea in an empirical way, and yet if you accept that, then you can say, “We understand why the constants are so finely tuned in this universe as to allow life. Because there are lots of other universes where there are all different kinds of constants.” So we can explain that. But should we call that something other than physics or cosmology?
It might border on theology! Then it is a question. Maybe we should go back to natural philosophy as in the Enlightenment. Again, however, as far as I’m concerned, I’m glad some people chose to explore questions like that. I’ve not prepared myself to feel comfortable with them. Yet I’ve met some very bright cosmologists who emphasize that modern astrophysics may be able to test some of their far-out ideas. Among them is Andrei Linde at Stanford. It was at a symposium held there a few years ago with the modest title, “Cosmologies and World Views.” Steve Chu invited me, perhaps to have a specimen chemist. In addition to three cosmologists, and Steve as an atomic physicist, there were a couple of literary scholars and other humanists, including a Jesuit theologian from Loyola. What he said was striking. He didn’t phrase it quite this way, but at dinner I said to him: “It sounded to me like you were saying that God didn’t create man, it’s the other way around. And that theology is now regarded as a branch of anthropology. Is that what you said?” He replied, “Yeah, that’s pretty much what I said.” Seems that Jesuit theology is now much more down to earth than scientists’ cosmology! As the announced aim of the symposium was to bring together scientists and humanists, I gave a talk called “Sacred and Profane Love,” and started with the famous painting by Titian. It has two female figures, one very opulently clothed, the other attired in a simple gauze-like gown and holding up a lamp or grail. These ladies were looking off in opposite directions. First I asked the audience how many were scientists, then how many humanists; it was 50/50. Then I asked the humanists to indicate which figure in the painting they thought represented the Humanities. Then asked the scientists which represented Science. I was very surprised: it was about 50/50 each. 
I had expected most humanists would say something like, “We pursue knowledge for its own sake and hold high the flame of learning, while the opulent and haughty scientists ignore us.” And I had expected the scientists might say, “We tend the flame of reason and humbly seek to understand the Cosmos, while the arrogant humanists consider us to be clods.” After expressing surprise and pleasure at the outcome of the votes, I told a story about encountering cultural disrespect as an undergrad at Stanford. I took a course in scientific writing. The teacher, a grad student in English, would come in and just look out the window for several minutes. Then he’d turn to the class and say unkind things about how hard it was to try to teach such uncultured slobs how to write. One day he pointed to a fellow sitting next to me and said, “I feel sorry for you. You’re probably going to spend your life improving adhesive tape.” The symposium was held in a building donated by a Silicon Valley company. So I said, “Actually, that fellow might have gone on to improve magnetic tape, and this building might be a result.” Later I took part in a “culture wars” encounter held at the New York Academy of Sciences. It was called “The Flight from Science and Reason.” I accepted the invitation because I knew Bright Wilson would have done so; he was much concerned about that. For my talk, I used the title “Imaginary Gardens with Real Toads,” a line from Marianne Moore’s poem. She was talking about poetry, but I thought that also described science. We construct imaginary gardens, and find there are real toads there too. I wanted to be conciliatory. A chief point was: even if people say silly things, it’s a good thing that they’re visiting each other’s gardens. We should realize that the next generation is also going to look at our gardens and toads. They’ll likely laugh, but weed out or nurture what we have planted, as they see fit. So we shouldn’t get uptight about it.
Well, I think there are physicists who are concerned with theory that apparently at this moment has no contact with experiment and —
But then it will fade away, won’t it?
It will eventually. That’s right. And they’re going to continue playing with it. Because it does some nice things.
Yes. And some of those apparently weird ideas may find a life of their own elsewhere in science. If we look back to the golden age of atomic physics there were some pretty odd things. Now we think the J. J. Thomson model of the atom was far-fetched. Yet you could say that was experimentally motivated.
I think you could, yes.
Definitely it was trying to explain some things. But yes, string theory has gone pretty far out. Those parts of physics are so far away from chemical physics that I don’t personally get much contact with them.
And you don’t think there’s a parallel in chemistry?
It’s nowhere near as extreme.
Okay. I want to just talk a few minutes about teaching and research. Because you’ve been in between teaching and research. You called yourself earlier this afternoon a public servant.
Yes. I think that’s what teaching and mentoring is.
That attitude is exhibited in your devotion to teaching. But I want to push on you a bit. The AAPT gives an Oersted Medal to recognize excellence in teaching. The medalist gives a response. And in many of these, particularly the older ones back in the early ‘30s, a theme runs through their responses, and that is that there was always a tension between teaching and research. That is, when their research was going well, their teaching suffered. When their teaching caught their imagination, they didn’t spend time in the lab. So there’s this real tension. Are you aware of this? I mean, would you acknowledge that there’s a tension between your teaching function and your lab function?
I don’t think I have that kind of tension, because I think for me teaching has helped my research enormously, because I get excited when I talk with students. If I see a student get interested and excited, I get more excited. I’ve gone away on sabbaticals a few times, and my wife always points out how I spend almost all my time writing letters to my students. It’s the interaction with the students, graduate students or undergraduates alike (it’s about the same), that seems so important for me. I’m not sure I could even be a scientist otherwise. There may well have been more validity to the tension back in the ‘20s and ‘30s, when faculty were doing experimental work with their own hands. For his oil-drop experiment, Millikan personally made 1000 batteries. Nowadays, even so-called experimental scientists don’t get to do experiments very long themselves. They are too busy writing research proposals and papers. Many are almost executives. So the teaching role may become a major way the faculty mentor interacts with the students actually doing the hands-on research. At least in this mentoring mode, the distinction between teaching and research gets fuzzy. Usually, the mentor contributes important ideas to the research. Yet the teaching component may be more important. Grad students and postdocs do much of the nitty-gritty work on their own. Teaching and mentoring them involves much more than technical matters. It involves, as Bright Wilson exemplified, conveying an understanding of the culture of the field and what is ethically right, how to write papers and give talks. Much of the purely technical material can nowadays be learned from the web. But the personal interactions in a research group contribute greatly to the making of a scientist. Teaching in regular classroom courses also doesn’t seem to me to conflict with research. 
It takes time of course, but there’s compensation for that in revisiting and refreshing your appreciation of basic concepts and discoveries, things you fell in love with when you were a student. I get charged up by teaching. It fosters my enthusiasm for doing research. Progress in research does not go linearly with time, but in fits and spurts. So time devoted to teaching should not be considered as simply subtracting from research. Instead, it contributes positively by stimulating excitement as well as ideas that can accelerate progress in research. Even during my stint as department chairman, I always taught the regular “load.” I found it was not a “load” but a “buoy.” Maintaining contact with students and cherished topics helped a lot to dispel frustrations with burdensome administrative chores.
Let me put it another way. I would argue with you that if I walk up and down these halls in this building and across at Lyman, that the research physicists, the research chemists, are hoping that their research attracts attention.
That they become recognized. That they become honored. That they become a prize winner of one sort or another.
Yes, that’s a natural thing.
It’s a natural thing. So that fundamentally I would suggest that research is nurturing self, whereas in teaching, you are nurturing others. Those are very different human activities; it’s a different kind of mindset.
Yes, they are. But if you value the feeling that by nurturing others you are doing something you just deep down feel is very worthwhile, you are also nurturing your self-esteem. If you personally feel grateful for all the nurturing that went into you and transformed your life, then you feel awfully good about doing your bit for others. It can be more important than anything you could manage to do in research. I just don’t see a huge difference. When I’m doing research I’m trying to teach myself and a few comrades something new. Whereas in classes I’m trying to teach things old to me but new to the students. In doing so, I often come to see the old things in new ways. Sometimes that’s just as exciting as getting a new insight from research. Also, both the exploratory attitude of research and things learned from it enhance teaching. Even the general chemistry courses I’ve taught to freshmen have been informed and enlivened by insights from current research, my own and that of others. It makes a difference if teachers of elementary courses are involved in research because they gain perspective on what is important, and how the basic ideas are key in frontier research. Graduate courses help get students ready for research, not so much because of the advanced material per se but because they reinforce and deepen command of the basics. Mastering the fundamentals empowers students to think in fresh ways. Again, a musical analogy: you really have to play the scales extremely well before you’re ready to work up to concertos. There’s a simple policy I’ve advocated that would help combat the notion that in a research university teaching doesn’t matter. Many seminars are held, most given by professors. But in introducing a speaker usually only the academic pedigree, awards, and research are mentioned. It should be customary in introductions at seminars or scientific meetings to always mention teaching done by the speakers. 
Either hosts, session chairs, or speakers can make it happen. I’ve long thought it odd that people tend to think of teaching as something that only goes on in schools. Actually, in the “real world” everybody does a lot of teaching, much of it inadvertently. Ironically, in a university you can get away with doing a crummy job of teaching. In industry you can’t. There you have to teach both your subordinates and your supervisors, and those who can do it well are highly valued. However, at a university, when faculty much admired for their research are also devoted teachers, their students and colleagues want to emulate them. I remember Frank Westheimer telling about the big surge in teaching efforts in chemistry when he was at Chicago and Fermi arrived on the faculty and began teaching introductory physics. Quite a few chemists outstanding in research have taught freshman chemistry, among them Linus Pauling and Harry Gray at Caltech; Roald Hoffmann at Cornell; Dick Zare at Stanford; Bruce Mahan, George Pimentel, and Alex Pines at Berkeley. At Harvard, as you know, Ed Purcell insisted on teaching undergraduates. He had many auditors, including some faculty colleagues and a few grad students (me among them). His classes were an absolute joy. His love of physics, his deep understanding and his way of thinking were exhilarating. It made you eager to try to teach like that, although you didn’t expect you could do nearly as well. I’ve known faculty who felt the less teaching they did the better. But I don’t think that helped their research. As I said earlier, I’ve observed situations where the quality of research fell short because of thinking just on a narrowly technical level. If you teach a basic course, it keeps you going back to the basics and focused on big questions. Then you’re more tuned to recognize what really matters in a research problem. Of course, having the opportunity to observe great teachers like Purcell has that effect too. 
Debye was another fine example. As I described yesterday, his course was inspiring. He clearly loved teaching and enjoyed his artistry in doing it. I can see him now, with a twinkle in his eye and an impish smile.
But you can’t argue a general thing in terms of some specific exceptions. Rabi said the first thing he would do if he were in charge of a university would be to double the teaching load. And he said he would do that because teaching now occupies such a small fraction of the week that you neglect it; it doesn’t get the attention you should give it.
Good for Rabi. I like that idea. Well, there’s an old saying that “the exception proves the rule.” Around the turn of the 20th century Harvard had a very remarkable chemist, Theodore William Richards. He made atomic weight measurements of incredible accuracy, before mass spectroscopy. An unthinkable thing happened. Göttingen, then the pinnacle of chemistry, invited Richards to be a professor there. Back then, Harvard was hardly a blip in chemistry. One way Charles William Eliot, President of Harvard then, kept T. W. Richards at Harvard was to agree to his request to teach another course, doubling his teaching. Later, in 1914, Richards became the first American chemist to get a Nobel Prize. I’ve told this story on various occasions. Today it startles people; they expect the opposite. Woodward did the opposite. After an offer from Zurich, he got Harvard to excuse him from teaching. Actually, he did do graduate-level teaching in effect because he had a long weekly evening seminar of his research group, which drew many auditors. He had taught undergraduates during World War II, when most faculty were away. Again, research is not something done per unit time. The QI of IQ applies. It’s the bursts of excitement and rising intellectual temperature that count most. Teaching contributes to that. Feynman wrote about the value of teaching. He stressed that you’re not at your peak doing research all the time. There are stretches when you don’t have any good ideas at all. By teaching you get to feel you’re still doing something useful, and it often gets the juices flowing that help ideas come.
You know he was offered a position at the Institute for Advanced Study. He said he wouldn’t take it because he wanted to teach.
Yes. It seems to me that the Institute for Advanced Study is less than it could be. The research that has come out of it is not commensurate with the incredible human talent that it has collected. There are exceptions, such as Ed Witten. But overall, it’s disappointing. I think it’s because their scholars don’t teach. A current development that I think goes in the wrong direction is the proliferation at universities of centers for various research areas. It sounds good and helps attract funding. But teaching is left to the traditional departments. That’s evolving toward the pattern that used to be common in Europe, separating teaching from research. Faculty are joint members of a department and one or more research centers. Many then want to do less teaching because of their responsibilities at the center. I’m afraid this will have bad consequences. Especially so with the burdens imposed by the need to get funding to support grad students and postdocs, already discussed. I told you what Purcell said about never having written a research proposal. It’s said that a Senator has to raise, during a six-year term, about $10,000 a day to get reelected. Now a science professor with a research group of only 4 or so needs to raise $1,000 a day on average; some with large groups have to bring in more than Senators. Some faculty have the business skills to prosper very well in this environment; indeed, many academic scientists now operate private companies too. But as things are going, places like Harvard might not be able to have an Ed Purcell anymore. That’s worrisome.
Yes it is. Let me ask more about teaching and reason, one more question. And I’m asking you this. I would be uncomfortable asking anyone else. Here’s the issue. Rabi published about 50 papers. Purcell published about 50 papers. Feynman published about 50 papers. Dudley Herschbach has published 450 or something.
Yes. That’s the total published from my research group. I’m not a coauthor on about 20% and roughly another 20% are nontechnical, popular, or historical articles.
The reason I can ask you is that I know that you are absolutely devoted to all of your responsibilities. But it’s now common for people to end up with 300, 400 papers. Something is out of balance. Not with you, because —
Well, no, it’s generally out of balance. I would rather have published only 50 research papers but written 500 directed to the general public, especially young people. Of course, the funding system compels publication. If you have a grant, now typically for three years, to get a renewal you have to have published results. If the grant is for $300,000 a year or so and you haven’t published at least two or three papers a year, there’s no chance for a renewal. At the 1911 Solvay Conference, Sommerfeld made a remark I’ve often quoted. It pertained to Einstein’s paper on specific heats, but applies here too. He said, “Herr Einstein has shown us that degrees of freedom should be weighed, not counted.” In one sentence, Sommerfeld brought out the key difference between quantum statistical mechanics and classical statistical mechanics. Research papers also should be weighed, not counted. Sometimes a one-page paper is much more significant than dozens of the garden variety. But a funding agency and the peer review system often don’t weigh significance reliably. So faculty feel pressure to turn out a respectable quantity. Also, grad students and postdocs have to have publications for job applications. If people were allowed to publish, let’s say, only one paper a year, it would be quite different. Maybe a lot better.
That’s right. Physical Review would be readable again.
Yes. But there’s no likelihood of getting to that point. I’ve had about 60 graduate students and 50 postdocs total. After deducting non-research articles and reviews from the list, our average production of research papers was about 3.4 per capita, or roughly 1 per person per year. Not embarrassingly high. I have tried to emulate Bright by encouraging students to publish papers without the impediment of me as coauthor. But only 20% of the papers are in that category, in large part because on the rest I did most of the writing.
Bright kept his name off a lot of papers.
Yes, absolutely. Bright had 90 Ph.D.s and 60 postdocs. His group published about 400 papers. About 240, that is 60%, were without him; of the rest, he was sole author on 80 and coauthor on another 80. I don’t have a paper with Bright. I wish I did. I would give him a manuscript and he never changed a word. Of course, I had worked very hard to try to make it perfect to give to Bright. Well, he once pointed out a misspelled word. I begged him to put his name on a couple of papers. But he wouldn’t do it.
And what would he say?
He just said, “Look, it’s your idea. You did everything. I don’t feel I contributed sufficiently.” I would say, “But Bright, there’s no way I would even have known where to start if it weren’t for you and your lab.” In his book Introduction to Scientific Research, he describes his view. Sometimes I have not put my name on a paper when most people, even Bright would have because I felt the student needed a solo paper. I remember Norman Ramsey telling about his time as a grad student with Rabi. Back then Columbia had a rule that you had to publish a solo paper for your Ph.D. thesis. The result was students would do something uninteresting but sure-fire to fulfill that requirement, while more challenging work usually involved collaboration. The presumed incidental paper assigned to be his solo turned out to be the discovery of the quadrupole moment of the deuteron! Actually, I’ve partaken of both worlds. I was promoted to tenure at Berkeley after only two years; by then I had only a dozen papers or so. Most of them, eight or so, were from Bright’s lab. I had a couple of good theoretical papers plus just one experimental paper from Berkeley, our first results on reactive scattering of potassium plus methyl iodide. I was surprised to be promoted so early. Even when I came back to Harvard after another two years, I had only about 25 papers. Now, assistant professor candidates typically have 20 to 30 papers, none have only five or six.
Purcell has 50. That would not get him promoted to full Professor.
Well, that isn’t quite true. His NMR paper, for example, was only about number seven. I looked up his list. And I would hope that would get him tenure, but you can’t be absolutely sure in today’s world.
But 50 papers would not do it today in most cases.
Yes, that’s right. But the papers Purcell wrote were such a joy; everything he wrote. I haven’t read all 50, but I’ve probably read 20 Purcell papers.
Dudley, let’s move on. We’re almost done. I’m interested in how you got interested in the history of science. I mean, you liked Franklin.
Oh, I always loved history and biography.
I know you liked your Western Civ as an undergraduate.
But when did history of science become something that attracted your attention? As you say, a number of your papers —
From an early age, I loved to poke around the library. For example, as an undergraduate I looked up the issue of Philosophical Magazine that had Bohr’s 1913 paper. I recommend everybody do that. Because if you look at the actual issue, you see this little six-page paper. It looks odd. In the same issue, there’s a much longer paper, 30 or more pages, trying to explain the periodic table of elements in terms of the J. J. Thomson raisin muffin model. Surely most readers when they got that issue must have been impressed with the long article. Then they saw the short one by an unknown young Dane. “What the heck is this? He pulls something out of thin air, juggles it around and he gets the Rydberg constant!” Somewhere I heard or read that when Sommerfeld got this issue, he instantly recognized the importance of Bohr’s paper. In the Stanford math library I made another memorable discovery: the collected works of Leonhard Euler, in 13 volumes. A lot of them were in a language I couldn’t make out very well, but I could follow the equations. I had many happy hours just browsing in those volumes. A favorite library excursion occurred in my first weeks as a grad student at Harvard. As mentioned yesterday, I took Debye’s course. He started out with his 1912 treatment of the heat capacity of solids. As you know, Einstein had done a simplified version, assuming all the atoms oscillated with the same frequency. Debye generalized in a much more realistic way. I decided to look up his paper in Widener Library, which I had never visited until then. It was a spiritual experience. I descended into the bowels of the library, located a long corridor of journals, found the volume, and blew away the dust. I’d had two years of German at Stanford, and had no trouble reading Debye’s paper. He’d written it when he was about two or three years older than I was then. It had lots of Bessel functions and other things I felt at home with. 
It boosted my confidence that I could maybe do something like that. That History of Western Civ course is in a way still with me, more than 50 years later. It was so fine because we read original sources, guided by a syllabus. There were no lectures, just discussion in sections of 25 students or so. I’ve always enjoyed reading. Now I read before going to bed, and always have a book or magazine along to look at during odd pockets of time. What got me to study Ben Franklin seriously happened in 1956, in the spring of my first year at Harvard. The American Academy put on a special program celebrating the 250th anniversary of Franklin’s birth in 1706, and the 200th of Mozart’s. It featured a performance on a carefully made replica of a musical instrument invented by Franklin, the glass harmonica. Mozart and also Beethoven had composed pieces for it. The instrument was large, with 37 glass bowls of increasing diameter mounted lathe-like on a rotating spindle. Unlike the original version, this model had a keyboard enabling it to be played like a piano. The famous organist E. Power Biggs performed, but had some trouble because some of the bowls shattered as he played. Accounts written in the 18th century were read that described the “unearthly beauty” of the music. To me, it seemed likely that the authors were pulling our leg! Next came a string quartet, likely composed by Franklin. It was very lively, with scordatura tuning such that each musician has to play only four notes, one on each of the open strings. This got me more curious about Franklin. So I read more about him and came to appreciate that in his time he was greatly esteemed as a scientist. His work on electricity was recognized as ushering in a scientific revolution. If he’d remained a British subject he likely would have been buried in Westminster Abbey, as he was hailed as the “Newton of Electricity.” The savants of Europe were astonished by his incisive experiments and insightful interpretations of them. 
Everyone marveled at his demonstration that lightning was not supernatural. He was all the more esteemed because he had only two years of formal schooling, none past the age of 10. His immense scientific reputation greatly aided his diplomatic career. I’m a life member of Friends of Franklin, a fan club akin to the Baker Street Irregulars for Sherlock Holmes. I’ve written several papers and given many talks about Franklin, recently extending to include contemporaries of his, John Adams, Thomas Jefferson, and Joseph Priestley. My Franklin hobby has led to many friendships and special experiences, including a PBS program. I especially cherish getting to know I. Bernard Cohen, a distinguished historian of science. Among his many books are definitive scholarly studies of Franklin and Newton, and several for wider audiences; my favorite is his Science and the Founding Fathers: Science in the Political Thought of Jefferson, Franklin, Adams, and Madison.
You like to write too.
And I’d like to write more.
You like to write.
I like to write. I’m not very good at it. I’m very slow. I am always trying to make it better. But I write too much because I feel guilty about all the things I haven’t written. There’s always something I’ve got to do right away, so other projects I want to do don’t quite make it to the top.
Now, let me ask you about one more area, and that now has to do with you’re sitting here, 2003, and you started your science in 1954, 1955, 1956, somewhere in there. As you look over this period of American history, of the history of science, its impact and so on and so forth, how would you characterize your life in science and the changes that you’ve seen? Do you have cause for concern? Do you think everything is great?
Well, I have a lot of concerns. A major one is the paradoxical situation we’ve alluded to a couple of times. Science has hugely transformed civilization and is crucial for coping with big problems as well as big opportunities. Yet understanding of science as a shared adventure of humanity, and the ways of thinking it should foster, seems to be ebbing lower. One reason people are alienated is that so few can understand how their computers, automobiles, and much else actually work. Yet there’s so much science material in bookstores and on the web now that’s accessible to any literate person. It’s strange that so many people seem to feel that science is not something they can possibly understand and don’t want to try. A lot are downright antagonistic to science, such as those who want to reject evolution and/or climate change. Of course, I’m a starry-eyed fellow, who thinks of science as a grand exploration of the world inhabited by our species, finding out things about it, ourselves and other creatures, and developing ways to find out more. It all becomes a common legacy for humanity. It also offers a mode of thinking that transcends cultural, religious, and political differences. Twenty years ago, I wrote an essay urging this view of science, not from a starry perspective but from that of co-inhabitants of our earth that preceded our species by many millions of years, the dolphins. My essay, titled The Dolphin Oracle, was prompted by an allegory published 50 years ago by Leo Szilard, another remarkable Hungarian. He founded the Council for a Livable World, devoted to efforts to restrain the nuclear arms race between the U.S. and Soviet Union. I have served on the Council for about 15 years, since Ed Purcell recruited me for it. Here, I’ll just quote from the last paragraph of my essay; I offer it as an earnest creed: “Think of yourself as a dolphin oracle and ask about any issue of the day. 
Try problems involving differences in gender, race, religion, political persuasion, national identity, or the like; all recede when confronted by our common humanity. Let your mind try out also, now and then, other super civilized traits of the dolphins, including exuberant leaps, whistles, and happy chortling. It can only do humankind good to become more aware that along with the dolphins and other incredible creatures, we really belong to a much wider universe of the mind; it could be called mind kind.” You have probably been asked, as I have, “What about science should people know?” The response I start with is a quote attributed to Richard Feynman. Although I’ve never been able to find the exact citation, it certainly sounds like him: “Science is not about what we know but about what we don’t know.” This conveys what I regard as two of the most important things about science: it is an ongoing exploration and deals very much with uncertainty. We can expect, especially as new tools become available, to find out both new things and revised understanding of old things. In recent years, I’ve been surprised how often people ask me about science and religion. I respond by first saying that I think of them as siblings, both born out of our innate sense of wonder. Yet we hear much talk about a claimed conflict between science and religion, although in a different context than with Galileo. The issue is framed as a proposal to teach creationism or “intelligent design” as an alternative to evolution. Rather than bristling at that, I consider it an opportunity to contribute a little to public understanding. I try to defuse what seems to me to be needless contention. In my view, the real issue has nothing to do with evolution per se. It is much broader and more basic. The key reason scientists oppose the proposal is simple, indeed utterly mundane. In science, we can ask questions of Nature but must supply our own interpretations of her responses. 
That typically requires much discussion to assess evidence, often uncertain and almost always incomplete. Invoking a supernatural explanation is not allowed simply because it’s just not useful. It would stop discussion cold, with no way to go further. So the real issue does not involve a genuine conflict between science and religion. Again, both are born from our sense of wonder and awe at finding ourselves in this incredible world. Both involve much that we don’t understand. But history shows that it is unduly pessimistic to presume that limitations of current scientific understanding will not be overcome, and therefore conclude that resort must be had to an inscrutable supernatural cause. For instance, lightning was considered supernatural until 1751, when Benjamin Franklin showed otherwise. Sometimes I relate a story I heard from I. B. Cohen about a visiting minister to Harvard who had never been to New England before. Harvard’s preacher took him up to visit Vermont and they saw this beautiful farm there. The visitor was just enthralled with it. Just then the farmer happened to come by with his donkey and plow probably, and the visitor gushed and said, “Oh, it’s wonderful to see what you and the Lord have done here on this beautiful farm.” The farmer replied: “It is a beautiful farm. But you should have seen it when the Lord was taking care of it by himself.” My comment is just that “I think scientists are among those doing the Lord’s work.” Although science education and literacy are overall far weaker than befits the 21st century, there really are strengths to build on. Among them are science fairs. In the U.S. these are increasingly a really significant mode of “informal education.” Premier annual events for more than 50 years have been the Science Talent Search, long sponsored by Westinghouse, and the International Science and Engineering Fair, now sponsored by Intel. 
Anyone who attends these events or serves as a judge will become a lot more optimistic about our future. The high school kids who enter are doing the real thing; on their own they take ownership of a project. In the course of developing it, and exhibiting it, often at a series of fairs, they arouse the interest of friends and family and lots of curious neighbors. Both the Talent Search and the International Fair are conducted by a small nonprofit outfit, Science Service. It also publishes Science News, an eight-page weekly, written for laymen, which provides an excellent survey of what’s happening in all fields of science. For 30 years, Glenn Seaborg chaired the Board of Science Service, and a few years ago he recruited me as his successor. Science Service now hopes to get Science News into every high school in the country, via the web, and to further enhance the Talent Search and Fair. Recently, Intel undertook to sponsor the International Fair (ISEF), which is much less well known than the Talent Search (STS). The ISEF is held every May in a different city. It has grown to more than 1400 kids from about 50 countries, although over 90% are still from the U.S. Those kids are all winners of hundreds of local, state, and regional fairs, in which more than a million other kids took part! The ISEF also involves about a thousand volunteer judges and hundreds of volunteers who help in running it. Hundreds of prizes, many in the form of scholarships, are given. Considering all the friends, relatives, and teachers of the kids entering the preliminary fairs, all told there must be several million people with links to the STS or ISEF. The kids displaying and explaining their projects are fine ambassadors for science. I wish the major media would pay more attention, particularly to the ISEF. I’d like to see TV news programs include, just as regularly as the weather report, a one or two minute episode featuring a student presenting an engaging and instructive project. 
That would surely attract a devoted viewership, since so many other kids, parents, and teachers would want to tune in. A year’s supply of such segments could be taped, with unusual efficiency, at the annual STS and ISEF events.
I’m going to change tapes. Here’s a question about science in 2003, at the late afternoon of your career. In 1945, before they tested the first atomic bomb, there were serious questions about whether this device would ignite the atmosphere. Calculations were done, and the physicists convinced themselves that it would not. But there was uncertainty. We have now gotten to the point with our technological capability and even perhaps our basic science that the question can be asked: to what extent should simple curiosity be justification for moving into areas that may have the potential for catastrophic outcomes? All right? Now, did you read the New York Times on Sunday?
The last issue, Sunday, there was a book reviewed, a book by Martin Rees, Our Final Hour. Dennis Overbye wrote the review.
Yes. I did look at that review.
Well, the book described an experiment colliding gold nuclei on gold nuclei, which had the potential for essentially devouring the universe.
Yes. There was some funny business, worm holes or whatever.
Yes, whatever. And Martin Rees talks about a rending of space-time that would propagate through the universe at the speed of light and be done with everything. Now, is this hyperbole? Martin Rees, the Astronomer Royal of England, Cambridge Professor…
A pretty serious guy.
He’s a serious guy. But he’s raising very interesting and important questions. So my question is, as we look ahead, as you now sit there thinking, have we moved to a point where science is going to have to consider things in a new way?
Yes, for sure we need to consider such possibilities. And, of course, we have a good example with what happened with recombinant DNA in 1975, where the scientists, before they really turned loose full-scale research on it, had a meeting and addressed how they could take sufficient precautions.
That took place right here, pretty much.
Yes, in Cambridge. The City Council said, “Before we allow you to have these kinds of laboratories here we want to look into it.” They appointed a committee that had some scientists and lawyers and some ordinary citizens. They heard testimony both ways around. They came up with workable and effective procedures. This episode offers an encouraging model. Coming back to concern about science education and literacy, I’d like to mention a notion for a college-level core course for non-scientists. It might be called Great Experiments, as an echo of Great Books. The students would get personal experience, without necessarily having a regular science course, by doing experiments with things they’ve all heard about and know are important. They would read about, write about, and discuss the cultural and historical impact and consequences of the Great Experiments, considered from humanistic rather than technical viewpoints. And they’d devote one or two afternoons to each of the experiments, with no concern about getting “right” answers but rather getting “up close and personal” experience. They would do something with DNA. They’d build a primitive computer. They’d do a primitive version of NMR or some other kind of spectroscopy. They’d synthesize a chemical compound, perhaps indigo, a dye hugely important in trade on the Silk Road for many centuries and now still produced in great quantity, mostly to dye blue jeans. For instance, an outline of a DNA experiment has been prepared by our younger daughter, Brenda, who has a Ph.D. in molecular biology. The experiment involves extracting some DNA and holding it in your hands. You’ll work with a certain bacterium that in its native form is immune to UV light. But that immunity can be degraded chemically. Then you restore the immunity by splicing in a little piece of DNA that you can easily separate from something else.
My fantasy is that the students would find that it’s easy, it’s fun, and it puts them in contact with things they’re curious about. And it becomes so popular every Harvard student insists on taking it! I’d like to try out such a course, even in my so-called retirement years.
That would be terrific.
This project is only a partial rough draft now; I need to find a young collaborator to carry it on. There’s so much you can do with the web. In my freshman seminar, I see every week how skillfully the students fish out information from the web. The other day, a student told me about a project for a biology course. The aim was to find where a certain sequence of DNA bases, constituting a particular gene, might occur in animals. He said that in only half an hour, using the web, he found three very different kinds of animals that had that gene. Until just a few years ago, you couldn’t even think about doing such a project. Such powerful tools can surely revolutionize education. We need them. I don’t think we can provide an adequate corps of science teachers for K-12 in the foreseeable future, or perhaps ever. However, I’m convinced that this gap can be significantly offset by empowering able students to a much greater extent than occurs today. This becomes practical via the web. I’ve told you about students teaching students algebra almost 60 years ago back in my rural high school.
That’s part of what convinced me it could be done. But now I think many teachers feel they have to be authority figures. So they wouldn’t dare have the attitude of Mr. Drummond: “It’s okay if I don’t understand much about Algebra. I’ll just make sure you kids are learning it.” There’s another serious intrinsic problem in teaching K-12, and even beyond. It severely impacts teaching science, especially to minority students. From an early age kids are conditioned to view their teachers as judges who grade them. It needn’t be so. That was brought home to me when our older daughter Lisa had a year at Oxford, so experienced the famous tutorial system. Every week she had to deliver a ten-page paper to her tutor. He criticized it vigorously and thoroughly. That did not discourage Lisa, for two reasons. 1) The ideas were her own and her tutor clearly was helping her to sharpen them and her presentation skills. 2) The exams which came at the end of the year were set by a faculty group that did not include her tutor. So the tutor was not a judge, but a coach, helping her to develop her capacities. In all of our large cities, about 50% of minority students drop out without finishing high school. This is attributed to a nasty syndrome: if you try but don’t do well, it confirms the stereotype that you are inferior; not trying avoids that. In sports, those same kids will take strong criticism from a coach. I’m glad that now many experiments trying out new approaches in K-12 education are going on and in prospect. For instance, Leon Lederman is a great advocate of physics first in high school; I think such a change is good to try. I wish I had whatever it takes to get a range of schools to try out the “coach rather than judge” approach. It takes a big effort to do such things. Likewise to get an experiment going on reforming voting to avoid the dangers of the plurality system. 
I don’t expect to be able to accomplish much on such things, but keep talking about them in hopes that someone will take up the torch and do far better than I can. John, you’ve extracted some of my thoughts, delivered in a random way, about many things that I care a lot about. Thank you for your patience and astute questions.
Well, sir. This has been a good day and a half. I appreciate it.
It’s been lovely for me. I wish I had a chance to interview you!
It’s 3:35 roughly. And we’re going to call this an end.