This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of W. Conyers Herring by Lillian Hoddeson on 1974 July 29, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/4666-2
For multiple citations, "AIP" is the preferred abbreviation for the location.
From Herring's childhood and early education to his appointment as department head for the theoretical physics group at Bell Laboratories in 1956. Topics include graduate education at California Institute of Technology and Princeton University; Ph.D. in physics, 1937; early interest in astronomy; wartime work (hydrodynamics of explosions, underwater explosions). Much of the interview is devoted to brief discussions of individual publications; discussion of working environment at Bell Labs and experiences there from 1945 through the 1950s. Also prominently mentioned are: John Bardeen, Felix Bloch, Richard Milton Bozorth, Edward Uhler Condon, de Boer, Peter Josef William Debye, DeMarco, Dutton, William Fairbank, Enrico Fermi, Foner, Frobenius, Theodore Geballe, Gorkov, Holstein, William Vermillion Houston, Josef Jauch, Charles Kittel, Kunzler, Lev Davidovich Landau, Fritz London, Bernd T. Matthias, Robert Andrews Millikan, George Moore, Stanley Owen Morgan, Nichols, Obraztsov, J. Robert Oppenheimer, Gerald Leondus Pearson, Pitaevskii, Maurice Rice, Henry Norris Russell, Frederick Seitz, William Shockley, Shur, John Clarke Slater, Rado, Suhl, Dave Thouless, Titeica, John Hasbrouck Van Vleck, Pierre Weiss, Gunther Wertheim, Eugene Paul Wigner, Witteborn, Dean E. Wooldridge, Fritz Zwicky; American Institute of Physics, Bell Telephone Laboratories Journal Club, Bell Telephone Laboratories Library, International Conference on Semiconductors, Massachusetts Institute of Technology, Physics in Perspective, Reviews of Modern Physics, University of Kansas, University of Michigan Summer Symposium in Theoretical Physics.
We're going to continue our overview discussion of your publications. OK? We were up to the 1954 papers and you were just beginning to tell me about the "Thermoelectric Power" paper, and you suggested that I not actually read the papers relating to that, but that I go ahead and read the review paper (in Halbleiter und Phosphore).
Yes, that covers the theory of thermoelectric power in semiconductors and the low-energy phonons in thermal conduction, and I guess those are the main two.
OK, what about the paper — (#28).
"The Transport Properties of a Many-Valley Semiconductor" (1955), yes.
How does that fit in?
It's not especially related to the thermoelectric power paper. It has to do with the fact that, for a long time, ever since the thirties, in fact, I'd been concerned about the possibility that there could be band edge points that were the many-valley type or the degenerate type, and had anisotropic effective masses and so on. And now that data were coming in on semiconductors that seemed to establish that some of their band structures, band edge points, really had that sort of structure, I thought it would be appropriate to work out the transport properties in some detail. So I worked that out in this paper, and then there's a companion paper to that also, Herring and E. Vogt, 1956, "Transport and Deformation-Potential Theory for Many-Valley Semiconductors with Anisotropic Scattering." (#30).
Were you working closely with experimenters at Bell Labs on these papers, or were you primarily stimulated by books and published experimental papers?
Oh, a little bit of contemporary Bell Labs work, but these papers were more just sort of basic theory that didn't tie in very specifically with any particular piece of experimental work. This is in contrast to the thermoelectric power work, which was stimulated by Ted Geballe's experimental work.
I suppose that in a sense the second of these papers, the one with Vogt, supersedes the first, although there were several topics covered in the first that were not covered in the second. The second one also includes a very detailed discussion of how one calculates the mobility of a multi-valley semiconductor, starting with assumed known values of the isotropic and anisotropic deformation potential constants, or in turn, how one can use various types of experimental data to infer values of those deformation potential constants.
OK, now we go on to —
— let's see, there's one more on transport. That's the piezoresistance paper. That's a collaboration with an experimental program. One of the effects that was discussed in this "Transport Properties of a Many-Valley Semiconductor" was the piezoresistance effect — how if you strain a crystal, you lower — say if you have something like germanium, with valleys at the L points on the [111] axes, you strain it, you can lower the energy of one of the L points relative to the other L points, and thereby change its population. But since the conductivity due to any one valley is anisotropic, then the conductivity of the whole crystal becomes anisotropic once the different valleys no longer compensate each other. And that's the main piezoresistance effect. We exploited that in some detail in this experimental paper.
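[A minimal sketch of the repopulation mechanism described above, written in the standard many-valley notation rather than anything quoted in the interview: under a strain \varepsilon, the shear deformation potential \Xi_u shifts the energy of valley i by roughly
\[
\Delta E_i \simeq \Xi_u\,(\hat{K}_i \cdot \varepsilon \cdot \hat{K}_i),
\]
so for nondegenerate carriers the valley populations readjust as n_i \propto \exp(-\Delta E_i/k_B T), and the total conductivity
\[
\sigma \simeq \sum_i n_i\, e\, \mu^{(i)}
\]
becomes anisotropic, because each valley mobility tensor \mu^{(i)} has different longitudinal and transverse components, and once the n_i are unequal the valley anisotropies no longer cancel.]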
OK, you have a review paper here — and then you get into phonon-drag.
Yes. There's more of this phonon-drag work. This was a very extensive experimental program by Geballe and Kunzler, and we published several papers on it. There's this one in the Physical Review, general survey (1958). There's the Journal of Physics and Chemistry of Solids (1959), just a report, a brief report at the international conference, and then there's this big paper in the Bell System Technical Journal (1959). The basic idea there was that a magnetic field not only alters the distribution function of the electrons, and therefore their current response to a given electric field, but it also alters the phonon-drag effects as well, because it alters the distribution function of the electrons. At low temperatures in semiconductors, these effects on the phonon-drag part of the thermoelectric power alter both the magnitude of the thermoelectric power and give it an anisotropy, in particular a Hall-type Nernst-Ettingshausen component which can be much larger than the effects that you'd get without phonon-drag. This in turn can be used as a tool to tell you quite a lot about the details of the electron-phonon interaction. So we developed this in a great deal of detail in this big BSTJ paper.
We then enter the sixties—
Yes. Oh, let's see, I should just briefly mention this paper, "Spin Exchange in Superconductors" (Physica 24, 1958). I don't have a reprint of that. This was just a conversation that I had with Bernd Matthias, in which he was concerned about the effects of rare earth impurities in superconductors. The Bardeen-Cooper-Schrieffer theory was fairly new at that time, and it occurred to me that on putting in a magnetic impurity, the magnetic impurity would couple differently to electrons of parallel or antiparallel spin, and consequently would tend to break up the Cooper pairs and thereby reduce the tendency to superconductivity. I worked out a very crude theory which subsequently was superseded by much more adequate theories by others. But he wanted to report this at a conference over in Europe, so I just quickly wrote up something and sent it along with him.
I see.
Now, let's go on. One thing that's a continuation of all this collaboration with Geballe and others, in experimental work in semiconductors, was this paper on "Effect of Random Inhomogeneities on Electrical and Galvanomagnetic Measurements," paper #40 (1960). In that, there had been some funny things that we couldn't quite understand on the piezoresistance effects in germanium that set in at low temperatures. One of our speculations was that the troubles might have been due to inhomogeneities in the specimen — maybe slight inhomogeneities in doping or something like that, which could be either randomly fluctuating on a scale where they're fine compared with the size of the specimen, or they might be gross.
It was the fine-scale fluctuations that concerned me, and just using classical transport theory, classical continuum theory, but assuming the local conductivity or Hall coefficient to be a varying function of position, I worked out some prediction of what sort of effects that would have on various types of electrical properties — conductivity, Hall effect, magneto-resistance, thermoelectric power and so on, particularly magneto-resistance. Another motivation for doing this work was that everybody finds that the magneto-resistance fails to saturate, as the magnetic field goes up, as well as it should. Now, it's known that there are quantum effects that can cause it to fail to saturate, but it looked to us as if the magneto-resistance was departing from expectation: as far as quantum effects were concerned, magneto-resistance should go up, saturate pretty well, and then take off again. In these semiconductors the thermoelectric power in fact did just that, but the magneto-resistance started to bend over, and then just continued on up. And we wondered if that could be due to inhomogeneities.
So I worked out a sort of perturbational treatment of the effect the inhomogeneities would have. They would have that sort of effect, but it wasn't clear whether that was really what was causing the experimental results or not. However, there was an appendix to this paper that I think is probably the most important part of it. See, throughout the major part of the paper, I treated the inhomogeneities as weak, so that they could be handled perturbationally. But in this appendix, I treated the other case, of an inclusion in an otherwise uniform material where the inclusion had a different Hall constant or some similar different property from the rest of the material, I think specifically a different Hall constant. And what I was able to show was that the distortion of the current lines in going around the inclusion, which in the absence of a magnetic field would be more or less spherically symmetrical around it, just flowing out or in, became in the presence of a magnetic field a distortion that was considerable out to a large distance, in a long cigar-shaped region whose length-to-width ratio was of the order of the value of ω_cτ. This may have some important consequences.
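[For reference, the dimensionless parameter ω_cτ above is the standard combination
\[
\omega_c \tau = \frac{eB\tau}{m^*} = \mu B,
\]
the cyclotron frequency times the scattering time, which is also the tangent of the Hall angle; at high fields it can be large, so the disturbed region can be much longer than it is wide.]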
In fact, some people have invoked this as an explanation of the non-saturation of magneto-resistance in the alkali metals and things like that. The situation is still unclear. So that's I guess about the last thing that I've done on semiconductors. Well, not quite. There's one on quantum transport much later on. Now, let's see. Oh yes, this paper on "The State of d Electrons in Transition Metals" (J. Appl. Phys. 1960). That was one of the invited papers at the annual magnetism conference. At that time, there was a great deal of hullabaloo
about the finding of Weiss and De Marco. They had studied the electron distribution by X-rays in transition metals, and they had concluded from their X-ray intensities that the effective number of d electrons in some of the transition metals was very different from what it was in the corresponding atoms. And this led to a flurry of theoretical papers, with all sorts of new models of transition metals and so on, and some of us around here had been discussing that, and we were rather skeptical.
It occurred to me that just on grounds of simple electrostatics, you couldn't possibly transfer that many electrons from the d shell to the s shell, without an enormous increase in electrostatic energy. I developed that as one of the main points of this paper. The other main point was the one that has perhaps attracted more lasting attention. Because the experiments were soon reinterpreted, that situation died off, but the more lasting impact of the paper was just the fact that it called attention to the fact that the question of whether the carriers of the magnetic moment in transition metals were to be viewed as localized or as itinerant could be given a reasonably definite answer in terms of experimental facts — namely, one should study the Fermi surface.
I called attention to some theoretical work that had been done that seemed to indicate that the Fermi surface was a precisely definable concept even in the presence of many-body interactions; if one studied the Fermi surface, and in particular the Fermi surfaces for up-spin electrons and for down-spin electrons, then one could see whether or not the difference between those two Fermi surfaces corresponds to the magnetic moment. I closed with an exhortation to the experimentalists to get busy on the de Haas-van Alphen effect and things like that. Of course this did prove eventually to be very fruitful.
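[In the itinerant picture being described, the connection is simply that the magnetization per unit volume is
\[
M = \mu_B\,(n_\uparrow - n_\downarrow) = \mu_B\,\frac{V_\uparrow - V_\downarrow}{(2\pi)^3},
\]
where V_\sigma is the k-space volume enclosed by the Fermi surface for spin \sigma; this is a standard relation, not a formula quoted in the interview.]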
Alright. Now, this is a review paper, #41.
This is not a review paper, it's sort of a paper on the science of science, on what the science of materials is, how much activity there is in the field, why it's important to study it, how it breaks down, and how the different parts of it interact. Things like that. You might call it a little sociology of science. It was the introductory chapter of that book (Perspectives in Materials Research, 1961).
Number 42 — "Current State of Transport Theory" (1961).
This was an invited paper at the Prague Conference (1960) in which I just called attention to what seemed to me to be a few important current developments. I did, however, also make one or two little original contributions in this. Let's see, one had to do with the question of multi-phonon scattering. There had been some papers that suggested that scattering of an electron in a semiconductor by interaction with two phonons, rather than with one phonon at a time, might give very important effects. I argued that that conclusion was usually wrong, and that if one looked at it right, the two phonon effects were a relatively minor correction to the one phonon effects except in certain strong scattering situations and so on.
Let's see, I think it was in that paper also that I pointed out a very simple thing, no, I guess it was in the earlier one, the summary paper at the Rochester conference, that I pointed out that one can't ever get a negative magneto-resistance out of a simple Boltzmann equation approach with a constant number of scatterers, because of a variational principle. In this one, I called attention to some of the missing points in the theory of impurity scattering, things that weren't understood. The importance of situations where you had scattering that was too strong to be taken account of by a Boltzmann equation. And I guess that's about all. Not very much there.
Then you go into Heitler-London...
Ah yes. Somewhere around the early sixties, I had been approached by the Seitz-Turnbull series, for Academic Press, to write a review article on exchange interactions, and then subsequently, since Academic Press was the same publisher that got out the Rado-Suhl magnetism series, those authors approached me to make a contribution for them. By this time, Phil Anderson and I had decided to divide the material up: he would do superexchange, and I would do direct exchange and itinerant exchange, I think. We'd do these for both the Rado-Suhl series and for the Seitz-Turnbull series. So this got me started thinking about exchange, and I started going back and reading some of the old papers, and by this time there had come to be quite a controversy, largely stimulated by Slater, over whether the Heisenberg type of
coupling had any really sound theoretical a priori basis. And so I was reviewing all the old literature of this, and the Heitler-London model from which the exchange coupling was usually derived, and for a long time, I just got more and more confused. And I thought I saw some relatively obscure papers that had the clue to giving the answer, and it turned out they didn't. Finally I did get an idea, and just at the time I got the idea well developed, I was invited to contribute a paper to the Festschrift issue of Reviews Of Modern Physics that was in honor of Wigner's 60th birthday.
Well, usually I decline things like that, because I figure to contribute a potboiler is no real honor to the person you're trying to honor, but in this case I did have something that I thought was important, so I agreed I would contribute a paper. And except for the fact that there was a horrible boo boo in the paper — I never thought that way, but somehow the words came out that way, on an incidental point — otherwise, it was a fairly good paper. What I showed there was that one could say some completely rigorous things about the interaction of two atoms, each of which had a non-zero spin, in the asymptotic limit where they're widely separated so that the interaction is very weak. I showed rigorously that this was indeed describable by a Heisenberg Hamiltonian, and I gave a prescription, not in detail but sort of in outline, of how one could go about explicitly calculating the magnitude of this coupling. And then the explicit calculation of exactly what it was, was carried out in this paper with Flicker, #44, in the Physical Review (1964).
Why is it two years later?
I'm coming to that. I was busy. I was working on these review articles and I thought, well, I really owed it to the publishers to get these review articles to them so I wouldn't be holding up publication of the book, and so on. Flicker was a graduate student at one of the New York City schools, and he had wanted a summer job. So he came in and worked for me over the summer, and I set him to this job of calculating the interaction of two hydrogen atoms, and we finally got through with it and got an answer. But then it was sort of up to me to write the thing up, and I was
distracted with this other thing. I thought, well, gee, ever since the 1920's nobody has ever done this, there's no hurry about it. I didn't know anybody else who was working on it. But then suddenly, a year or two later, I discovered in one of the Russian journals, I think it was the JETP, no, the Doklady, yes, that Gorkov and Pitaevskii had done the same calculation. They referred to my Reviews of Modern Physics paper, and they also referred to a calculation by Landau, which had not appeared over in this country, of the analogous problem of the even-odd splitting in the hydrogen molecular ion, which incidentally had been done independently a number of years earlier by Ted Holstein in an unpublished report which I had referred to in my Reviews of Modern Physics paper.
The basic idea there was one that had occurred independently to Holstein, to me, and to Landau. And I learned about Holstein's before my work was published, so I referred to him. But this was just for the hydrogen molecular ion, and it was Gorkov and Pitaevskii who realized that this could be applied also to the exchange coupling. I don't know whether that occurred to them independently of my Reviews of Modern Physics article or not; they did cite my article. At any rate, I was very alarmed, and I thought, gee, what have I done to this poor young man who worked so hard all summer, and now we've been scooped!
But then when I read the fine print, I discovered they'd made a mistake, just a computational mistake, but the fact that they had made a mistake in carrying out the algebra gave us justification for publishing a Physical Review paper on this. So we published the whole story in the Physical Review (134, 1964) (#44). But not quite perfectly. It turned out that for the evaluation of the final integral, which we did numerically, we got 0.821 as the coefficient. Somebody wrote me immediately after and said, "Look, you can do this analytically, and the correct answer is 0.818."
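[For reference, the asymptotic form in question, written with the 0.818 coefficient quoted above in Hartree atomic units (R the internuclear separation in Bohr radii; the precise factor-of-two convention is as defined in the paper), is
\[
J(R) \sim 0.818\, R^{5/2}\, e^{-2R}, \qquad R \to \infty,
\]
so the singlet-triplet splitting of two well-separated hydrogen atoms falls off exponentially, with a power-law prefactor in front.]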
Then on this direct exchange work I continued to do some more work on that, and the summary of all the work appears in Volume 2B of the Rado-Suhl series. That's the thing you should read, although you don't have to read all of it, because that's a review of the whole literature of the subject. I can show you the parts that represent my contributions.
In this I developed at greater length a theme that was also in the Reviews Of Modern Physics paper, namely, that one can show from very basic general principles that the exchange coupling of two atoms at large distances is almost always antiferromagnetic. In other words, you can classify a very wide category of cases in which you can say that it must be antiferromagnetic. There are cases in which, if the two entities are not identical, then your proof doesn't go through, and if they have orbital degeneracy, it doesn't go through. But in essentially all other cases, it's surely antiferromagnetic in the limit of large distances.
Now we're discussing your article in the Rado-Suhl book, Conyers, on "Direct Exchange between Well-Separated Atoms" (the book on Magnetism, 1966). (#48).
It's Section IV of this. Now, there may be some reference back to some of the basic concepts that I've developed in other sections, so I'm not positive how well Section IV can be read by itself, but I think most of it can be. Section IV is my work, and all the rest is just review, of course with some of my own viewpoints contributed at every stage. But the work that I've been talking about is all summarized in Section IV.
OK. Now we come to charge fluctuations. Paper number 45.
Yes, now, that's another outcome of my review article work ("Energetics of Charge Fluctuations in Transition-Metal d Bands," 1964). I was supposed to contribute two articles, one on direct exchange, which is the one I've just been talking about, and the other on itinerant exchange, exchange in metals.
This was a contribution to the same Festschrift?
No, not to the Festschrift, to the Rado-Suhl. I guess originally it was to be Rado-Suhl and Seitz-Turnbull but neither of them ever appeared in Seitz-Turnbull. Phil Anderson contributed an article to each, one of them being a condensation of the other. But I only contributed to Rado-Suhl. One of the central points in the theory of ferromagnetism of the iron group metals is what sort of exchange effect is responsible for the ferromagnetic alignment of the spins? Van Vleck, in collaboration with Henry Hurwitz, a number of years earlier, had proposed that the d electrons in the transition metal — you usually have of course a non-integral number of d electrons per atom — he proposed that they fluctuate around from atom to atom, but in such a way that only very rarely do you get — you see the mean number is between N and N plus 1, so most of the time they fluctuate with either N or N plus 1, and only very rarely would you get N minus 1 or N plus 2. They suggested that conceivably, you might get enough of those situations so that they could play a significant role in the exchange coupling, at least in the case of nickel. For example, nickel you see is between 9 and 10 d electrons, so you'd say, most of the time you have either zero or 1 d holes on an atom, but if you have zero d holes on an atom, there's no magnetism.
If you have 1 d hole, there's magnetism, but there's certainly no Hund's Rule coupling. You have to have two d holes on the same atom before the Hund's Rule coupling on the same atom can exert an effect in aligning the d holes parallel. And so there was a question: How much of the time do your nickel atoms spend in d8 configurations? And if you calculate how much electrostatic energy it costs to put the wrong number of d electrons on an atom, in the crudest possible way, you find it costs 25 volts or something like that, and that's just unthinkable, so there'd be very little fluctuation to that sort of thing. If you calculate it really carefully, allowing all the other shells of the atom to readjust and so on, and using the atomic term values, you find it costs only 12 or 13 volts. But the argument that I made - I had made this earlier to Van Vleck and he had I think made mention of it in his paper in the 1953 Reviews of Modern Physics - was that the sp electrons should screen the d electrons somewhat, so that their interactions would be considerably weaker, and it couldn't cost so much energy to put the wrong number of d electrons on an atom, because that would be compensated by sp electrons going on or off that atom. And I tried to develop this in more quantitative detail in Chapter 9 of Volume IV of Rado-Suhl, and since I was still working on it, it hadn't come into print yet.
I guess I'd submitted the manuscript but it wasn't in print yet. No, I guess I hadn't even submitted the manuscript. By the time of the 1965 magnetism conference in Nottingham, I gave a paper on my thoughts on this at Nottingham. This was a rather condensed version of what subsequently came out in
Chapter 9 of the Rado-Suhl book, in which I argued that it really only cost maybe a couple of volts or something. And a lot of people didn't quite believe that, perhaps still don't to this day, but there's coming to be some evidence that that may be correct. Gunther Wertheim has some experiments with ESCA that seem to give some information on this, and they tend to favor my picture at the present, but I wouldn't say that's definitive yet.
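[In later terminology, which is not used in the interview, the energy being estimated here is essentially the effective intra-atomic Coulomb energy for changing the d count on a single atom,
\[
U_{\mathrm{eff}} \approx E(d^{\,n+1}) + E(d^{\,n-1}) - 2E(d^{\,n}),
\]
and the argument is that screening and readjustment by the sp electrons reduce this from the bare atomic values of order 10 to 25 eV quoted above down to something of the order of a couple of eV.]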
So we'll have to wait and see. Now we're coming to 1966 work. I guess by now you've finished the review article.
Yes. Yes. This "High-Field Susceptibilities of Iron and Nickel" (No. 46 — J. Appl. Phys. 1966) turns out to be a completely wrong paper. The theory part of it is right. There are two papers essentially juxtaposed in this. Let me just check here in this Journal of Applied Physics issue. This is another magnetism conference, the annual one. The experiments were measuring the high-field susceptibilities of ferromagnetic metals, the idea being that if you have an up-spin and a down-spin Fermi surface in the d band, then you've got a high density of states for both up-spin and down-spin, and consequently, if you apply a magnetic field, you can shift the relative numbers of up and down spin d electrons by quite a bit, you get a very high susceptibility; whereas if you have only a down-spin Fermi surface in the d band, and the up-spin d band is filled, then applying a field, you have to transfer electrons from the d band into the sp band which has a low density of states.
That costs a lot of energy so you get a very small high-field susceptibility. So all you have to do then is to measure the simple Pauli susceptibility of a ferromagnet to decide which of those situations you have. And I had suggested this experiment several years earlier to some of the experimentalists, to Bozorth when he was here, and to others. And after Bozorth retired and got working at the Naval Research Laboratory, with their big magnet down there, and also at IBM, he and some of his collaborators at IBM decided to do the experiment.
It's very simple for a theorist to talk about, although even for a theorist, there are some fine points. Toru Moriya had pointed out to me that one has to be very careful in making allowance for the Van Vleck orbital susceptibility in making inferences about what the Pauli susceptibility is, but I had taken account of that in this. The experimental problem of measuring this very tiny change in magnetic moment, in the
presence of the huge ferromagnetic moment that you have in a saturated ferromagnet, is very nontrivial experimentally. They thought they had it licked and they found a rather large Pauli susceptibility in nickel, and I put the pieces of the theory together and concluded that the only explanation, if their experiment was right, was that contrary to what everybody was thinking, nickel must have both the up-spin and down-spin Fermi levels lying below the top of the d band. And Foner at MIT was doing similar experiments, and he got the opposite experimental result, and let's see, who did theory for him? I don't know whether it was Art Freeman. You can look that up anyway.
These two papers anyway appeared next to each other. The theory portions of the two papers were almost identical, but the conclusions were opposite, because the experimental results were opposite. And it turned out that they were right and we were wrong.
Oh. I see. I guess we've discussed this already —
Well, of course, there are a lot of other things in Numbers 47 and 48 that I haven't talked about, that were original contributions too. I don't know how much you want to go into those? It's a little hard to detail them. In #47 they're such little things — formulation of what one means by a spin Hamiltonian and how one formulates it, how one allows systematically for the effect of spin-orbit interaction. I put in a lot of effort on the so-called nonorthogonality paradox, which is a lot of formalism that can be very fascinating once you get into it, but one really wonders why anybody should waste his time on it, because it seems unlikely that it will have any real physically interesting results, but since I had to review the literature, I had to delve into it.
Oh yes, and I did sort of rewrite some of the Freeman-Watson work on Hartree-Fock type calculations of exchange coupling of transition metal atoms and so on. I guess of more importance are some of the things that are in Volume IV. I'll get the book down, to refresh my memory. Well, I guess the first important thing in this is on the question of the spin susceptibility of a free electron gas. One should be able to formulate that in terms of the Landau Fermi-liquid theory, and the question is, how does one do a perturbational calculation of the Fermi-liquid interaction function for a free electron gas? I developed that from the ground up, using the Gell-Mann-Brueckner type of approach and applying it to the interaction function itself. Subsequently, essentially simultaneously I guess, Maurice Rice at Cambridge was doing a similar calculation for his thesis, and he published that also, which sort of partially overlaps mine.
My treatment gives more detail in some respects, but his treatment makes a harder effort to go in the direction of real normal metal densities. Whereas I was concentrating just on the high density limit. Then, in this chapter (4) I took up the exchange coupling in the so-called Wigner crystal, and pointed out the fact that pair exchange effects would give antiferromagnetic coupling, and ring exchange effects of odd-membered rings would give a ferromagnetic coupling, and this is an effect that was I guess discovered at the same time by Dave Thouless in England, and applied to solid helium-3, where it does seem to be of some importance — the competition between the ferromagnetic three-member ring coupling, and the antiferromagnetic pair coupling. This (Chapter 5) was largely clarification of some of the issues that Overhauser had raised. I don't know that there's anything I need say too much about here. Chapter 6 is historical, covering some of the material that I had in this paper, 1960 or whenever it was, on the state of d electrons in transition metals. Just review.
Chapter 7?
Yes. This (Chapter 8) is review of some of the material. This (Chapter 9) is the stuff that I said was my extensive foray into the question of screening, yes. This again (Chapter 10) is review of several of the papers on strong correlations. The Kanamori scattering-matrix approach, the Hubbard Green's-function approach, and the Gutzwiller variational approach. Again, I evaluated all these things in the light of the conclusions I'd reached in Chapter 9. Here again, all of these chapters have a little original material in them, but nothing that I think I need to comment on to you. Chapters 11 and 12, then 13 — on spin density waves and antiferromagnetism in itinerant metals. Finally, in Chapter 14, I came back to some of the material that I'd worked on in this paper with Kittel on theory of spin waves in ferromagnetic metals, and on the theory of the Bloch wall on the itinerant model —
— back in 1951?
Yes. I brought some of this stuff up to date, and said a little bit more than had been said in these earlier things about the rather interesting questions of spin wave currents. What does it mean to say that a spin wave carries a current, and is this something that could be measured, and would it give interesting information? Well, I think it would give interesting information if it could be measured. I haven't really devised a good experiment to measure it.
Now we have a paper on "Quantum Transport in High Magnetic Fields" (J. Phys. Soc. 1966 — No. 49).
Yes. That was the International Conference on Semiconductors at Kyoto, and I was invited to give an invited paper there. I think I offered them a choice of two or three topics that I would be willing to try to review, and this was the one that was picked. It was a field that I'd been interested in because of this earlier work on failure of magneto-resistance to saturate in high magnetic fields in semiconductors. There had been a number of papers in the literature that showed that one cause of that nonsaturation could be the fact that one had to give up the Boltzmann equation in momentum space and go to a quantum treatment in which one worked with the quantized Landau levels and so on. And some of these treatments had been very abstruse density matrix things. And it occurred to me that there should be a simple and perfectly rigorous way of looking at these quantum effects, essentially a Boltzmann equation in the space of Landau levels. And so this was one of the main points that I wanted to make in this invited paper.
But before making it, I tried to gather together what one knew in a fundamental way about the theory of irreversible processes as applied to quantum transport in high magnetic fields. One thing that I called especial attention to was a point that had been made by a Russian, Obraztsov; I had learned about it originally, before it was fully published, in some correspondence with some of the Russians. He showed that the diamagnetic currents that flow around the surface of a Landau diamagnet are an important part of the transport process, and that when one is calculating things like thermoelectric powers and so on, one has to take account of those or one won't get the right answer. So I developed all these fundamental principles in as lucid a way as possible, and then concentrated on the explicit calculation of transport coefficients. I added a little bit at one or two places to some of the work that the Russians had done, but this was basically a review paper. And the part about using a Boltzmann equation in Landau level space, well, it certainly isn't new.
It was exactly what the Rumanian Titeica had done back in the 1930's. But nobody ever believed it, and I just argued that, in view of Van Hove's work and so on, this was rigorously justifiable. And this sort of served as a criterion for telling which papers in the literature were right and which were wrong.
Interesting. Well, now we come to No. 50, "Simple Property of Electron-Electron Collisions in Transition Metals" (1967).
Yes. That's an interaction with an experimentalist. The Australian, G. K. White, had been working at Bell Laboratories for a while and had been measuring the thermal conductivity and electrical conductivity of transition metals. I guess his experiments had been particularly on nickel, and the ratio of those is the Lorenz number, which, according to standard transport theory, should approach the universal value, 2.5 times 10^-8 in some suitable set of units, as the scattering becomes completely elastic, which it usually does as you get to absolute zero, because everything's dominated by impurity scattering. He found that this was of course true, but that if one took the electron-electron contribution to the resistivity - which is temperature dependent and gives the resistivity contribution proportional to T squared, and that dominates the temperature dependence at low temperatures in a lot of these transition metals - and the corresponding temperature-dependent part of the thermal conductivity, the ratio of these two gave a value considerably different from the classical ratio.
And the question is, what should that be theoretically? Well, in general for electron-electron scattering, the theoretical ratio depends on the details of the electron-electron scattering. You can't give any universal value. And I had the bright idea, which doesn't seem to have turned out so well in practice although I still think it was a good idea, that if the band structure was sufficiently complicated — you just have so many different possibilities of electron-electron collisions, you can just classify them in a dozen different types or something like that — maybe the number of different types is so large that one can treat them sort of statistically, and say that on the average, they essentially randomize the wave vectors or something. And if that happens, then it turns out that one can compute a definite unique value for the Lorenz number which is 1 times 10^-8 instead of 2.5. At that time, that seemed to agree with White's experiments, so we published these two letters together. Subsequent experiments on other transition metals and so on don't seem to have given values that are nearly as close to the 1 as I would have expected, although they're usually rather smaller than the 2.45. There are also some interesting effects that I don't think I went into in this letter, on how these behaved when you apply a magnetic field. I guess that work hasn't been published yet. Oh, I said 1 — no, the correct prediction from the theory is 1.58.
I made a mistake in the Letter and it was pointed out to me. It was just a — I got the ratio here as a ratio of two definite integrals, and then I just whipped these through on my slide rule with a pencil and paper, and got 1, whatever this was. Yes, 1.08 here, and if one really evaluates the integrals correctly, it's a ratio of 1.58. Same sort of mistake that Gorkov and Pitaevskii had made in the exchange calculation, only this time I was the one who made it.
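[For reference, the "classical ratio" referred to here is the Sommerfeld value of the Lorenz number, which holds when the scattering is purely elastic:
\[
L_0 = \frac{\kappa}{\sigma T} = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 \approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}.
\]
For the electron-electron (T-squared) parts of the resistivities, the randomized-collision estimate discussed above gives the smaller value of about 1.58 x 10^-8 in the same units, the corrected form of the 1.08 that appeared in the original Letter.]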
Now paper #51, "Gravitationally Induced Electric Field near a Conductor and its Relation to the Surface-Stress Concept," Phys. Rev., 1968.
This "Gravitationally Induced Electron Field," yes , this is just a matter of some things that some of us got talking about. There was a Physical Review letter by Witteborn and Fairbank that purported to measure the effect of gravity on an electron. And in order to measure this, one has to eliminate electric fields to a fantastic accuracy, because the effect of gravity on an electron is about the same as the effect of a charge, the electrical effect of a charge of another single electron 30 feet way. And it seems almost incredible that one could measure any force that small. In other words, that one could get rid of electrostatic fields to that accuracy.
And what Witteborn and Fairbank had tried to do was screen out electric fields by having the electron move vertically in a hollow copper tube. And they had relied on a theoretical paper by Schiff and Barnhill, that had argued that in such a copper tube, there would be a contact potential difference between top and bottom, that would be — let's see. Essentially what they had computed I think was that the work function difference between the top and bottom of the metal, due to the fact that there was a gravitational field acting on the electron inside the metal, would be zero.
And they'd used a very ingenious argument to prove this, and yet to some of us who had a background in solid state physics, the idea seemed ridiculous, because it seemed that the bottom of the metal is compressed, and when you compress a metal you change the lattice constants. Changing the lattice constants changes everything in sight, you can change the work function by quite a sizeable amount, and if you say that, well, one in a million compression will change the work function by something of the order of one part in a million, then you get a field that's 10 times larger than what they're expecting to measure. So there was a group in Texas that argued in favor of the large contact potential difference, but not in a sufficiently convincing way to convince the theorists at Stanford, so there was a controversy there.
The theorists at Stanford said, "Look, we've got this very elegant way of using a reciprocity theorem to compute it, it's absolutely rigorous, and the answer comes out so and so," whereas the people at Texas used a number of arguments that — well, they made good physical sense but they weren't obviously completely rigorous. So it occurred to me that one ought to be able to compute the correct answer, using the Schiff-Barnhill type of reasoning. And it turns out that they had neglected a very subtle effect, namely, that when you place a charge outside a metal surface, you develop a stress, a tangential stress, a surface tension type of stress, in the surface, underneath the charge, that tends to stress the underlying layers of metal.
So I developed this in some detail, and showed that if you did the analysis correctly, you got the same answer regardless of whether you used the Schiff-Barnhill method of calculation or the commonsense ideas of the solid state theorists on work function; and that one should therefore expect much larger fields than what had been apparently observed. What it would mean then was that it was a mystery why the experiments came out as they did, and this mystery has survived to the present day. More and more people have done experiments on change of work function with stress and so on, that always give values of the order of what one would expect. But why the Witteborn-Fairbank experiment came out as it did is still a mystery.
There are other things that one would suspect of being able to perturb their experiment too, and I've just gotten a preprint from Humphrey Maris,* who's argued that the thermodynamic fluctuations of the microwave radiation field in the tube should completely bollix the experiment too. But that's beside the point. I just did this one aspect of what would be the effect of gravitational stress on the work function.
Let's see. Now what is left?
There's only one other physics paper here, that's this one—
( #54) "The Driving Force for Diffusion" (1971).
That's nothing except a defense of some of the work that I'd published back in the early fifties, on theory of sintering and surface tension and things like that in solids. There had been a number of papers since, largely in Scripta Metallurgica, that had been off the beam in one way or another. The most recent of those, while criticizing all the other papers, also criticized or differed with my conclusions, and so I just wrote this thing in rebuttal. This is a rebuttal to a paper by Stevens and Dutton. I sent them a preprint of my reply, hoping they'd withdraw their paper. They didn't, but within the next year they wrote me a letter saying they were writing another paper of some length, elaborating on the fact that I was right and they were wrong. *(Now known to be incorrect. C.H.) That completes all the scientific work
including books as well?
Yes. Oh, wait a minute — did I mention any of my war work on underwater explosions? I think I did. Yes.
Only the published work here. I think there are two published papers.
There should be three altogether, I guess.
There are papers #7 and #10.
I see. There was this thing, I think I may have shown you, in this thing that the Navy released on underwater explosions. Yes, I remember, I did describe it to you, on the pulsations of the gas bubble produced by underwater explosions.
Oh yes, that's the second paper.
Rather interesting piece of work. There were two papers here, yes.
I'm also interested in your nonscientific papers. Here is one paper on "The Need for Reviews" (1968) (#52). There are two on reviews, one in '68 and one in '69.
Yes, this is the main one, this one in Physics Today (#52). For the other one, the Chemical Society people asked me to take part in a symposium they were having on review literature, and I said alright, I would talk. Then they said, "Well, won't you give us a manuscript?" I said, "Well, essentially everything that I have to say - almost all of it - is in this anyway, I really don't want to contribute a manuscript." But they twisted my arm, and so I did contribute a little manuscript that does somewhat the same thing as is done here, only it is oriented a little more toward nonphysics fields than this one. (No. 53, in J. Chem. Doc. '68).
Now for these articles in Physics in Perspective...
Yes. There are two versions of my article on information and communication in physics. The longer version is in Volume 2B; Chapter 13 of Volume 1 is a condensation of that, so everything that I've done is in Volume 2B on that. My other contributions to the Physics Survey are in Volume 2C. All of this is sort of the nature of what you might call operational research on how the physics community operates, and Volume 2C was a catch-all. I'd been chairman of the Data Panel of the Physics Survey, which was supposed to collect and process data for the use of all parts of the survey study. Most of what we collected then appeared in the other volumes, but there was a sort of a residue of stuff that we had collected, that we thought might conceivably be useful to somebody some day, that hadn't appeared anywhere else in the other volumes.
That's what we put together in Volume 2C. Again, a lot of this is a collaborative effort, although as you can see here, the members of us that contributed, that really took any part in the preparation of 2C, were relatively few. I did most of it. Teaney did a great deal, and he was our staff scientist, and Keyes did some, Morse a very little. Let's see, the Table of Contents is somewhere here, yes. There's a very long section on manpower data that was not mine. That was done by Teaney and Beverly Porter at the AIP. Most of the rest of this I put together, although Keyes and Morse contributed small portions here and there.
Can I, from this manpower data, figure out when solid state physics took off?
Not too well, because this is based on the National Register, and I think the earliest we had on that was either '60 or '62. But when solid state physics took off — that's in this other thing that you're going to look at in the Perspectives in Materials Research. Let's see if I've got that somewhere —
Well, this has been extremely helpful. Thank you. I hope to continue our interview soon. I particularly would like to have a conversation with you about the period when you first came to Bell Labs.
Yes. OK.
Now, I'm going to start getting copies of the papers. We're going to make a big pile of them, for your file at AIP.