Richard Garwin - Session IV
Usage information and disclaimer
This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Richard Garwin by Dan Ford on 2004 July 3,
Audio and video interviews about the life and work of Richard Garwin, 2004-2012
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview Richard Garwin discusses topics such as: low-temperature physics, cryogenics, Los Alamos Scientific Laboratory, hydrogen bomb, International Business Machines (IBM), superconductivity, nuclear magnetic resonance, John Tukey, fast Fourier transforms, computers, Erwin Hahn, patents and licenses, lasers.
This interview is part of a collection of interviews on the life and work of Richard Garwin.
What I wanted to do is see if we can get out of the nuclear weapons business in 15 minutes and then go onto other things. I guess one of the main things that I want to ask about the nuclear weapons business was, essentially, what was the state of cryogenics in the late '40s and 1950s? Were people already industrially using a lot of liquid nitrogen or liquid this-and-that, or was this something that had to be developed from scratch?
In the 1940s and '50s, there was already a big industry in liquid nitrogen and liquid oxygen, particularly, for chemical and industrial purposes. We had, of course, a rocket program using the captured German V-2 rockets and some other rockets under development, so that called for liquid oxygen and some liquid hydrogen. The expertise in the industry came in part from the National Bureau of Standards, and of course there were — in research, there was use of liquid hydrogen and liquid helium.
At University of Chicago, for instance, there was, under the west stands — where the reactor had been — a low temperature program under Earl Long where they had not only hydrogen liquefiers but also helium liquefiers that supplied the research establishments at the University with liquid hydrogen and liquid helium. In small amounts, one could have these things all right.
For my work at the University of Chicago, for instance: since particle physics looks at the simplest collisions possible — those would be proton beams or electron beams interacting with protons in the target — in order to minimize background, you used liquid hydrogen, which has no nuclei other than protons. Or, to have interactions with neutrons, the simplest way to acquire neutrons was to use liquid deuterium, or high-pressure gaseous deuterium, so that each of the nuclei had a proton and a neutron.
At high energies, the protons in the nucleus act independently of the neutrons, so it was not as good as having a neutron-only target: you had the background from the proton-proton collisions in addition to the proton-neutron collisions that you were investigating, and that had to be subtracted off. That would mean, typically, a run that was four times as long to determine the proton-neutron collision behavior with the same accuracy as if you didn't have the proton-proton collisions at the same time. But it was a tolerable burden.
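The several-times-longer run follows from Poisson counting statistics: when you subtract a measured background, the variances of the two counts add. A back-of-the-envelope sketch (the function and the equal-rate assumption are mine, not from the interview):

```python
def run_lengthening(signal_rate, background_rate):
    """How much longer each run must be, relative to a background-free
    measurement, to reach the same statistical error on the signal when
    a background must be measured separately and subtracted.
    Poisson counting statistics: variances add under subtraction."""
    # Deuterium run counts signal + background; hydrogen run counts
    # the background alone. Variance of the difference, per unit run time:
    var_with_subtraction = (signal_rate + background_rate) + background_rate
    var_background_free = signal_rate
    return var_with_subtraction / var_background_free

# With a proton-proton background comparable to the proton-neutron signal,
# each run must be about three times longer -- and there are two runs, so
# total beam time grows by a factor of a few, the order of Garwin's "four".
print(run_lengthening(1.0, 1.0))  # 3.0
```

The exact factor depends on how beam time is split between the deuterium and hydrogen runs; this just shows why it is a factor of a few rather than of two.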
As far as the cryogenic technology that was acquired, was that something that, when you ultimately went to Los Alamos, that had to be done on a bigger scale than it had been done before? What was the cryogenic challenge?
Well, for Mike it had to be done on a much bigger scale. For George, there were only a few grams of liquid hydrogen all together, and that was something that the folks in the laboratory were accustomed to doing. But they were accustomed to doing it at Los Alamos, not out at Eniwetok. So it was a big effort for them. But building a device — Mike — that used cubic meters, that is, thousands of liters, of hydrogen and deuterium was a real step up in scale for the industry. So Ferdinand Brickwedde took it upon himself and the Bureau of Standards to have their experts work with industry to provide the design of the liquid hydrogen, liquid deuterium plant, which I think was probably at Boulder, Colorado, but I'm not sure. I'm sure it's in the histories — probably Richard Rhodes has it.
It was not a big deal — something that was obviously possible to do and got done routinely. The big deal in the bomb itself was the compatibility of the hydrogen with the atomic bomb, and supporting such heavy masses — uranium, lots of uranium, contained in the cold container, and so on — while at the same time minimizing the heat leak, because if you just had them sit on, say, a solid support, then there would be a lot of heat conducted through that support. Usually you use a vacuum envelope, like a thermos bottle, so I built a vacuum envelope into this device. So there were innards inside it, and outside there was a case supporting the radiation case, and that was eight inches of solid steel that didn't have to be cold.
What I wanted to do was to have very long support rods that would carry these tens of tons of load. The heat flow across a support rod that goes from room temperature down to 20 degrees Kelvin, which is the temperature of liquid hydrogen, is proportional to the cross-sectional area and goes down inversely as the length. There's only a little bit of space built into the bomb between the warm surface and the cold surface across the vacuum — maybe a centimeter. There would have been many kilowatts of heat transferred even if I had used stainless steel for that support.
I built into the heavy wall long, diagonal holes so the bolts, with vacuum around them, could go for a meter or more, coming down like that, angling in only slightly and being attached to the heavy cold component at the bottom ends. The heat transfer would be 100 times less than if they were a centimeter long. We chose stainless steel, which has pretty good thermal resistivity — all the things that you do routinely. Marshall Rosenbluth said I was unique among the people in understanding both the theoretical aspects and the practical aspects of building these things. In fact, since I built things myself with my own hands, I could do that very well.
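The scaling Garwin describes — heat conduction proportional to cross-sectional area and inversely proportional to length — is easy to put numbers on. A rough sketch, where the load, working stress, and conductivity integral are illustrative assumptions of mine, not the actual Mike design values:

```python
# Conductive heat leak through a support: Q = (A / L) * integral of k(T) dT.
# Illustrative assumptions (not the actual design values):
#   - 50 metric tons carried at a 200 MPa working stress in stainless steel
#   - integrated thermal conductivity of stainless from ~20 K to 300 K
#     taken as roughly 3000 W/m (a handbook-order figure)

LOAD_N = 50_000 * 9.8     # 50 metric tons, in newtons
STRESS_PA = 200e6         # assumed working stress of the support material
K_INTEGRAL = 3000.0       # W/m, integral of k(T) dT over the temperature span

area_m2 = LOAD_N / STRESS_PA   # total support cross-section required

def heat_leak_watts(length_m):
    """Heat conducted down supports of total area area_m2 and given length."""
    return area_m2 * K_INTEGRAL / length_m

print(heat_leak_watts(0.01))  # ~1 cm path: hundreds of watts with these numbers
print(heat_leak_watts(1.0))   # meter-long diagonal bolts: 100 times less
```

The absolute wattage depends entirely on the assumed load and stress; the factor-of-100 ratio between a one-centimeter and a one-meter path is the point.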
So it was easy to go back and forth between what is required to produce the primary — which is a pretty standard primary explosive weapon, nuclear explosive weapon — and keep it from getting cold, because the explosive wouldn't work right or it would crack, and transport the radiation to the secondary, to compress the secondary, across the vacuum barrier, because the primary wasn't in vacuum; the secondary was in vacuum. So you had to transport the soft X-ray radiation across this region in order — and that meant you couldn't use steel walls in that region; you had to use other things.
There are many other approaches that you could use, and of course I thought of several of them. One, you could use steel walls, but just at the time that the primary was about to explode, you could have these things pulled out suddenly by propellant. Of course you would have gas rushing into the secondary, but you would arrange that should be hydrogen gas so it wouldn't add much impediment to the flow of radiation. But that would be something else to design and to verify, so you'd rather have some light material providing the window and the vacuum barrier at that point. So we did that.
It was just an ordinary design requirement. You figure how many cubic meters of hydrogen you need, how many would still be in the storage container, how many times you might have to fill it before you got it right, and then Ferdinand Brickwedde and I decided how much production capacity there should be. I think it was a cubic meter per hour for deuterium and two cubic meters per hour for hydrogen. One could imagine shipping the liquid hydrogen and liquid deuterium in a ship across the ocean, but I think we shipped bottles of compressed gas and had the liquefier on Eniwetok — but I don't know. Jay Wexler would know about such things.
They had to build a processing factory?
Right, to produce the liquid hydrogen there.
I read at some point that Styrofoam was used in the design of the bomb, and that at one point Styrofoam itself was classified. Is that true?
Styrofoam is certainly used in some bombs, and I was well aware of Styrofoam, since I used it in my work at the University of Chicago. In fact, there was considerable controversy, because I would buy from Monsanto, or whoever, Styrofoam logs which were two or three feet on a side, square cross-section, and eight feet long. I set up a hot wire on the lab bench — that is, a nichrome wire like that in your toaster, but straight rather than coiled — and put a current through it through a transformer, so this wire would be red hot. Then you could just slide the log along the bench, and it would cut through it like a hot knife through butter, without all of the crumbs that you get if you try to machine the Styrofoam.
I found also a glue that would glue Styrofoam without eating it away and that wouldn't become brittle at low temperatures. So I developed the technique there of using these Styrofoam containers — just rough machined in that way — for using liquid nitrogen around targets for our beams. That was very good, because just as in the hydrogen bomb, you didn't want the beam coming through heavy steel walls in a thermos bottle or whatever, and the Styrofoam was very light.
Then I began to devise a means for using them for holding liquid hydrogen. That's a different kettle of fish, because liquid nitrogen does not condense air. Since air is mostly nitrogen, it condenses water from the air, but the Styrofoam is a good enough insulator that it isn't cold on its surface. However, if you have liquid hydrogen in a Styrofoam container, and if the Styrofoam were not totally closed cells, then air could penetrate the Styrofoam, and it would freeze around the liquid hydrogen. In fact, oxygen freezes more readily than nitrogen, so you might even have oxygen saturating the Styrofoam near the hydrogen, and that could be dangerous if it warmed up. You'd have hydrogen/oxygen. So I used an aluminum foil liner under those circumstances, and I put an aluminum foil outside on the Styrofoam so that the air couldn't get to it.
I was not allowed to use the Styrofoam containers for liquid hydrogen, because we had a safety committee, and Leona Marshall was on the safety committee, or head of the safety committee. Anyhow, I was denied the possibility of using these. But after I left Chicago in December of '52, Leona found it a better idea, so she started using them herself.
There was no problem with the cryogenics, except you had to know there was no problem — that you could just design with it another parameter. At Chicago, I had to do some experiments — this was in collaboration with Jay Orear, who's a physicist who then went to Cornell for a long time — Columbia University and then Cornell. We wanted to expose photographic emulsion, which was the way to look at very fine details of particle interaction. But we wanted to expose it to nuclear interactions in hydrogen, and photographic emulsion is not sensitive at liquid hydrogen temperature. So I built a hydrogen bomb at Chicago, but it was not the same kind of hydrogen bomb. A bomb to a chemist or a physicist is simply a high-pressure vessel with thick walls that's strong enough to hold the high-pressure gas inside.
This was a cylinder maybe two feet long and six or eight inches in diameter, and it was going to have hydrogen at, I think, 30,000 pounds per square inch — and it had to have thin windows at the ends. So we had little pipes coming out so the windows would be maybe three feet away from the reacting volume. The thin windows were just aluminum foil that was captured, with the seal made with an O-ring. Anyhow, it's a lot of fun to do that, so I designed it — it wasn't my experiment — and invited Jay Orear to put his capsules of photographic emulsion inside, so that a particle-proton interaction in the interior would spray the reaction particles — mesons or whatever — onto the photographic emulsion, which could then later be developed and scanned with a microscope to see what was going on.
When you went to work at Los Alamos, did Teller know or was he told of your work in Chicago with hydrogen and cryogenics?
He probably didn't know about that. Teller was on the faculty at the University of Chicago. He was a professor of physics there, so I knew him quite well. In 1949, when he was trying to recruit people for the hydrogen bomb effort, he sent a memo, which I saw recently. It was a secret memo at the time, but I was given a copy I think by Stan Norris of the NRDC. I will provide you with that memo, because it talks about people in general. They needed to get Hans Bethe on the project. They needed to get Enrico Fermi. And there were a few words of description for the talents of each of these people. I was on that also. So I'll send that to you.
I was just wondering whether Teller knew of your expertise?
No, he just knew that I was Fermi's best graduate student.
So you had nothing specific to do with cryogenics?
No, he didn't know.
Is it correct to say that the greatest part of your work on the bomb had to do with the cryogenic design?
No. No, the most important part was making the design decisions tentatively for putting all of these things together. That is shapes, sizes, temperatures, whatever.
Yeah. But there were, floating around, all kinds of ideas as to how you might do this or that, or what this particular element should be, and as Conrad said — in my few pages of text preceding the sketch, I said, "Well, this looks as if it will work, but the dimensions are subject to adjustment from detailed calculations." I didn't expect that the design would change significantly because it worked well enough, and when it's working well enough, you should leave it alone.
I also understand — of course, everybody understands that the big public debate as to whether the hydrogen bomb should be built or not… Fermi, as I understand it, was one of the people saying it shouldn't be built, and he made various statements about it being a morally unconscionable —
Fermi never said anything publicly, and he never said anything at Los Alamos about that. He said it only when asked in his role as a member of the General Advisory Committee to the Atomic Energy Commission. But at Los Alamos he just worked to see whether it could be built, and to help build it if it could be.
The statements that he made at the General Advisory Committee, at least as they were described to me, were —
It's available in his words because it was written. It was a minority report of Fermi and I.I. Rabi and said that the hydrogen bomb was an evil by its own existence because there was no limit to the size. You can infer from that that it shouldn't be built. I don't know that he… He was against building it, and he would be against building not only the Super itself, but also the equilibrium super — that is, the radiation implosion. But a decision was made by the president, and Fermi had come from Italy, and he was a citizen by '52 or '51, and he felt it was his responsibility to help carry out what the political leaders had agreed.
Did any of the people who felt the bomb shouldn't be built, did they ever try to dissuade you from working on it?
No. They didn't know I was working on it, I expect.
Fermi did, of course.
Yes, but he never said anything to me or anybody else. Only told the General Advisory Committee because he was asked.
Okay, enough hydrogen bomb. One thing I thought it would be helpful to do for purposes of overview was to describe your career at and within IBM — leave aside all of the government related work — because at least to me, the biggest part of the story that I don't know about was what you did for IBM. When you sent me stuff recently about touch-screen computers and laser printers, I was surprised. I always assumed you must do something for IBM, but I never heard about it.
I really had three careers simultaneously. One was working with the US government on military technology, beginning with nuclear weapons from 1950 until about 1953 or so, and continuing with lesser involvement up to the present, working on other things — missile defense, air defense, radars, and much else — and on arms control, beginning around 1958. I also worked for the government on intelligence — ultimately space-based intelligence, imagery, and electronic intelligence, but many other things in the intelligence field. So that's one career.
Then I had my own research in physics, which was first at the University of Chicago from 1947 until 1952. I worked in nuclear physics and particle physics. In addition to some results in discoveries in physics, I devised a lot of technology.
When I went to IBM, it was to continue my research in physics, but in a new field — to go from particle physics to low-temperature physics, that is, liquid and solid helium and helium-3, and superconductors. As I mentioned, these fields had not had much vigor — much injection of new technology — after the war, unlike nuclear physics and particle physics, so I thought that I could probably make some contributions there.
IBM was starting this new laboratory, which would have solid-state physics and condensed-matter physics, and low-temperature work fit right in. So when I went to Columbia, the first thing I did was to see what kind of supply of liquid helium we would have, and for a while we brought it a few blocks from a laboratory in the basement of the physics department at Columbia — Pupin Hall — from a Professor Boorse, as I recall.
That was a time when there was an expanding use of helium and a looming shortage. Helium is obtained as an impurity in natural gas, and sometimes it's just sent out with the natural gas, reducing the heating value, and it goes right through the burner into the atmosphere. I thought this was a shame and, perhaps unwisely from the point of view of economy, fomented a lot of work to try to separate the helium and reinject it into the ground in wells as a kind of strategic reserve. However, the political process got hold of this, and it got turned into a pork barrel for Kerr-McGee, so I'm not sure it was the right thing to do.
In any case, we had a recycle system in our laboratory: we had a rack in the basement and a large rubber bladder, and helium from all of the cryostats — cryogenic equipment — throughout the laboratory, instead of being allowed to boil off into the atmosphere, would be recovered, compressed, and liquefied in our basement. Eventually, after a few years, we gave it up, because everybody else in the world was just buying helium from Air Products and allowing it to vent to the atmosphere, so what sense did it make for us to do that?
I began my own work with superconductors. I had a couple of graduate students —
What is a superconductor?
Superconductivity was discovered in 1911 in Holland by Kamerlingh Onnes. As he cooled some metals, for instance lead or mercury, below the temperature of liquid air and down toward the temperature of liquid helium, which is about four degrees Kelvin — 4.2 degrees Kelvin — all of the electrical resistance vanished, and it vanished suddenly. The resistance of pure copper may fall by a factor of 100 between room temperature and four degrees, but no matter how cold you make it, it still has that residual resistance. But lead or mercury or niobium or many other metals lose all resistance.
So Kamerlingh Onnes found that, when you cooled certain metals, they lost all resistance. That meant that if you had a loop of the material and you cooled it in a magnetic field — for instance, a small magnetic field — it would lose all its resistance, and the easiest way to tell would be to move the glass thermos bottle in which this sat out of the magnetic field, or cut off the current for the magnetic field, and you would find that current persisted in this loop of metal — and persisted for days, months, years, centuries — would persist forever. Well, almost forever.
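The persistence follows from the decay law of an inductor-resistor loop, I(t) = I0·exp(−Rt/L): any residual resistance kills the current on the timescale L/R, while zero resistance preserves it exactly. A toy illustration — the inductance and residual-resistance values below are made up for scale:

```python
import math

def current_after(i0, inductance_h, resistance_ohm, t_s):
    """Current in an inductor-resistor loop: I(t) = I0 * exp(-R*t/L)."""
    if resistance_ohm == 0.0:
        return i0  # a true superconducting loop: the current persists
    return i0 * math.exp(-resistance_ohm * t_s / inductance_h)

YEAR = 3.15e7  # seconds

# A one-microhenry loop with even a picohm of residual resistance loses
# its current within a year (time constant L/R = 1e6 s, about 12 days):
print(current_after(1.0, 1e-6, 1e-12, YEAR))
# With strictly zero resistance, unchanged after a century:
print(current_after(1.0, 1e-6, 0.0, 100 * YEAR))  # 1.0
```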
In fact, around 1962 or '63, the people at Bell Laboratories had made some high-field superconductors in the form of wire — molybdenum-rhenium; rhenium is a rare element. I had a friend at Bell Labs, so I got a spool of this stuff, and I wound a coil and took it in an ordinary thermos bottle full of liquid helium on the subway — probably not so wise — down to a hotel in midtown Manhattan, where the American Physical Society was having a meeting.
In this coil I had the superconducting wire, but I had arranged also a convenient switch at liquid helium temperature — easy to make — so that a little loop of the wire went through an inverted test tube under the liquid helium, and I had an ordinary heater in that test tube. When I wanted to put current into this coil to make a big magnetic field, I took a couple of dry cells and connected them across the wires that came to room temperature, but I had to make sure that the current didn't go into this little short circuit that closed the coil. So I ran the heater in the tube so that the little bit of molybdenum-rhenium wire was normal and had a big resistance, and the current went from the outside down into the coil and made a big magnetic field — an intense magnetic field. Then I stopped the heating current and took away the dry cells, and the current that was in the coil had no difficulty circulating through the now also superconducting little shunt, and it, too, continued forever.
I could show there was a current in the coil. I had a pair of pliers with me, and I put the pliers near the glass thermos bottle wrapped in protective tape, and they just stuck out like that against gravity, because the magnetic field was so strong. Imagine it's a great big iron filing that is mapping the magnetic field.

So I worked in physics on superconductors. I looked at the use of superconductors for radio frequency cavities in accelerators. The Stanford two-mile-long linear accelerator was just being conceived at that time by Panofsky, whom I knew from my work with the President's Science Advisory Committee, and I had a student, Myriam Sarachik. Myriam Sarachik's first job [for me] was probably to make a microwave cavity by getting from the glass blower a Florence flask — remember that from high school: it's a round-bottomed flask with a neck — and putting another neck on the other end of it, so it wouldn't hold liquid anymore, and then evaporating metallic lead on the inside of this system, this spherical flask and the two necks. That's done in vacuum; no problem. Absolutely routine.
Then we would have a microwave signal generator with a little probe sticking into one of these necks — an adjustable distance — and we had sticking into the other neck a little probe that went to an oscilloscope. Now, you could put in the signal, and the oscillation was a few billion times a second for the radio frequency signal.
You could put microwave energy in this, and then take away the signal generator, or shut it off. The microwaves would oscillate a couple of billion times a second and gradually decay. Unlike a steady current — DC current — for which there would be a magnetic field that persisted forever, the microwaves did have loss and would decay and get converted into heat. But I had some ideas.
I thought that, if one introduced scatterers into the superconducting material, it wouldn't affect the DC current, but you could inhibit the motion of the electrons — the normal electrons which cause the resistance — so much that the microwave loss would be reduced, and these superconductors could be more practical for building particle accelerators using radio frequency, which is what they were going to do. So I made a special trip to California, and I talked with Panofsky and Bill Fairbank, who was a well-known low-temperature physicist at Stanford, to see whether they could possibly benefit from having superconducting cavities for their two-mile accelerator. It was somewhat premature, because they had a defined schedule, so they built the accelerator with copper. But now we have accelerators with superconducting cavities.
Does Fermi Lab use them?
Yeah, I think Fermi Lab does, and there's a Jefferson Lab that has a continuous beam facility, which is more difficult because you have to keep the power in the cavities the whole time since there would be more heating. And at CERN in Geneva, they're building a system with large superconducting cavities. It's a useful technique.
Then we went on to do some real physics of superconductors. Myriam Sarachik built an apparatus where we had little radio-frequency magnetic fields on the outside of a cylinder of thin-film lead, and on the inside we had a complicated pick-up coil, the whole idea being that we could look at very tiny fractions of the magnetic field leaking into the interior. She studied the so-called penetration depth as a function of temperature, and that may even have been her PhD thesis. She was president of the American Physical Society just last year. She's a professor at City University of New York.
My work on liquid helium and helium-3 began also in 1953, and there I worked primarily with Haskell Reich. He was a new PhD from Columbia University and a student of either Rabi or Kusch. We built apparatus for holding liquid helium-3 and also compressing it to make it solid. Although helium-4 boils at about four degrees Kelvin, helium-3 boils at about three degrees Kelvin. So to make helium-3 liquid, in equilibrium with its vapor at one atmosphere of pressure, you need to cool it below the temperature of liquid helium — that is, of liquid helium-4 — and that is done normally by pumping on the helium-4. Just the way water's boiling point goes down as you go to higher altitude, the temperature of a helium bath is reduced as you reduce the pressure over it with a mechanical vacuum pump and ultimately with gas ejection pumps.
It was conventional in laboratories all over the world to have liquid helium Dewar containers for storage, and then either glass or metal Dewar vessels — that is, thermos bottles — for doing the experiments. We mostly made ours of metal at the Watson Laboratory. But to get solid helium-3, you have to apply pressure — I think about 40 atmospheres, so about 600 pounds per square inch — and that has to be communicated from room temperature down through the liquid helium to the helium-3.
We wanted to study helium-3 solid with a big change of density. Now, to double the density of steel would require about 30 million pounds per square inch, or about 2 million atmospheres, but to double the density of helium-3 requires only about 20,000 pounds per square inch, or about 1,500 atmospheres. That's a lot of pressure. What you have in the ordinary compressed gas cylinder that's trucked around and you see in laboratories in this country — and in the world — is about 2,000 pounds per square inch, or about 150 atmospheres. So we needed 10 times that pressure, and that meant that the wall thickness relative to the size of the container had to be 10 times as large, and this had to be at helium temperature.
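The tenfold wall-thickness scaling is just the thin-wall hoop-stress formula, t = P·r/σ: required thickness grows linearly with pressure for a given bore and allowable stress. A sketch with made-up numbers — the stress and radius are illustrative, and for walls as thick as the bore a thick-wall treatment would really be needed:

```python
def wall_thickness(pressure_psi, bore_radius_in, allowable_stress_psi):
    """Thin-wall hoop-stress estimate: t = P * r / sigma.
    (When t becomes comparable to r, a thick-wall formula is needed;
    this only shows the linear scaling with pressure.)"""
    return pressure_psi * bore_radius_in / allowable_stress_psi

STRESS = 30_000.0   # psi, assumed allowable stress for the steel
RADIUS = 3.0        # inches, illustrative bore radius

t_gas_cylinder = wall_thickness(2_000, RADIUS, STRESS)    # ordinary cylinder
t_helium3_cell = wall_thickness(20_000, RADIUS, STRESS)   # the helium-3 vessel
print(t_gas_cylinder, t_helium3_cell)   # 0.2 vs 2.0 inches
print(t_helium3_cell / t_gas_cylinder)  # 10.0: ten times the pressure,
                                        # ten times the relative wall
```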
There was a well-developed, high-pressure piping system available, so we could apply the pressure with helium-3 — in fact, we applied it to a very high pressure simply with a hand pump. It's a nice field to work in. This hand pump was an oil pump. You pump up oil. The oil was in a pressure vessel — very thick wall pressure vessel — with helium above it. In fact, we used mercury because helium dissolves in the oil.
Anyhow, we had a means for applying pressure to the helium-3, and we had to develop means for keeping it cold, because we wanted to work for a long time at very constant temperature. Our main tool for finding out what was happening in the helium — putting radio frequency pulses down into the sample — was nuclear magnetic resonance. So I needed to learn a new field of so-called spin echoes. Fortunately, the person who had invented spin echoes at the University of Illinois had been hired by IBM and was in the next laboratory over. His name will occur to me in a minute. [Erwin Hahn]
But this was a wonderful technique. Up to that time, magnetic resonance had been done by the creation of a steady signal: using an intense, steady magnetic field, one could sweep a resonance from some sample — widely used in chemistry as well as in physics — across the exciting radio frequency magnetic field. So a sensitive radio receiver would pick up this change of signal, because of the tiny additional signal provided by the nuclei spinning in the magnetic field.
This was annoying, because there was the intense radio frequency field from the experimenter in addition to the signal coming from the nuclei, and various schemes were introduced to try to balance that out. But Erwin Hahn invented the spin echo system, whereby the radio frequency field was applied in a pulse, perhaps 100 microseconds long, at whatever frequency was being used — four megahertz, say. For nuclear magnetic resonance imaging, this is now done at 60 megahertz or so.
So there'd be a radio frequency pulse applied, and a time later — maybe a whole second later — there'd be a similar signal applied for twice the length of time, and then the radio frequency generator would be removed or shut off, and a second later there would come, out of nowhere, into a coil near the sample, a beautiful echo. The magnitude of the echo indicated how many protons — or whatever else — there were in the material. The frequency of the echo depended upon the magnetic field — 4.2 kilohertz per gauss of magnetic field for a proton, and a much lower frequency for a deuteron or a chlorine or whatever.
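The echo frequency is the Larmor frequency, f = γB, with γ the gyromagnetic ratio of the nucleus — the 4.2 kilohertz per gauss Garwin quotes for the proton. A quick sketch; the table values are rounded handbook figures I'm supplying, not from the interview:

```python
# Larmor frequency: f = gamma * B. Gyromagnetic ratios in kHz per gauss,
# rounded handbook values (the proton figure is the 4.2 kHz/gauss quoted).
GAMMA_KHZ_PER_GAUSS = {
    "proton (1H)": 4.258,
    "deuteron (2H)": 0.654,
    "helium-3": 3.244,  # magnitude; helium-3's ratio is actually negative
}

def larmor_khz(nucleus, field_gauss):
    """Precession (echo) frequency in kHz for a given field in gauss."""
    return GAMMA_KHZ_PER_GAUSS[nucleus] * field_gauss

# In a 1 kilogauss (0.1 tesla) magnet:
for nucleus, gamma in GAMMA_KHZ_PER_GAUSS.items():
    print(f"{nucleus}: {gamma * 1000.0:.0f} kHz")
```

Because each isotope has its own γ, the echo frequency by itself identifies the nucleus — which is the "exquisite sensitivity to elemental and isotopic composition" described below.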
So you had, in this way, exquisite sensitivity to the elemental composition and the isotopic composition of the sample, because chlorine-35 and chlorine-37 have totally different frequencies. People had a field day, because they could then do experiments to wiggle one kind of element in the compound or sample. They could measure in detail the rate at which the spins actually diffused across the magnetic field, if the magnetic field wasn't uniform.
So we did all that with our helium-3. The things that we measured most were the relaxation times, of which there are two — the relaxation time of the magnetization, and how fast the spins get out of synchronism with one another. The other quantity is how fast the spins move across a magnetic field by random walk: they have different frequencies in different portions of the sample, because the magnetic field varies linearly from one part of the sample to the other. So we had a magnet that we had borrowed from Columbia University, at rather low frequency, and we had our cryogenic apparatus — the Dewar vessel with the long tail that went between the poles of the magnet. We shimmed the magnet to provide a field gradient, applied the pulses with different delays between them — we had pulsers that had to be set by means of dials in those days to give a certain delay — and we could measure the amplitude of the echo.
The measurement that Haskell Reich and I made for some years was of the behavior of the echo as a function of temperature and pressure — that is, density — as a function of time between the applied pulses, and as a function of field gradient, and from that we could get very accurate measurements of the diffusion coefficient.
What's happening here is that in solid helium-3, it isn't necessarily the atoms that are popping around and exchanging with one another in the crystal structure, because the only thing different about two atoms is the direction of their spins. The electrons form closed shells; the atoms have no personality. So you can't tell the difference between the atoms diffusing and the spins exchanging.
We were really interested in this, and we found that, as we went to low temperature, the diffusion coefficient didn't go to zero. With everything else, the diffusion coefficient goes to zero — that's why honey and molasses and all that get stiffer, more viscous. But in the case of helium-3, you have a zero-temperature — a zero-point — diffusion coefficient, and you can measure that as a function of density.
So to make a long story a little bit shorter, we found that, as we squeezed the solid helium-3 to bring the atoms closer together, you would think it would be easier for them to exchange spins or locations, but in fact the diffusion coefficient went down gradually by a factor of a million, and we could measure that behavior. So while we were rebuilding our apparatus to go from 1,000 to 2,000 atmospheres, I did a computer experiment on 20,000 spins, assuming that each one interacted only with the spins in its neighborhood. I took a random orientation of the spins to begin with — this was 1963 or so.
On the IBM 704 computer at Columbia University, I could do an experiment in which each spin was then set to the mean magnetic field of its neighbors. It took about five minutes to relax all these spins multiple times so that any wrongness would diffuse out of the lattice. No matter how I started, I would find I had the same relative orientations — the same spin energy of the system. So it had settled down to some steady, regular state, but I didn't know what it was, so I thought about doing a so-called Fourier transform on this system, and it would've taken about four hours of computer time, and we were being charged. I'm very stingy with other people's money. I didn't want to spend a couple hours — probably $1,000 of IBM's money — on computing for this.
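The relaxation procedure he describes (set each spin to the mean field of its neighbors, sweep until any initial wrongness diffuses away) can be sketched on a small one-dimensional ring. This is a toy stand-in, not the 1963 program, which used 20,000 spins in three dimensions:

```python
import random

def relax(spins, sweeps=5000):
    """Sweep the lattice, setting each spin to the mean of its two ring
    neighbors: a 1-D toy of 'set each spin to the mean field of its
    neighbors'. Repeated sweeps diffuse any initial disorder away."""
    n = len(spins)
    for _ in range(sweeps):
        for i in range(n):
            spins[i] = (spins[(i - 1) % n] + spins[(i + 1) % n]) / 2.0
    return spins

random.seed(7)
final = relax([random.choice([1.0, -1.0]) for _ in range(50)])
spread = max(final) - min(final)
print(spread)  # essentially zero: the ring has settled to one steady state
```

However the ring is started, it relaxes to the same kind of uniform steady state, which is the flavor of what he observed on the 704.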
But we wanted to see whether we had a frozen-in spin wave — whether this thing would go antiferromagnetic. What we were finding was that the exchange interaction, the diffusion coefficient, was getting much smaller, and we thought maybe it would go negative. That happens when you have a frozen-in spin wave: instead of having things relax to the field of their neighbors, for some reason they relax to opposite directions. And we couldn't tell. We thought about that, so I thought I would do this computer experiment, this computer simulation. But before I found the spin wave — which I then did in a different way — we had rebuilt the apparatus, and we found that it wasn't true — there was no frozen-in spin wave. The diffusion coefficient just got continuously smaller as you went to higher and higher density. So we needed a theory, and Andre Landesman, a French physicist, was just in the process of translating into French — really true — a book by Anatole Abragam on nuclear magnetic resonance. Abragam was a French physicist, but he knew English and thought he'd have a better sale, so he wrote the book in English, and Landesman was translating it into French. This was his field, too.
He asked whether he could come to New York to work with me at the laboratory, so sure, he came for a year. We became friends with him and his wife and their two children. We weren't quite through, so I went in the summer of 1963 — Lois and I went to Paris for a month, and I worked at Orsay, I guess it was, to try to finish the paper that we had been writing. So he helped with the experiments, and then we did this theory.
In 1963, I was on the President's Science Advisory Committee for a term that began with President Kennedy, and so it probably began in 1961 to '64 — or maybe '62 to '65. I used to sit next to John Tukey on PSAC. He was an interesting person, but there wasn't much time to talk to people in this meeting with 18 people around the table. I sat next to him because he would always have a package of dried prunes, so I could eat his dried prunes.
He was the person who introduced the citation index so that you could look things up in there. There are references, of course, in a paper, but you could extend your search in a very ingenious way, because later articles which cited earlier articles would have these citations collected and put into big books — citation indices. So if I wanted to look at my paper on a fast-coincidence circuit of 1950, I could look up my paper in the ordinary literature. I could see what citations I had in it — see Rossi or other people who had gone before. But if I look in a citation index of 1990, for instance, or 1960, and I look for my paper and see who cited it, then I can see what's been done since then. I had proposed this — not that, but some arrangement of the literature like that — in a meeting on electronic libraries at Woods Hole.
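The inversion he describes, from each paper's own reference list to an index of the later papers that cite it, is a simple map inversion. A sketch in which the paper titles other than his 1950 fast-coincidence paper and Rossi are made up for illustration:

```python
from collections import defaultdict

def build_citation_index(references):
    """Invert a map of paper -> [papers it cites] into the citation
    index: cited paper -> [later papers that cite it]."""
    index = defaultdict(list)
    for paper, cites in references.items():
        for cited in cites:
            index[cited].append(paper)
    return index

# Illustrative entries only; 'Later Paper A/B' are hypothetical.
refs = {
    "Garwin 1950": ["Rossi"],
    "Later Paper A": ["Garwin 1950"],
    "Later Paper B": ["Garwin 1950", "Rossi"],
}
index = build_citation_index(refs)
print(index["Garwin 1950"])  # ['Later Paper A', 'Later Paper B']
```

The ordinary reference list lets you search backward in time; the inverted index is what lets you search forward.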
Anyhow, for months — two days every month in PSAC and I'm sure in all the other meetings he went to — John Tukey would have a stack of cards, a stack of papers, in his field, and he would go through them and write down the citations, and then they were given to other people to punch up on cards or whatever — to begin to make citation indices. This time John Tukey was writing Fourier sums — that is, he was taking some function defined at various points, 128 points or whatever, and multiplying by the cosine of the angle or the sine of the angle, the sine of the double angle, the triple angle, and whatnot. What that does is to give you the steady sinusoidal components of a signal.
For instance, if I put my hand down on the piano keyboard and I get a crash, you'll hear that, after a while, a lot of the sound has died away, and there will be the low tones that are continuing. But if you Fourier analyze that — that is, analyze it as Mr. Fourier did for the first time — then you have this complicated time behavior that you could see on a plot, but it's really the sum of individual sinusoids. That's what a Fourier transform is. That's what I was trying to do with my computer experiments, to see whether — not in time, but in space — there were spin waves frozen in. I would've looked for the Fourier component, of which there are 20,000 for 20,000 spins, with the highest amplitude, and that would be in a certain direction with a certain spacing, and that would tell me what was frozen in.
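The decomposition into steady sinusoids can be demonstrated with a naive, O(n²) transform of a two-tone "chord"; this sketch makes no claim about his actual computation:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: n^2 multiplications."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A 'chord' of two pure tones, at bins 3 and 7 of a 64-point record.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n)
          + 0.5 * math.sin(2 * math.pi * 7 * t / n)
          for t in range(n)]
mags = [abs(c) for c in dft(signal)]
# The two strongest components (positive frequencies only) are the tones.
peaks = sorted(range(n // 2), key=mags.__getitem__, reverse=True)[:2]
print(sorted(peaks))  # [3, 7]
```

The complicated-looking waveform is recovered as exactly its two ingredient sinusoids, with the louder one showing the larger amplitude.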
I asked John Tukey whether he knew something about Fourier sums that I didn't know, because I was doing this experiment with 20,000 spins, and it would take 400 million multiplications to get a Fourier transform. He said he probably did, because a person whom I didn't know at that time — no, I did. I did. I.J. Good, who was a British cryptographer at Government Communications Headquarters — GCHQ — in Cheltenham, England, had an active mind and he was interested in lots of things. He had devised a scheme whereby you could double the number of points in a Fourier series, and it would not result in a factor-4 increase in workload.
If you have n data points, it takes n-squared multiplications the ordinary way. Doing the Fourier transform the new way takes only n·log(n) — that's the log to the base two. In my case, with 20,000 spins, the ordinary, straightforward way of doing this takes 20,000 times 20,000 — 400 million multiplications. The new way takes only 20,000 times the log of 20,000, which is about 14 — under 300,000 multiplications. So I would save a factor of well over a thousand if I could use this fast Fourier transform.
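The arithmetic he gives can be checked directly; with the base-2 logarithm the saving for 20,000 points comes out around 1,400-fold:

```python
import math

n = 20_000
direct = n * n                 # straightforward transform: n^2 multiplies
fast = n * math.log2(n)        # FFT: about n * log2(n) operations

print(direct)                  # 400,000,000
print(round(fast))             # ~286,000
print(round(direct / fast))    # ~1,400-fold saving
```

The ratio is simply n divided by log₂ n, so it only grows as the transforms get bigger, which is why the FFT opened up million-point problems.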
I immediately thought of all the other things I was doing with the government on submarine quieting and spacecraft design and things like that — and optics — and how useful it would be to have this Fast Fourier Transform available. So I asked John Tukey when it would be published. He said, "Oh, maybe a year or two years." He was working with Mr. Batchelder. [RLG is not sure] I asked him whether it would be okay if I tried to get somebody to work with him to do it faster. I don't remember his answer. It couldn't have been 'hell no.'
When I got back to New York, I asked the person in charge of mathematics at Yorktown Heights — who in fact just died last week — Herman Goldstine. Goldstine had worked with von Neumann on producing the first stored-program electronic computer — that's not exactly right — at Princeton and had come to IBM after that. So he identified a very good numerical analyst, Jim Cooley, and after a good deal of prodding by me, Jim Cooley went down to Princeton to see John Tukey and devised a very clever means of actually doing this Fast Fourier Transform so that the entire calculation would fit into the memory that the data points themselves occupied to begin with. In those days, that was important because storage was very costly.
By that time, of course I had done the experiments on helium-3…
Did IBM make any money out of Fast Fourier Transforms?
I'll get to that in a moment. By that time I had already done the experiments on solid helium-3, and I had devised a different way of looking for the frozen-in spin wave. Because I knew it was periodic — that is, it would be the same everywhere in the lattice — that meant that there would be a local amplitude times some single sinusoidal modulation, and all I needed to do was to look for the direction and the period — that is, the length — of that sine wave. I didn't have to do all this Fourier transforming. I just needed to calculate the energy that such a spin wave would have for different periods and directions, and minimize that energy by adjusting the direction and the length of the sine wave.
Of course, that's exactly how we did our data reduction. We had a minimization program that we had programmed for the computer — a so-called variable metric minimization program — and it didn't care whether it was working on experimental data to get the best fit of the field gradient, the diffusion coefficient, and the relaxation time, or whether it was working on computer-generated data for the spins in order to get the best fit of the X and the Y and the Z component of this sine wave. So that's how I did it. It was really very easy.
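As a one-dimensional stand-in for that fit (his program was a variable-metric minimizer adjusting direction and wavelength in three dimensions; everything below is illustrative), a coarse grid search over the period of a trial sine wave does the same job on synthetic data:

```python
import math

def wave_mismatch(spins, period):
    """Sum of squared residuals between the data and a unit sine of the
    given period: a stand-in for the 'energy' being minimized."""
    return sum((s - math.sin(2 * math.pi * i / period)) ** 2
               for i, s in enumerate(spins))

# Synthetic 1-D 'lattice' carrying a frozen-in wave of period 25.
data = [math.sin(2 * math.pi * i / 25) for i in range(200)]
best = min(range(5, 60), key=lambda p: wave_mismatch(data, p))
print(best)  # 25: the fit recovers the wave's period
```

Minimizing a mismatch over a handful of wave parameters is vastly cheaper than computing all 20,000 Fourier components, which was the whole point.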
I wrote 100 or 200 letters to people whose papers I had read or whom I knew from my government work who really could use this FFT — Fast Fourier Transform — in radio astronomy, in submarine quieting, in structural design, in optics — and it was a revelation to them. I also found out that there was a long history to the FFT which had been forgotten — that in the 19th century, Gauss had devised something like it, and that in the office next to me at the IBM Watson Laboratory at Columbia, L.H. Thomas had devised something like it in the 1930s. But with the introduction of computers, programming was so hard and arithmetic was so easy that people had forgotten all about these efficient schemes. Anyhow, I published a paper in 1966 on the history of the FFT — that's something of which I know only my part, but other people have contributed more scholarly analyses.
So no, IBM didn't, and we talked about it a good deal. In those days, it wasn't clear that you could patent programs or algorithms, so it wasn't clear that we could make money out of it. We decided that, contrary to what you might think, when you reduce the amount of computer effort by a factor of 2,000 you're not cutting into your market. What you're doing is greatly expanding the market. So instead of doing a 200-point Fourier transform, people will go to 20,000 points, 100 million points — transform a whole picture.
So they [IBM] never did try to protect it; they tried to publicize it as much as possible. That happened later, too — I was, as I mentioned to you, on the IBM Science Advisory Committee from its inception until its demise. I was for a long time the only internal member, except for our chief scientist; the others were all outside scientists.
The same thing happened with the scanning tunneling microscope, which was invented in the IBM Zurich laboratory and received a Nobel Prize in 1986. Anyhow, there we thought long and hard: Should IBM try to commercialize this? Should it try to get a license on the patents? We decided it was such a fundamental advance that the best thing for the company to do in this case would be simply to publish it and not to assert any intellectual property rights.
Jim Cooley retired only about five years ago. He may still be around. Very interesting innovation. John Tukey died about two years ago, unfortunately. He was a wonderful person. I had first worked with John Tukey and other people mostly from Bell Laboratories on this committee chaired by William O. Baker to look at the National Security Agency.
Anyhow, that was my physics in superconductors — and I didn't do much in superconductors; it was mostly my students, and they didn't do all that much either — and the physics of liquid and solid helium-3, where Haskell Reich and I did a lot. Over the years we had quite a few papers.
On the superconducting business, the object was to make superconducting computers?
No, it was just physics. IBM wanted to be renowned for the quality of its research, so it had pure research and applied research, and I was nominally in pure research. But I had a great interest in applications, I'm about to tell you. One of the first things I did when I went to IBM…
Among the first things I did when I went to IBM, I was asked to look at the time-recording equipment part of the business. At that time — you probably noticed it when you went to school — the clocks on the walls every hour or so would shudder and be adjusted to the right time. This was done by carrier current in the few-kilohertz range over the power line. Each clock just plugged into the wall, but the entire power-line system in the school, in the building, would have a generator of audio signals connected to the power line, and it would put a few volts of audio on top of the 110 volts of 60 Hz. Each of the clocks had a little coil and a resonant circuit and a thyratron — that is, a gas-discharge tube — so it would reset the minute hand to the proper time when this signal came along.
They had clever people there, inventors, who had taken this to a new level by being able to provide thousands of different commands and to select different recipients for the commands. So you might control a thermostat and set it to different temperatures. You might control the lights in a shop window. You might pull down the blinds by a motor or flush toilets.
That was done by having maybe two coils and a little clock motor in each of these things that was being controlled. The first signal would start the clock motor going for a time determined by the timer, and it would rotate a cup; then a second signal would come along and move a little spring-loaded lever at a slot in the cup, selecting that position. So if you wanted to send command number 9,000, you would have two cups, each of which had 100 possible exits, and you would choose the 90th exit from the first and the last exit from the second, and that would be command number 9,000, and it would do whatever that one was supposed to do.
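The two-cup selection is just a base-100 encoding of the command number; a sketch (the function names are mine):

```python
def to_cups(command):
    """Split a command number 1..10,000 into exits on two 100-way cups."""
    assert 1 <= command <= 100 * 100
    first, second = divmod(command - 1, 100)
    return first + 1, second + 1      # exits numbered 1..100 on each cup

def from_cups(first, second):
    """Recover the command number from the two chosen exits."""
    return (first - 1) * 100 + second

print(to_cups(9_000))  # (90, 100): the 90th exit, then the last exit
```

Two 100-way mechanical selectors give 10,000 distinct commands, which is exactly the 90th-exit-then-last-exit example in the story.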
That would also be the selection of that particular device, because 9,000 might be the address of that unit, and then it would do whatever the signal command was supposed to do. But you could also select the command, if it was that kind of unit. Lights might go on and off. Temperatures might be set.
This was now a very important system — the lifeblood of the store or the building could depend on the accurate receipt of these commands — and they wanted to have acknowledgment [from the individual units]. They had done this in smaller buildings. The test case was Lever House in New York City, which, as I recall, had 17 megawatts of load — 17 million watts; at 100 watts a bulb, that would be nearly a fifth of a million light bulbs — which weren't being turned on and off individually. They wanted, for some hundreds of units anyhow, to have an answer back that the command had been received and executed, and they asked me how this might be done.
The generators themselves were big mechanical generators, because to put on, say, five volts onto the power line, which has 100 volts for 17 megawatts – that's 1/20th of the voltage and the power goes like the square of the voltage, so 1/400th – so you would need about 50 kilowatts, a big generator, to provide that power. To answer back, you didn't want to have a big generator at each of the controlled units.
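His estimate checks out: the power needed scales as the load times the square of the signal-to-line voltage ratio:

```python
def injector_power_kw(line_kw, line_volts, signal_volts):
    """Power needed to impress a small audio signal on a loaded line:
    it scales as the square of the voltage ratio times the load."""
    return line_kw * (signal_volts / line_volts) ** 2

# 17 MW load, ~100 V line, 5 V of injected audio: 1/20 the voltage,
# 1/400 the power.
print(injector_power_kw(17_000, 100, 5))  # 42.5 kW, 'about 50 kilowatts'
```

That tens-of-kilowatts figure is why a big generator was tolerable at the central station but out of the question at each controlled unit.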
In my physics, of course, I dealt with the extraction of low-level signals, and in this case I decided that each of the units required to answer back should have another little motor in it that would move a vane in and out of a coil 17 times per second. This coil would be tuned to a frequency dedicated to the answer-back — say 3,500 hertz. The big generator would provide this steady 3,500 hertz, and it would detect the modulation — which would appear at 3,517 and 3,483 hertz. The probing signal would always be there, and the response signal would be generated by this variable absorption. That's exactly the same sort of thing we did in nuclear magnetic resonance before spin echoes came along. Sure enough, they built it according to the design, and I went down and saw its installation. It worked fine. Then IBM sold that part of the company.
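The 17-per-second vane modulation of a 3,500-hertz probing tone puts the answer-back into sidebands at the sum and difference frequencies, since multiplying two cosines gives cosines of the sum and difference angles:

```python
def sidebands(carrier_hz, modulation_hz):
    """Amplitude-modulating a carrier at rate f_m puts the information
    into sidebands at carrier +/- f_m:
    cos(w_c t) * cos(w_m t) = [cos((w_c-w_m)t) + cos((w_c+w_m)t)] / 2."""
    return carrier_hz - modulation_hz, carrier_hz + modulation_hz

print(sidebands(3_500, 17))  # (3483, 3517)
```

Detecting narrow sidebands right next to a steady probing tone is the same trick as watching a resonant absorption modulate a carrier in NMR.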
The other thing I did when I first went to IBM was to sit down and just ask how science and technology could be used, for instance, for memory in computers. So I wrote a little paper, and I said I could have a vacuum system in which I would have slow ions, and they would go out like sound — we had mercury delay lines in computers. Acoustic impulses would go down and…
What good does that do? Well, you have a single pulse which comes in, or not, and then maybe 500 of those pulses later, it would be picked up by a transceiver, a receiver, and sent back immediately to that same transmitter. So, transmitter-receiver. That's a delay line, and the idea is that you could have several pulses — 500, 10,000, a million — in transit at the same time, and it would serve as a storage system. Of course, only an evanescent storage system — ephemeral — because the pulses would disappear if you didn't recycle them.
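The recirculating delay-line idea can be sketched as a small simulation; the class and parameters are illustrative, not any actual machine:

```python
from collections import deque

class DelayLineStore:
    """Toy recirculating (delay-line) memory: bits live 'in flight' and
    must be re-fed to the transmitter each time they arrive, or they
    vanish -- the storage is ephemeral."""
    def __init__(self, length):
        self.line = deque([0] * length)   # pulses currently in transit

    def step(self, write=None):
        """One pulse time: the oldest bit arrives at the receiver and is
        immediately recirculated, unless a new bit is written instead."""
        out = self.line.popleft()
        self.line.append(out if write is None else write)
        return out

store = DelayLineStore(8)
for bit in [1, 0, 1, 1, 0, 0, 1, 0]:      # load a byte into the line
    store.step(write=bit)
readback = [store.step() for _ in range(8)]
print(readback)  # [1, 0, 1, 1, 0, 0, 1, 0]
```

Whether the "line" is mercury, slow ions, or a magnetic drum revolver, the logic is the same: capacity equals the number of pulses in transit, and reading means waiting for each bit to come around.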
There are other ways of implementing delay lines. In fact, IBM implemented one in a rotating drum. Sperry Rand had a patent on storing pulses on a magnetic medium on a rotating drum, and IBM wanted to build a computer — the IBM 650 magnetic-drum computer — without having to negotiate with Sperry Rand for the patents. So somebody at IBM, maybe it was John Lentz, invented the so-called revolver. In this case, there was not a specific point on the drum that held a bit or not; the drum could run at variable speed or be of variable size, and as the pulses were put onto the magnetic surface by a write head, they were continually picked up again by a read head.
There was no permanent storage on the drum; it was just that the 10,000 bits that had been written were continually being picked up and recycled. This was a delay-line storage system, but the delay line was a moving magnetic medium. So I said to myself, "Well, I could have ions." The trouble is, the ions come off at different speeds. But I could arrange the tube in which they traveled so that the ions were not all reflected suddenly at a given distance; if the faster ones went farther before turning around, then they could all come back at the same time. Same sort of thing as the spin-echo implementation.
There are many other ways you can get storage, so I evaluated these and saw which ones might be useful. On the spin-echo system that we used for my experiment — Erwin Hahn was doing other physics experiments using it to look at crystals — a group of us decided that we could make a computer memory out of spin echoes. So we did. We demonstrated it using a medium which was hair oil. If you have just a liquid like water, the relaxation time is long enough — a second or two — so you can store information for a long time; you might store 10,000 or 100,000 pulses. But in that time, a water molecule will diffuse too far and will not have the right frequency anymore.
Erwin Hahn, who had a great insight into these things, had looked at emulsions. In emulsions, you can have a water-in-oil emulsion, or you can have an oil-in-water emulsion. In a water-in-oil emulsion, the water is present in little spheres in the oil liquid. In an oil-in-water emulsion, the oil is present in little spheres in a water liquid. Since we wanted to use the water as our medium for magnetic resonance, we had to use a water-in-oil emulsion.
The water molecules could diffuse back and forth within one of these little droplets, but they couldn't go much farther. In fact, that's very helpful, because if they're all diffusing, then they all have, on the average, exactly the same frequency because their different frequencies have averaged out. So sure enough, you use this — it began with a 'V' [Vitalis] — hair oil, and we could store a pulse for a long time and still have it come out with good amplitude.
We decided to try to store hundreds, thousands of pulses. To do that, you have the same recollection pulse that I told you about, but instead of having a single 90-degree pulse — one that flips the spins 90 degrees as they're precessing, so that the echo shows you how many of the spins have been tipped — you would tip them only maybe one percent, and so you get a signal back which is one percent of the maximum. But you'd still have more than 99 percent of the population left — in fact 99.99 percent, because you deplete the population only by the square of the angle of tip.
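The small-tip bookkeeping can be checked with the small-angle expansions (he quotes 99.99 percent; the cosine gives 99.995 for a one-percent tip):

```python
import math

tip = 0.01                 # radians: echo amplitude ~ sin(tip) ~ 1 percent
signal = math.sin(tip)     # what you get back, relative to the maximum
remaining = math.cos(tip)  # population left: ~ 1 - tip**2 / 2

print(round(signal, 4))     # 0.01
print(round(remaining, 5))  # 0.99995
```

Because the cost per pulse goes as the square of the tip angle while the signal goes linearly, in principle you can afford thousands of small-tip pulses before depleting the spins.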
You could still put in 10,000 other pulses and get these one-percent-amplitude pulses out, in principle. You do it with two little pulses, and you get out two little pulses. That's great. You do it with three, and you get out three and a tiny extra one. But now you put in 10,000 pulses, and the place where each echo should be is full of interactions among all the other pulses.
As soon as we saw that, we knew what was going on: each of these little pulses acts as a recollection pulse for the other little pulses, and even though it's only a tiny pulse that comes out, there are so many possibilities that they contaminate the signal output. So, time to think. We thought of a couple of ways: one was to use random phases on the pulses so that they wouldn't add up in their recollection ability, and another was to have a frequency modulation — a steady change of frequency of the pulses as they go in — so again they would not add up in their effect on one another. Either one of them solves the problem.
We actually made one of these, and it's published as “Spin-Echo Serial Storage [Memory]” by Erwin Hahn and myself and Arthur Anderson — who later went on to be Director of Research at IBM — and Bob Walker and maybe one other person [J.W. Horton, G.L. Tucker]. Of course we said this is not a practical memory. Still, when I was Director of Applied Research at IBM in 1965, they had been dragging their feet about whether they should buy a computer memory, and I decided we would do it. So I spent half a million dollars and bought a 500-kilobit — a half-megabit — memory. That was less than 100 kilobytes, for half a million dollars, shared across the entire Research Division.
When I bought my first PC in 1981, I paid extra. It came standard with a 16-kilobyte memory, but I got a 64-kilobyte memory; I probably paid $500 extra for that. Of course now you buy a 256-megabyte memory for $100 or $50 — just such an enormous ratio. And storage systems… My first PC in 1981 didn't have a hard drive. Then I bought a 10-megabyte hard drive for $1,000, I think, probably in 1982. Now you buy a 100-gigabyte hard drive for $100 — a factor of 10,000 in storage capacity and a factor of 10 reduction in price, so a factor of 100,000 in storage per dollar. So you have to be humble.
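The storage-economics arithmetic he does in his head can be written out (the dates and prices are the ones he quotes from memory):

```python
def per_dollar(bytes_, dollars):
    """Storage bought per dollar spent."""
    return bytes_ / dollars

MB = 10 ** 6
GB = 10 ** 9

early = per_dollar(10 * MB, 1_000)   # ~1982: 10 MB drive for $1,000
later = per_dollar(100 * GB, 100)    # ~2004: 100 GB drive for $100

print(later / early)  # 100,000.0: capacity x10,000, price /10
```

A 10,000-fold capacity gain combined with a 10-fold price drop multiplies out to the factor of 100,000 in storage per dollar.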
You'd have to live a long time to benefit from it.
We had this nice little technique.
…A factor of 10,000 in capacity and a factor of 10 in cost means a factor of 100,000 in the ratio of storage to cost. You really have to be humble about that. In the process, of course, we were now working on IBM's core business, so we filed for patents. You want to patent things that you may not even be using but that might be useful to somebody else in the same field.
From the moment I joined IBM, I had big arguments with them about their lack of aggressiveness in patenting things. We needed patents and licenses in order to carry on our business, as was the case with the license from Sperry Rand that we avoided, of course. But in order to obtain such licenses, you either had to pay money or you had to cross-license your own patents. There were companies that were in the computer business, or wanting to be in the computer business, that were also in other businesses.
With RCA for instance, if we had licenses in radio or sound reproduction — not in IBM's line of business at all — these would have been valuable to us in cross-licensing negotiations. IBM was really very reluctant to apply for patents in other fields, and they were interested really only in cross-licensing and not so much in obtaining license fees for their intellectual property.
What does cross-licensing — that means swapping?
Cross-licensing means exactly that. That is, we make an agreement, I can use all your patents without fee, and you can use all my patents without fee.
In fact, only in 1993 or 1994 did IBM take seriously patent licenses as a source of income. As I recall, their income from license fees jumped from $100 million to $200 million a year in a single year, because they began to pay attention to it.
This is the whole story of the contract they had with Gates for the operating system. They [IBM] didn't require that it be proprietary to IBM, and he went and sold it to everybody else in the world.
Worse, IBM did not, in that agreement, obtain the rights to the source code for Windows — what ultimately became Windows — that Microsoft developed under contract with IBM.
I was working on touch screens with my colleague Jim Levine at IBM, and we wanted to know the details of the source code so that we could see where we could plug our touch screen into the instruction stream. They would not give it to us, so we made the touch screen look like a mouse: we intercepted the signals to and from the mouse and let the mouse plug into our touch-screen adapter box. But that was a lot more awkward than it would have been had we had access to the source code.
In conjunction with this spin-echo memory system — we have patents, the whole group, on the frequency-modulation and random-phase means of avoiding contamination of the echoes — I devised a means of obtaining the echoes without a recollection pulse, simply by reversing the steady magnetic-field gradient. That is, if the field is high on the left of the sample and low on the right while storing the pulses, and after I put in the pulses I reverse the field gradient so it's low on the left and high on the right, then the spins that have been precessing faster than average will now go slower than average, and they'll all end up at the same starting line at an equal time. Or, if I make the reversed gradient only half as intense, it will take twice as long for them to cover the same ground going backward.
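The gradient-reversal echo can be verified with the phase bookkeeping he describes: a spin's frequency offset is gradient times position, so reversing the gradient (or halving it and doubling the time) cancels the accumulated phase at every position at once. A sketch with illustrative numbers:

```python
def net_phase(position, t_forward, t_reverse, g_forward=1.0, g_reverse=-1.0):
    """Phase a spin accumulates: frequency offset = gradient * position,
    so phase = offset * time, summed over the two intervals."""
    return g_forward * position * t_forward + g_reverse * position * t_reverse

positions = [-1.0, -0.25, 0.5, 1.0]   # arbitrary spots along the gradient

# Full reversal: every spin rephases (net phase zero) after an equal time.
print([net_phase(x, 5.0, 5.0) for x in positions])
# Half-strength reversed gradient: rephasing takes twice as long.
print([net_phase(x, 5.0, 10.0, g_reverse=-0.5) for x in positions])
```

Because the cancellation works for every position simultaneously, the whole sample echoes together, which is why gradient reversal is so useful in magnetic resonance imaging.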
IBM never used that. We never actually marketed a spin-echo memory, but it's used in all of the nuclear magnetic resonance imaging systems that take pictures of you, in order to make most efficient use of the time and the spins.
IBM never patented that?
Oh yes, IBM patented it for me. But patents last only 17 years and it expired before anybody made any money out of using it.
I patented also… Of course, you invent things most readily — at least I do — when you first have contact with a field. I was at a Gordon Conference — the whole series of Gordon Research Conferences used to take place at New England private schools for one or two weeks in the summertime. I was never one for going much to meetings, but I thought I would go and see what was happening in this field of nonlinear optics.
It was very exciting. Lasers had just come in, and in this field people were generating green light from near-infrared light by putting intense laser beams through crystals. You could do that with a sufficiently intense laser beam, and you could do it with a weaker laser beam too if you had a longer crystal through which the infrared — let's call it 'red' — and the blue, double-frequency beam would pass. But there's an important criterion: they have to have exactly the same speed in the crystal; otherwise they get out of phase. The blue beam is being created from the red beam at one end, and the blue beam goes along and strengthens as more power is fed into it by later elements of the crystal, adding to the intensity that's already there.
In fact, it doesn't add just to the intensity: it adds to the amplitude. If the blue beam travels more slowly, which is usually the case, then you can only go so far before the new additions get out of phase with the old additions. Then the blue beam is no longer strengthened by the red beam feeding energy into it; in fact, the red beam extracts energy from it, because the phase is wrong. It's like wiggling a rope tied to a tree: I'm limited in how much I can wiggle it, but if I wiggle it a little and keep wiggling at the same speed, in step, the wave will go out to the tree, come back, and grow and grow. But if I wiggle at the same speed and suddenly go left instead of right and continue that way, the wave that I put in will be damped. That's exactly what happens with this nonlinear generation of the second harmonic — of the blue wave from the red wave.
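The phase-matching argument can be sketched in the undepleted-pump approximation, summing slice contributions with an accumulated phase mismatch Δk·z; the numbers are arbitrary:

```python
import cmath

def sh_amplitude(delta_k, length, steps=1000):
    """Second-harmonic amplitude out of a crystal of given length, in
    the undepleted-pump approximation: each thin slice at position z
    contributes with phase delta_k * z."""
    dz = length / steps
    return abs(sum(cmath.exp(1j * delta_k * i * dz) * dz
                   for i in range(steps)))

matched = sh_amplitude(0.0, 10.0)     # speeds matched: grows with length
mismatched = sh_amplitude(2.0, 10.0)  # out of step: contributions cancel
print(round(matched, 3), round(mismatched, 3))
```

With Δk = 0 the amplitude grows linearly with crystal length; with any mismatch it oscillates and never exceeds a short coherence-length's worth, which is why the speed matching matters so much.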
Can you tell me more about the arguments you had with IBM over patent strategy?
There wasn't much of an argument, because there was nobody to argue with. There are people in intellectual property in the Research Division, and then there is a person in charge of intellectual property for the whole IBM company, and I would write him letters — I could probably find some of the letters — but there wouldn't be an answer. They have their policy, and they will change it when they change it. I had many arguments in that sense — I would tell them what I thought and what I thought they ought to do — and it never really came to any contention, because they didn't have to answer. Now, if I were a vice president — senior vice president or whatever — or head of a division, then I might have had some bigger influence.
The speed matching was usually done by choosing a particular angle in a crystal so that the speed of the blue light matched the speed of the red light in that direction. This took very special crystals, of high quality.
So I said to myself — I'm always talking to myself — "If I have a waveguide with microwaves going through…" Waveguides are this big around: for microwaves at 1,000 megahertz, the wavelength is 30 centimeters, so the waveguide has to be half a wavelength across — 15 centimeters, about six inches. The walls don't have to be very thick, but for efficiency they're made pretty thick.
You see, these ventilating-like ducts came in during World War II at MIT, where they were learning to work with waveguides for the microwave radars. But a waveguide for light is only a wavelength across, or actually, because you don't have metal walls for the light — you have fibers, fiber optics, or similar things built into surfaces — the wave is confined not by reflection at a metal surface, but because the refractive index of the material is greater in the center than it is at the outside.
There are two kinds of fibers for conducting light: There are step-index fibers where you have an inner core of fast glass and an outer cladding of denser, slower glass… I'm sorry, it's the other way around: You have an inner core of slow glass and an outer cladding of faster glass. So, if you take a water surface — light coming from the outside goes directly in through the surface, but a little bit of it is reflected — a couple percent. As you go to greater angles of incidence, toward a grazing angle, more is reflected, but some always enters the surface. When it's normal to the surface it goes straight down, but when it's at grazing incidence, it still gets refracted down into the water.
Light coming up from the water goes exactly the same way: coming straight up, it goes out; some of it is reflected, out to this so-called angle of total internal reflection. Because the light coming from the outside can only get so close to grazing once it is inside, the refracted light fills everything from the normal up to that maximum angle. So if you have light inside which comes up to the surface beyond that angle, it gets reflected totally at the surface. So all light that is more grazing [in incidence] in the water than this critical angle is totally reflected.
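[Editor's note: the critical angle he describes follows from Snell's law; a small sketch, assuming the usual index of about 1.33 for water:]

```python
import math

# Critical angle for total internal reflection at a water-air surface,
# from Snell's law: sin(theta_c) = n_outside / n_inside.
n_water = 1.33   # refractive index of water (approximate)
n_air = 1.00
theta_c = math.degrees(math.asin(n_air / n_water))
print(theta_c)   # about 48.8 degrees from the normal
# Light inside the water striking the surface more grazing than this
# is totally reflected back down; none of it escapes.
```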
Now, if instead of having a surface of water you have a tube of water, a column of water, or a column of glass — you have a glass rod vertically, and you shine light into it at the end — it has no problem getting in. If that light is at a big angle, then some of it will come out, but if it's at a more vertical angle, then when it goes in it strikes the surface at a low enough angle that none of it comes out. So it goes forever until it's absorbed by the greenness of the glass. We have glass which is good enough so that light can go for 30 kilometers before being absorbed. That's not visible light; it's light at 1.3 microns, because the impurities in the glass have specific absorptions. There's hydroxyl (OH) impurity in the glass.
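[Editor's note: "30 kilometers before being absorbed" corresponds to very low loss. As an illustration (the 0.35 dB/km figure is a typical value for silica fiber near 1.3 microns, not a number from the interview), the surviving fraction can be computed from the usual decibel loss law:]

```python
def remaining_fraction(length_km, loss_db_per_km):
    """Fraction of optical power left after length_km of fiber,
    given an attenuation in dB per kilometer."""
    return 10 ** (-loss_db_per_km * length_km / 10)

loss = 0.35   # dB/km: assumed, typical for silica fiber near 1.3 microns
print(remaining_fraction(30, loss))   # about 0.089: roughly 9% survives 30 km
```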
In principle, though, the light in a fiber is confined to the slower core. A little bit of the light gets out, but it's not out permanently; it's just traveling. A tiny bit of the energy is traveling in the faster glass [cladding] around it. That protects the inner glass core, keeps things from coming close to it to interfere with the light. So that's what we had with fiber optics.
In a waveguide, the speed of light along the axis — the speed of the waves along the axis — changes with the wavelength. In fact, it gets to be faster the closer you get to the cutoff of the waveguide. I said that a 15-centimeter-across waveguide can carry two gigahertz — no, one gigahertz, 1,000-megahertz signals, which have a free-space wavelength of 30 centimeters. The waves that have higher frequencies than that, which are within the band pass of the waveguide because they fit, go at a certain speed, and very high-frequency waves will go at the speed of light. But the lower-frequency waves go… Well, the waves actually bounce across, so their energy goes more slowly than light, coming up to the speed of light as you go well above the cutoff frequency of the waveguide.
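[Editor's note: two different speeds are in play here. In a standard rectangular waveguide the phase velocity (the speed of the wave pattern, the one that matters for the phase matching discussed below) rises above c near cutoff, while the group velocity (the bouncing-ray, energy speed) falls below c; both approach c far above cutoff. A sketch using the textbook formulas:]

```python
import math

c = 3.0e8          # speed of light, m/s
f_cutoff = 1.0e9   # cutoff of the 15-centimeter waveguide in the text

def phase_velocity(f):
    """Speed of the wave pattern along the axis: above c, and it grows
    without bound as the frequency comes down toward cutoff."""
    return c / math.sqrt(1 - (f_cutoff / f) ** 2)

def group_velocity(f):
    """Speed of the energy (the bouncing-across picture): below c,
    approaching c far above cutoff."""
    return c * math.sqrt(1 - (f_cutoff / f) ** 2)

print(group_velocity(1.1e9) / c)   # well below 1 just above cutoff
print(group_velocity(10e9) / c)    # close to 1 far above cutoff
# In this simple model the two velocities always multiply out to c squared.
```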
Same thing with a fiber optic: If you have a small fiber optic, the blue light would be traveling more slowly in the bulk of this glass, simply because it has a higher index of refraction, and that's why your camera lenses are so complicated. That's why you see the rainbow: because the blue light is more refracted in the little droplets than the red light. That's why, when you have a prism, you see a rainbow from it. In order to have a lens that forms a sharp image with white light, or to have a prism that would bend a light beam without having rainbow effects, people usually use two kinds of glass: One has higher dispersion than the other, and they have different refractive indices as well. So you cancel the dispersion with these two kinds of glass — one convex, the other concave, or one bending the light this way, the other bending the light that way — without canceling the entire effect of the lens.
With a laser, you don't need to do that, because there's only one frequency, so only one refractive index. But there's another way to change the speed of light besides changing the wavelength in a dispersive medium, and that's, as I said, to have a waveguide whose size is very similar to the wavelength of light. So those waves that just fit, that are the reddest possible to be conducted in the fiber optic, go faster because they're being squeezed. This can compensate the inherent variation of refractive index, the inherent variation of speed, with wavelength.
I told my IBM colleagues at this meeting, where we heard about the second-harmonic generation in this velocity-matched crystal, that we could use a waveguide, that we ought to have a fiber — could be round, could be square, could be a fiber that's implanted in the surface of a material — of a glass material. This fiber would have a slower propagation, so that the waves would be guided within it. If we now have to propagate not only the pump beam but also the second harmonic — the blue that we're making from the red — we could match the speeds, because we can control the speed of the red by tailoring the size of the fiber optic. First, you wouldn't have to use crystals. Second, you wouldn't have to match an angle. Third, you'd get a much greater interaction length.
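[Editor's note: a toy model of the proposal, with all numbers invented for illustration. Bulk glass makes the blue slower than the red; confining a wave in a core comparable to its wavelength lowers its effective index by an amount that depends on core size and on the mode. Tuning the core size can make the two effective indices cross. Here the blue is taken in a higher-order mode, a common trick in guided-wave harmonic generation; the 0.05 coefficient and the index values are made up.]

```python
# Toy model of waveguide phase matching (illustrative numbers only).
n_red_bulk, n_blue_bulk = 1.500, 1.510   # invented bulk indices: blue slower
lam_red, lam_blue = 1.06, 0.53           # wavelengths in microns, from the text

def n_eff(n_bulk, lam_um, core_um, mode=1):
    """Crude effective index of a guided mode: squeezing a wave into a
    core comparable to its (mode-scaled) wavelength lowers the index."""
    return n_bulk - 0.05 * (mode * lam_um / core_um) ** 2

def mismatch(core_um):
    # red in the fundamental mode vs. blue in an assumed higher-order mode
    return (n_eff(n_red_bulk, lam_red, core_um)
            - n_eff(n_blue_bulk, lam_blue, core_um, mode=3))

# Scan core sizes for the one where red and blue travel at the same speed.
best = min((w / 100 for w in range(150, 500)), key=lambda w: abs(mismatch(w)))
print(round(best, 2))   # 2.65: the core size where the speeds match (toy units)
```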
So we have a patent on that, too. The late Rolf Landauer and somebody else [Bill Hardy] were the ones I talked with and who helped write it up. That, too, expired in 17 years, before anybody was using such things, but it's very big business these days because a lot of the integrated photonic fiber-optic stuff uses exactly that. I had another invention about the same time…
What do you call that, again?
That's a speed-matched fiber-optic second-harmonic generator. I don't know what the patent's called ["Optical Traveling Wave Parametric Devices"]. Patents are sometimes deliberately obfuscating in their titles so that people won't necessarily know what you have patented. You can wait until they use it, and then present them with it — tell them they have to pay.
This speed-matched fiber-optic second-harmonic generator is used today?
In communications, in communications systems…
Well, sometimes you would like to have an efficient laser. So you'd like to have a green laser, for instance, but the efficient lasers are in the near-infrared, the neodymium glass lasers at 1.06 microns — whereas the green laser is 0.53 microns. So one sometimes uses a crystal, a thin plate of material (potassium dihydrogen phosphate, or whatever), and pumps the thin plate with this intense beam in order to get green light. But you can do that by putting the pumping radiation into a fiber or into a groove, getting the green light, and then amplifying it more if you like. You get a laser pointer — a green laser pointer — this way, not because it lases in the green, but because the pumping radiation has been converted to the green. If you're building laser weapons or laser designators, then you'll sometimes want this second-harmonic conversion.
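[Editor's note: the arithmetic of the green-pointer example: doubling the frequency halves the wavelength, and energy is conserved photon by photon.]

```python
# Second-harmonic generation doubles the frequency, so it halves the wavelength.
pump_nm = 1060.0          # the 1.06-micron neodymium line from the text
green_nm = pump_nm / 2    # 0.53 microns, the green he mentions
print(green_nm)           # 530.0

# Energy conservation: each green photon carries two infrared photons' energy.
h = 6.626e-34             # Planck's constant, J*s
c = 3.0e8                 # speed of light, m/s
e_pump = h * c / (pump_nm * 1e-9)
e_green = h * c / (green_nm * 1e-9)
print(round(e_green / e_pump, 6))   # 2.0
```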
You can use it not only for second harmonics; you can use it for other frequencies. You can bring in two frequencies that are close together and get out a third frequency, which is the difference between them, for which you have better amplifiers. I don't know. I could look up to see where it's used. I don't have any interest in it. But it was certainly, at one time, very important to the industry.
Just talk about IBM itself.
I liked doing this, inventing these things. It's a lot of fun. I was not much involved in the patent negotiations, but I recall once in the office of the Director of Research at Yorktown Heights, I was asked to sit in on a session where people from Bell Labs were coming to tell us about their portfolio.
Anyhow, they were coming to tell us about their patent portfolio, and IBM perhaps somewhat lacked confidence in the merits of our own patent portfolio, so they asked me to sit in. Bell Labs had just invented a two-photon laser. That is, one would pump with a blue beam of light and get out a red beam of light. Not the other way around: not creating a second harmonic, so that the lower frequency would create a higher frequency, but a two-photon laser, so that each photon coming in would produce two photons of half the energy.
I thought this was a good idea. It's really quite interesting, because it has a threshold: You don't get any of the subharmonic (you might call it) out until you have a certain power level of the pumping system. I knew about these subharmonics because in 1953, or thereabouts, John von Neumann, who was a consultant to IBM, had come to our little [IBM] laboratory at Columbia University and given us a talk about subharmonic resonators as computing elements.
This was very clever, and the idea was that one would have some kind of diode and resonance circuit that would be resonant at two frequencies, and this is possible, say, at 10 megahertz and also at 5 megahertz. Two of these could be fed at the 10 megahertz, just independently, and they would break into oscillation at 5 megahertz. So you'd be pumping them at 10, but they'd be oscillating at 5.
Now, it turns out that these two states are absolutely indistinguishable. That is, the threshold for pumping at 10 and getting out 5 is precisely the same whether the 5 megahertz is synchronized with one cycle of the 10 megahertz or the next cycle. This is hard to understand, but if you have pumped one of them with 10 megahertz and you have it continue to run at 5 megahertz, and now you gradually bring another one of these things in — so that's being pumped at 10 megahertz — it can fall into one of two states relative to the — let’s call it the clock…
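[Editor's note: the two states can be demonstrated numerically. A 5-MHz subharmonic shifted by one full 10-MHz pump cycle is shifted by half of its own cycle, i.e. into the opposite phase; yet any pumping term that depends on the square of the subharmonic is identical for the two phases, so the pump cannot prefer one over the other. Only against a reference oscillator (the clock) do the two states become a distinguishable 0 and 1. A sketch with simple sums, not a circuit simulation:]

```python
import math

f_pump, f_sub = 10e6, 5e6   # pump at 10 MHz, subharmonic at 5 MHz
T_pump = 1 / f_pump

def sub(t, phase):
    """Subharmonic oscillation at 5 MHz with a chosen phase."""
    return math.cos(2 * math.pi * f_sub * t + phase)

# Delaying the subharmonic by one full pump cycle flips its sign:
# "synchronized to this pump cycle" and "to the next one" are opposite phases.
print(round(sub(0.0, 0.0), 6), round(sub(T_pump, 0.0), 6))   # 1.0 -1.0

# Yet the pump cannot tell the two states apart: a pumping term built on
# the *square* of the subharmonic is the same for either phase.
ts = [k * 1e-9 for k in range(1000)]
drive0 = sum(math.cos(2 * math.pi * f_pump * t) * sub(t, 0.0) ** 2 for t in ts)
drive1 = sum(math.cos(2 * math.pi * f_pump * t) * sub(t, math.pi) ** 2 for t in ts)
print(abs(drive0 - drive1) < 1e-6)   # True: same threshold for both states
```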
[Abrupt end of recorded material]