Robert Doering


Interviewed by
Orville Butler
Interview date
December 9, 2008
Location
Texas Instruments, Dallas, Texas
Usage Information and Disclaimer

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.

Preferred citation

In footnotes or endnotes please cite AIP interviews like this:

Interview of Robert Doering by Orville Butler on 2008 December 9,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/33752

For multiple citations, "AIP" is the preferred abbreviation for the location.

 

Abstract

In this interview Robert Doering discusses topics such as: his family background and childhood; his undergraduate work at the Massachusetts Institute of Technology (MIT); Philip Morrison; Jack Rapaport; nuclear physics; doing his graduate work at Michigan State University; Sherwood Haynes; quantum mechanics taught by Mort Gordon; Aaron Galonsky; working at the cyclotron laboratory; George Bertsch; teaching at the University of Virginia; low-energy heavy-ion collisions; switching to industrial physics research; beginning work at Texas Instruments (TI); working with semiconductors; Don Redwine; Defense Advanced Research Projects Agency (DARPA); George Heilmeier; Semiconductor Research Corporation; SEMATECH; Moore's Law; complementary metal oxide semiconductors (CMOS); Birch Bayh and Robert Dole; Morris Chang; research and development changes throughout his career.

Transcript

Butler:

I’m Dr. Orville Butler and we’re here this morning at Texas Instruments in Dallas, Texas. I’d like to start off by asking you a little bit about your family background, what your parents did, and growing up.

Doering:

I was born in 1946 in Louisville, Kentucky, and my mother did not work after that. During World War II, my father worked at a Curtiss-Wright plant that made bombers. After the War, that factory became an International Harvester plant. My father worked there until he retired.

Butler:

Were you the first or the only?

Doering:

I have a brother who is five years younger and lives in Atlanta.

Butler:

What sort of life did you lead as a kid? Were you one of those children who took things apart and put them back together? Did you have anything that might have been a premonition that you were interested in the physical world?

Doering:

Yes, there were quite a few things along those lines. As far as taking things apart and putting them back together, my father was very adept at repairing anything mechanical or electrical. I don’t think we ever had a repairman come to my house. One of my earliest memories is of him building a television from a kit, a DuMont kit back in about 1952. I was young, but I can still clearly recall that TV. He had received training in electronics while he was working for Curtiss-Wright. In fact, he took the Lee de Forest course, which, I believe, was taught in Chicago. I can still remember looking at the notes from that course and finding them interesting because the model that was given in that course, which was just intended for people with a general education, was an analogy of electricity to plumbing. In other words, voltage was water pressure, water flow was current, and vacuum tubes were analogous to fluid-flow valves. So, he had taken that course, which got him interested in electricity. He had been studying chemistry at the University of Louisville when the War broke out, and he didn’t go back to that. Soon after the War was over, he had a job and a son and decided not to go back to school. But he had an interest in science, still mainly in chemistry. He was an amateur photographer, which, especially for color, involved a lot of chemistry back in those days. In fact, when I was older, I remember looking through a shelf in the basement that had old chemicals on it from his early photography days. One of these was uranium nitrate, which, back in the ’30s, you could buy at the drug store. It was used in color photography, despite being fairly radioactive. We managed to build a proportional counter as part of a science fair project when I was in high school, and the uranium nitrate came in very handy for verifying that the proportional counter was indeed working. He had three brothers. One was a mechanical engineer, one was an electrician, and the other started out as a machinist. They were all into those very practical, hands-on sorts of work with everyday equipment. My father had also worked as a carpenter, and we had lots of projects that he designed. And I would help out. Every year, as I got a little older, I was able to help with more — especially in finishing the basement, including the carpentry, running new gas lines to put a gas fireplace down there, and all the plumbing and new electrical lines. Similarly, I helped with repairing appliances and cars. The area in which I worked with him the most was electronics. After he built that television kit, a company came out with “Heathkits,” a wide range of electronics kits, and we built at least a couple dozen Heathkits over the years. The first ones were mainly instruments, such as voltmeters and oscilloscopes, the tools that you need to diagnose circuits. He made a nice electronics bench with all of this equipment on it. Later, we were building stereos, with separate pre-amps and amps. Again, these were mostly Heathkits, so I helped him with those and learned a little bit about electronics that way. Thus, my father was definitely a big influence on me. I already mentioned science fair projects. When I got into high school, I started doing science fair projects. I did them 9th through 12th grade, and they got quite elaborate near the end.
The last one we made was an x-ray fluorescence spectrometer, and we managed to get a bunch of equipment for that from local industry where it was surplus, for example, giant capacitors and rectifier tubes. We had a transformer that could take 110 volts up to 90 kilovolts. It had been part of an x-ray machine that was obsolete by that time. Our capacitors and rectifier tubes could handle that kind of voltage, allowing us to build a big DC power supply. With a Variac controlling the voltage into the main transformer, we had zero to 45 kilovolts DC output at a fairly large current. We used that to power a Crookes x-ray tube that we made with a water-cooled tungsten target. Then we built the proportional counter that I mentioned before. Of course, the x-ray tube had to have a vacuum system. We had that made by a local glassblower. He made a mercury vapor diffusion pump and a McLeod gauge to measure the pressure. We operated around 10^-4 Torr, where you could support the high voltage and also get a pretty good current. The primary x-rays from the tungsten target came out through a foil window and hit whatever target we put in front of it. The first thing we tested was a penny, with its copper K-alpha line at about 8 keV. With the fluorescence x-ray detector shielded from the main x-ray beam from the tungsten, but in a line of sight with the test target, you didn’t see any x-rays in the proportional counter until the voltage on the x-ray tube was high enough so that the Duane-Hunt limit of the bremsstrahlung from the tungsten exceeded the energy required to produce x-ray fluorescence from the test target. So, as a function of tube voltage, the output of the proportional counter would just show a series of plateaus, one for each crossing of an excitation edge for fluorescent x-rays from the test target. Our second test target was a chisel blade, and we could easily distinguish the iron K-alpha edge of the chisel from that of the copper in the penny. That x-ray spectrometer was a pretty elaborate project. The whole thing probably weighed almost 1000 pounds, mainly because of all these oil-filled transformers and capacitors in the power supply. It took us about a year to round up all the parts and build it. By the time I was in 9th grade, I was already convinced that I was going to go into science, and then, a little later, narrowed that down to physics even though I hadn’t had a physics class yet. But I had read enough to understand what the scope of physics was compared to chemistry and biology. I was interested in what I regarded to be the more fundamental subject. The other main scientific influence on me was my uncle Bob, who I was named after. He was one of my mother’s brothers, and had been a navigator and bombardier in World War II in the European theater, mainly on B17s. He had always had an interest in science himself. In fact, he was a chemist who worked for a paint company, and he knew something about astronomy because he had that training to be a navigator. I can still remember when I was about six to eight, going outside at night and watching him point out some of the constellations, stars, and planets. He told me how far away they were and some other basic astronomy that he had learned. We saw all of my aunts and uncles fairly often; it was a close-knit extended family on both sides, and he had many opportunities to teach me. He taught me how to take square roots long before we learned that in school.
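For reference, the plateau behavior described above follows from two standard textbook relations (added here for clarity, not quoted from the interview): the Duane-Hunt limit ties the maximum bremsstrahlung photon energy to the tube voltage, and fluorescence from a test target turns on only when that maximum exceeds the target’s absorption edge:

\[
E_{\max} = eV, \qquad \lambda_{\min} = \frac{hc}{eV} \approx \frac{12.4\ \text{\AA}}{V\,[\mathrm{kV}]}, \qquad \text{fluorescence requires } eV \geq E_{\mathrm{edge}} .
\]

For copper, the K absorption edge is near 9 keV, so the copper K-alpha fluorescence from the penny appears only once the tube voltage exceeds roughly 9 kV.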
Also, in his paint company chemistry lab, they had made some balls that were like predecessors to Super Balls. I remember him bringing one over to my grandparents’ house one time and giving it to me and showing me how amazingly elastic this material was. So, he was definitely the other major scientific influence on me as a youngster. There were also some influential school teachers. I had one math teacher in high school that was particularly good. In contrast, the physics class that I took when I was a senior in high school was probably just average for that time. But, I had already decided, by about the 10th grade, that I was really interested in physics and would probably major in that in college.

Butler:

Did you have a group of friends that was oriented in the same direction?

Doering:

Yes, there were a few. There were two or three who were not quite as interested as I was. One of those friends was partly involved with a couple of the science fair projects. And we occasionally did some other things like building small rockets. I was the ringleader of that group in the sense of encouraging whatever we did along scientific lines. I was also interested in sports, so sports and science were my two main interests as a teenager.

Butler:

It’s an interesting contrast in that the athletes tend to be the super cool people, whereas the science nerds are sort of shunted off to a corner.

Doering:

I was much more of a science nerd than I was an athlete. My early opportunity in athletics was partly a matter of size. I was bigger than average as a kid, especially in late elementary school and junior high school. Then, after the 8th grade, I didn’t grow nearly as much as some of my friends. For example, in basketball, I started off as a center, and then I wound up being a guard a few years later, which is not an easy transition. I wasn’t nearly as good an athlete as I was a student. In fact, I was co-valedictorian of my high school class. Both of us had straight A’s all the way through high school. So, I was generally regarded as more of a nerd than an athlete. I enjoyed playing basketball and a little bit of football and baseball, but not enough to stick with them through high school.

Butler:

Were your teams fairly successful?

Doering:

In baseball, we were pretty successful. We wound up being second in our league one year. Basketball and football were just sort of so-so. Football was probably a little better than basketball. There was a lot of competition, especially in basketball, in Louisville. In fact, in junior high, I played against somebody who became an NBA All Star later on, Wes Unseld. He played center in the NBA for a number of years and was one of the best players in the league for a long time. Kentucky is a real hotbed of basketball, and so there have always been many strong teams. Even though I enjoyed playing, I wasn’t either talented enough or motivated enough to move to the next level. In college, I only played some intramural sports, again because I enjoyed just being out there and playing and competing. Of course, intramurals weren’t too tough, so I did relatively well in intramural basketball, and I also played intramural golf. I also played in this university club Tiddlywinks league which included nearby schools like Harvard — just a fun little thing to do. I enjoyed such competition, no matter whether it was a real knockdown, drag-out sport or as gentle as Tiddlywinks. Oh, there was also Ping-Pong. Ping-Pong was probably what I was actually the best at of anything I ever played. I worked very long and hard, mainly on powerful serves. At MIT we had a lot of good Ping-Pong players, a lot of people from Asia, where it was a much bigger sport than in the U.S. I really liked the game. I had played it some as a kid, but this was a whole new thing, playing with these paddles that had super-soft and tacky surfaces. At one time, I got to be number two in the MIT Ping-Pong rankings, which were based on head-to-head challenge competition. Matches would be arranged by issuing a challenge to someone who was higher ranked. If you beat them, you moved up on the ladder and they moved beneath you. Again, it was a club sport rather than an intramural sport. I worked very hard on serves because I realized that the game isn’t very long and there aren’t that many points. If you have enough different serves that your opponent sees each of them no more than once or twice in a match, then even if they can basically see what you are doing, that is, what kind of spin you’re putting on the ball, it’s very hard, when the spin is extreme, to get calibrated on it well enough to counteract it on the return. The first thing I remember thinking about is that I could try to make a core of six serves which would put a lot of spin on the ball in either direction around each of three orthogonal axes. Of course, everybody had at least one such pair: the heavy topspin and the heavy underspin. So, I studied and practiced what I thought was the most effective way to put on really heavy spin in even more than six directions, and it was very effective.

Butler:

What about your science teachers in high school?

Doering:

Let’s see. I can recall three of them. One was the physics teacher that I mentioned; that was senior year. I also took two years of chemistry and one of biology. The biology teacher was very good. I thought biology was kind of interesting, but I liked physics more because it seemed to me that there was less rote memorization and more math. Chemistry was sort of in between. The teachers were all, at least, adequate; I wouldn’t call any of them a poor teacher. It is probably as hard today as it was back then to get someone who could do a really fantastic job of teaching science at the high school level. My biology teacher was the closest. Even though I wasn’t as naturally drawn to that subject material, I remember that she did a good job. The chemistry teacher was okay. Physics was just the very basic stuff, inclined planes and Ohm’s law, those kinds of things, so I was glad to learn about that and thought it was interesting. The teacher, though, wasn’t able to answer deeper questions that I had about some of these phenomena. I knew there had to be deeper theories behind all of these things, but, unfortunately, the teacher didn’t major in physics in college. Perhaps he majored in history and took a general science course. He did an okay job of teaching what was in the book. The main issue was that I couldn’t go to him with any questions outside of what was covered in the book.

Butler:

You indicated that by the 9th or 10th grade you had pretty well selected physics for at least your college career. [Right] What about your decision as to where to go?

Doering:

The decision as to where to go boiled down to about five universities to which I applied. My hope was that I would go to either MIT or Caltech. From what I could tell, they seemed to be equally good physics schools, and I was pretty sure that I’d be happy at either one. But I did apply to a few other places: RPI, Case Western Reserve, and, I believe, Stevens Institute of Technology. I also had some offers of full scholarships at schools to which I didn’t apply. For example, Michigan State University offered me a National Merit Scholarship. Also, Pepperdine, in California, sent me a letter offering a full scholarship, just out of the blue. At the time, I knew nothing about Pepperdine; later I learned that it was a pretty good school. MIT offered me a partial scholarship, and I don’t believe that Caltech offered any support. That’s probably what made the main difference, because they were both pretty expensive schools. Even though my dad had a good job, it was still relatively expensive. Tuition was around $2,000 a year, which would be very inexpensive today!

Butler:

This would have been in what year?

Doering:

1964. So, $2,000 in tuition was about as much as any place charged then. Of course, today, tuition is ten times that much at many schools, so it’s gone up an awful lot in those 44 years. But, back then, $10,000-$12,000 per year was considered a good salary. So that’s how I wound up at MIT. Applying to MIT was an interesting process. They had a procedure in which you were interviewed by an alumnus in your local area. I can still remember going to interview with Mr. Entwhistle. I can’t remember if he said what he had majored in at MIT many years earlier. He was probably around 60 at that time. He owned a local Howard Johnson’s franchise with several restaurants. So, he was a successful businessman in the area, and he was the one MIT had on their list for my area to be the interviewer. He was very nice, and I’m sure that he wrote an encouraging letter of recommendation for me to MIT. The other thing I remember that surprised me about applying to MIT is that, in 1964, there were only two of us from the whole state of Kentucky who were accepted into the freshman class at MIT. I didn’t know the other guy, but, when I first saw the freshman class list, I noticed that there were two of us from Kentucky that year. I began to worry a little, thinking they might have some kind of quota that forced them to take a couple of us, even if we weren’t as qualified. Sure enough, I did find out that a number of the freshmen, especially from the big cities in the northeast, had already taken at least a semester of calculus in high school. And we had a lot of people in our class that were from places like the Bronx High School of Science and similar schools which provided better science backgrounds in high school than I had. Thus, I found out that I’d have to work harder than I did in high school, which I did and got good grades. Anyway, I enjoyed the whole process of figuring out where to go, and certainly enjoyed the experience at MIT.

Butler:

Who were the big influences at MIT?

Doering:

There were definitely a few. The freshman physics course was just terrific. It was brand new and was supposed to be the course that you would take next in college after the new high school course that was called PSSC Physics, which was created by a very prestigious team of physicists from MIT and several other places. Of course, I had never heard of it. We used a much more traditional physics textbook in high school in Kentucky. This freshman course at MIT was called “Physics: A New Introductory Course,” which had the amusing acronym PANIC. It was taught by Anthony French, who was very big in physics education at that time. He organized the course, but he didn’t do all the lectures for the whole year; he brought in a couple of other people to do some of them. The best part was special relativity, for which Philip Morrison was the lecturer. He was just great — very inspirational. You’ve probably read something by him or seen him on television. He was a very dynamic lecturer and made the class fun as well as informative. There were several other influential people in the subsequent courses, and not all of the courses that impressed me were actually in physics. For example, I took three excellent courses in Electrical Engineering and Computer Science. The first was on computer language, and was taught by the famous Marvin Minsky. Next was course 6.01, Network Theory, taught by Amar G. Bose, the guy who started up the Bose speaker company. He was very good. Alan V. Oppenheim taught the second semester, 6.02, and was also very good. Electrical engineering was really amazing at MIT because in that department, you got time with a graduate student every week to go over your homework. So, you’d get the homework assignment, you’d do it the best you could, then you’d have an hour scheduled to meet with the graduate student and go over your homework, and he would show you what you did wrong. The amount of money that the department spent on that was obviously significant. I don’t know if other places did that, but I remember being really impressed because they didn’t do that in the physics department, and I can’t recall anyone ever mentioning having that kind of one-on-one in college. Another person who definitely influenced me a lot was my undergraduate thesis advisor, Jack Rapaport. At MIT, except for mathematics, you had to do a senior thesis, which was just like a course: it lasted all year, and you signed up for it as a thesis research course. You got to go to the lab of that professor and participate, almost like you were a graduate student. Jack was working in nuclear physics. They were still running the old MIT Van de Graaff accelerator at that time, and they were taking data on chromium-54 plus helium-3 reactions producing manganese-55 plus deuterons, so they wanted me to help analyze the data. Of course, a lot of my work was simple stuff. It wasn’t nearly as computer automated back then, so the counts in each energy channel had to be graphed by hand. Then we had to figure out what was a reasonable background level. Then we took the raw data and converted it into cross sections for the nuclear reactions. Then there was an analysis that I couldn’t understand quite as well, which involved a computer program. I understood in principle what was going on — the program was numerically solving Schrödinger’s equation via partial wave analysis. They had wanted me to concentrate more on the experimental technique than the theory, so I didn’t get as much exposure to that part of it.
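For context (a standard textbook form, not something stated in the interview), the partial wave analysis mentioned here expands the scattering problem in angular-momentum components. For elastic scattering,

\[
f(\theta) = \frac{1}{k}\sum_{\ell=0}^{\infty} (2\ell+1)\, e^{i\delta_\ell}\sin\delta_\ell\, P_\ell(\cos\theta),
\qquad \frac{d\sigma}{d\Omega} = |f(\theta)|^2 ,
\]

where the phase shifts \(\delta_\ell\) come from numerically integrating the radial Schrödinger equation in each \(\ell\) channel; a transfer reaction such as \(^{54}\mathrm{Cr}(^{3}\mathrm{He},d)^{55}\mathrm{Mn}\) uses the same machinery through distorted waves in the entrance and exit channels.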
I really liked working in that lab, especially with a post-doc who was also new to the lab. And I really liked the nuclear physics. Partly what I liked about it was that the experiments were so relatively clean. The error bars for a lot of it were mainly determined just by counting statistics, so they were very easy to establish. Also, there wasn’t much concern about contamination. Even if the target was not very pure, you could usually identify the associated background and subtract it out. So, the fact that you could do such clean experiments and also that you were dealing with the nuclear force, which was still not well understood then, made it interesting. In the early 1960s, Murray Gell-Mann invented the idea of quarks, but even he didn’t think that they were physical particles at that point, just a mathematical construction. So, even low-energy nuclear physics seemed very fundamental and very much digging into the frontiers. I can recall maybe two-thirds of the way through the year, when I was starting to think about graduate school, talking to Jack Rapaport about it, and he said, “I can tell that you really like nuclear physics.” Then he said, “Well, if you want to go to graduate school in that area, I would recommend you consider Michigan State,” which I thought was a little ironic because that’s one of the places I had considered as an undergraduate because of the National Merit Scholarship that I was offered there. He said that MSU had a new cyclotron lab that was going to be able to do experiments for the next decade that nobody else would be able to touch, because they had a combination of energy and resolution and other features that would open up a wide range of experiments. I applied to several places, but I got a very nice response from Michigan State. I was called by the physics department head at MSU, Sherwood Haynes. He had one or two NSF fellowships at his disposal, and he said that he was willing to give me one if I would come there. So that, combined with Jack’s recommendation and my interest in nuclear physics, made it a pretty attractive offer, which I accepted. Back to MIT. There were a lot of other professors who were big names in physics. For example, Victor Weisskopf, Charles Townes, Herman Feshbach, Bruno Rossi, Jerome Friedman, Jerrold Zacharias, and Kerson Huang were all there. By the way, at least two of my MIT Class of 1968 physics classmates also went on to fame: Alan Guth and Shirley Ann Jackson. Outside of physics, as I mentioned, there were also some very well-known people, especially in computer science and electrical engineering, and some of the other courses I took in those areas were really great. I also took a couple of history courses in a series called the History of Technology, which were very interesting. The professor took us through how they thought the pyramids had been built, obelisks erected, etc., on up through the Middle Ages. It was very engineering-oriented, which helped it to be more interesting than I had thought it might be.

Butler:

’64 to ’68 was a turbulent time in America as well. How did that affect you at MIT?

Doering:

Well, I guess Boston and, on the other coast, around the Berkeley area, were probably the two hot spots, if you want to call them that, in terms of a lot of what was going on — various protest movements against the war and those kinds of things. I didn’t get directly involved in any of that. At that time, I didn’t spend much time thinking about the history of how we had gotten into this war. Obviously, everybody hoped it could be more quickly resolved than it was. But I wasn’t involved in any demonstrations or related activities.

Butler:

Were there times when demonstrations disrupted your studies?

Doering:

No, I can’t remember a single time when a demonstration disrupted my studies. It was not that much of an issue. What affected me a little more were other cultural offshoots — such as some of the music. Of course, protesting the war was one of the themes in folk music. In Boston, there were “coffee houses” where folk music was performed. There was not usually anything more than just peaceful protest, via the music, in those settings; in fact, it was mostly just entertainment. I developed a real love for folk music. Even though a lot of the folk songs of the period did have an element of protest to them, many were pretty amusing in the way they expressed it. Overall, I liked the catchy tunes and harmony as well as the funny lyrics.

Butler:

Probably best known is Alice’s Restaurant.

Doering:

Yes, Arlo Guthrie was one of my favorites. I liked almost all of his music. For example, “The City of New Orleans” really appealed to me. There was also a faculty member, Tom Lehrer, who wrote witty folk songs that poked fun at many people and events. There was one about Wernher von Braun, for example. He implied that Wernher just wanted to work on rockets and didn’t really care what the consequences were. There was this one line in which he would parody Wernher in a German accent and sing: “Once the rockets go up, who cares where they come down? That’s not my department, said Wernher von Braun.” Lehrer put out an LP, which I bought.

Butler:

His name was LEHRER. We want to distinguish it from Timothy Leary.

Doering:

Yes — Tom Lehrer, not Timothy Leary. Tom got popular enough that he was on some of the TV variety shows back then. He was never hugely popular, but he had a year or so of reasonable fame. Timothy Leary was of course around too. I don’t remember seeing him in person, but certainly lots of people talked about him and his advice on drugs. Of course, marijuana was very available, as was LSD. It was pretty obvious to me that LSD was something that you just didn’t want to fool with at all. I had some friends who had bad experiences, and it seemed to me that they were lucky to still be alive because they had such violent hallucinations and behaved irrationally in trying to respond to what they were perceiving in such a distorted way. As I said before, the biggest cultural influence on me at the time was probably just the music. I really enjoyed going to some of the coffee houses and hearing people like Lehrer, Arlo Guthrie, or the Clancy Brothers, who had these wonderful Irish drinking songs. Maybe I had heard one or two of those when I was much younger, and they never made much of an impression on me. But, especially in the coffee house environment, it seemed mostly just lively, funny music, even though some of it would also be poking fun or making more serious accusations against the “establishment.” None of the political implications really fazed me too much. I just enjoyed a lot of that music.

Butler:

So you moved to Michigan State in?

Doering:

That was in 1968. In the fall of ’68, I was a new graduate student at Michigan State. The plan there for most everybody was two years of class work before getting into research. When you first arrived, you took a “qualifying exam,” which was to make sure that you didn’t need anything “remedial” and could go right into the regular first year classes, which was no problem for me because they were teaching some of those first year grad courses out of the same books that I had used at MIT. For example, it was Jackson and Goldstein over again, but more thoroughly. The course that I remember best of the standard grad courses at MSU was quantum mechanics, taught by Mort Gordon, who was really an amazing fellow. He was blind, but was very good at everything he did, despite the handicap. He worked with Henry (“Hank”) Blosser, who was in charge of the cyclotron laboratory and was also an accelerator designer. Mort worked on the very theoretical side of accelerator design. The homework problems in his quantum course were typically thought-provoking, and the lectures were very good. It was amazing to me how somebody with that handicap could get along with as little assistance as he needed. He could write on the board and keep everything straight and legible, and get to the classroom and talk to everybody. You would just barely notice his blindness. I think maybe he could distinguish just a bit of lightness and darkness, but he was very close to 100% blind. He was a remarkable man. After the graduate course work at MSU, there was a second exam, which was the “preliminary exam.” It determined whether you were allowed to go on for a Ph.D. or only for a master’s degree. So, after passing this exam, I started thinking about who to work with. I knew that I wanted to do research in nuclear physics at the cyclotron lab. I can’t remember exactly how I met my thesis advisor. For the most part, our classes were all in the Physics and Astronomy building, a few blocks away from the Cyclotron Lab. The Cyclotron Lab guys had their offices over there, and they only occasionally came to the Physics and Astronomy building, for example, to teach a class. Aaron Galonsky became my thesis advisor. I hadn’t taken any courses from him and didn’t really know him from any contacts during the first two years. I’d heard the name, but that was about it. I don't recall whether we had indicated somewhere our preferences for what kind of research we wanted to do. I don’t remember that at all, but he probably looked at the student records as he was trying to find a new graduate student. I do remember that he wanted to meet with me and talk about working for him, and we just hit it off right away. I liked what he was doing. He was mainly working on nuclear reactions that produced neutrons. He was building neutron detectors of various kinds. Overall, the research on neutron-producing reactions seemed like an interesting niche — something that nobody else was doing there. It involved some large detectors and some different techniques, but because of the size and organization of the Cyclotron Lab, I also got to work with other groups to learn all of the other techniques that people used with charged particles. The lab was like a big family where you would invite other people to help on your experiments because otherwise you couldn’t cover the beam time required. You had to use the cyclotron around the clock, and you might get two, or even more, consecutive days of beam time.
Most of the professors worked pretty much individually, sometimes in pairs, and they might each have one or two students. Typically, they didn’t have a large number of students like you often see today. So, it was very common for them to invite students of other professors to help with their experiments, which was encouraged both ways. Thus, you got to meet everybody pretty quickly and learn a lot of different experimental techniques and co-author papers far outside of your thesis area. That was a great experience. In fact, I wouldn’t have minded if time could have magically extended so that I could have kept working in that lab forever. To me, it was an idyllic world, and it was as much fun as anything I’ve done in my life. With all of these new experiments, people were always finding things that seemed to contradict what was supposedly already known. I was on several papers where we discovered things contradicting conventional wisdom on some subject. Sometimes it was my approach that led to the new insight. A simple example involved an apparent discrepancy between experimental and theoretically predicted cross sections in a reaction being studied by a couple of professors who had invited me to help collect and analyze the data. During the first pass at extracting the cross sections with a standard peak-fitting program, our results didn’t seem to resolve the discrepancy. This program assumed a Gaussian-shaped profile for the peak and allowed a polynomial, usually quadratic, background. So, we’d make a simultaneous best fit to both and subtract the background. However, I realized that our experimental resolution was good enough that the observed line width of this peak was not dominated by the experimental resolution. It was mostly from the intrinsic width of the state, which meant that we should be fitting with a Lorentzian shape rather than a Gaussian. A Lorentzian, of course, has relatively long tails. If you are simultaneously fitting a quadratic background, the shape distinction makes a huge difference in how many of the counts are pulled into the peak rather than into the quadratic-fit background. This improvement in fitting procedure increased the measured cross section very significantly and resolved the previously claimed discrepancy. It was wonderful to feel that you could play a part in something like that, making a contribution which resolved an apparent problem or contributing to something entirely new and surprising.
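As a rough illustration of the fitting distinction described above (a minimal sketch with hypothetical data handling, not the lab’s actual analysis code), one can fit the same peak with either a Gaussian or a Lorentzian, each on top of a quadratic background, and compare how many counts end up attributed to the peak:

import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_quad(x, A, x0, sigma, c0, c1, c2):
    # Gaussian peak on a quadratic background
    return A * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + c0 + c1 * x + c2 * x ** 2

def lorentz_plus_quad(x, A, x0, gamma, c0, c1, c2):
    # Lorentzian peak (long tails) on a quadratic background
    return A * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2) + c0 + c1 * x + c2 * x ** 2

def peak_counts(model, params, x):
    # Counts attributed to the peak = total fit minus the fitted background
    background = params[3] + params[4] * x + params[5] * x ** 2
    return np.sum(model(x, *params) - background)

# With x = channel numbers and counts = the measured spectrum (not shown here):
# p_gauss, _ = curve_fit(gauss_plus_quad, x, counts, p0=[500, 100, 5, 10, 0, 0])
# p_lorentz, _ = curve_fit(lorentz_plus_quad, x, counts, p0=[500, 100, 5, 10, 0, 0])
# Because the Lorentzian tails pull counts out of the fitted background and into
# the peak, peak_counts() is typically larger for the Lorentzian fit, which is
# the effect that raised the extracted cross section.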

Butler:

What did you finally write your dissertation on?

Doering:

My dissertation was on a model of the effective nucleon-nucleon interaction inside nuclei. At that time, we didn’t have a really fundamental theory of nuclear forces, like quantum chromodynamics. So, even free nucleon-nucleon scattering was usually just parameterized. Of course, nucleon-nucleon scattering inside “nuclear matter” would have some effective interaction which would be different than free nucleon-nucleon scattering. And we would usually parameterize that in an even simpler way. Basically, we took the effective force to have four components, all of which had an exponential drop-off. In other words, we used a potential of the form introduced by Hideki Yukawa, in which the range is determined by the mass of the exchanged particles. So, if it were pion exchange, you’d expect a range of about one femtometer. So, we took a Yukawa shape for each of four components: spin flip, isospin flip, both simultaneously, and no flips. Of course, isospin flip corresponds to turning a neutron into a proton or vice-versa. So, if you shoot a proton in and a neutron comes out, then you’ve done an isospin flip between those two particles as well as between the target and final nuclei. We particularly focused on trying to come up with a better value for the isospin flip component, and we could do that by doing precision measurements of cross sections on states where both the initial state and final state were very well known, because, of course, the cross section would depend on these states as well as on the interaction. Now, once you determine the force, you can reverse the process: measure other reactions and improve our knowledge of the structure of “poorly known” states. That was the main value in the whole enterprise. We could focus on the isospin-flip part of the force by studying proton-induced reactions producing neutrons and leaving the final nucleus in the isobaric-analog state of the ground state of the target nucleus, because that state is, by definition, very nearly the same wave function as the target ground state. It’s almost like elastic scattering plus isospin flip. Thus, the analog-state transitions have large cross sections. As target nuclei, we used aluminum-27, zirconium-90, tin-120, and lead-208, which spanned a good range of nuclear mass with relatively large neutron excesses, yielding large cross sections. We had to build detectors that could be operated at sufficient distances from the target to give good neutron energy resolution from time-of-flight measurements. There was a lot of effort in building detectors, ways to automate moving the detectors around to different scattering angles, and shielding the detectors. The main detector that we used for my thesis work was actually inside a U.S. Navy 5-inch gun barrel, which was the immediate heavy shielding, around which were boxes containing bags of borated water to both moderate and then absorb stray neutrons bouncing around the room. All of this was carried on a compressed-air-levitated and motorized cart attached by a big arm to an axle above the target. This allowed remote change of scattering angle — we were taking angular distributions — so that we didn’t have to run all the way out to the experimental vault, lower the door, move the detector, and close the door, which would otherwise have been about a 15-minute procedure just to change angles. We obtained excellent results. It was an experiment that we were well-equipped to do, and we published the results in Physical Review C.
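For reference, the parameterization being described corresponds to a central effective interaction built from Yukawa form factors; in one standard notation (added for clarity, not quoted from the interview),

\[
V(r) = V_0\,\frac{e^{-r/a}}{r/a}\Bigl(W_0 + W_\sigma\,\boldsymbol{\sigma}_1\!\cdot\!\boldsymbol{\sigma}_2 + W_\tau\,\boldsymbol{\tau}_1\!\cdot\!\boldsymbol{\tau}_2 + W_{\sigma\tau}\,(\boldsymbol{\sigma}_1\!\cdot\!\boldsymbol{\sigma}_2)(\boldsymbol{\tau}_1\!\cdot\!\boldsymbol{\tau}_2)\Bigr),
\qquad a = \frac{\hbar}{mc},
\]

with the range set by the mass of the exchanged particle; for one-pion exchange, \(a \approx 197\ \mathrm{MeV\,fm}/140\ \mathrm{MeV} \approx 1.4\ \mathrm{fm}\).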
The most interesting thing that came out of the thesis work, though, was a surprise. As I was looking through the data, I noticed broad peaks at higher excitation energies than the isobaric analogs of the target ground states, especially at very forward angles. That was one of the advantages of neutron-producing reactions; you could go all the way to zero degrees if you arranged to magnetically bend the primary proton beam out of the way after it passed through the target. The neutrons from the reaction would still go straight, so you could actually measure the zero-degree point on the angular distribution, completely forward scattering. These broad peaks in the spectra were also more prominent at higher incident beam energies. They were typically 3-4 MeV wide and always centered a few MeV above the isobaric analog state. Some people looked at the data and said that it was probably just background, but I re-plotted the data in several ways, and was able to convince myself and a few other people, most importantly the lead nuclear theorist at the lab, George Bertsch, that this was a real phenomenon. George was editor of Reviews of Modern Physics for many years; I think that he just stepped down from that role fairly recently. He’s at the University of Washington now. He was the best-known theorist at the MSU Cyclotron Lab when I was there — a very brilliant guy. He thought that the peak looked real and encouraged me to go ahead and extract the angular distributions and the angular-momentum transfer with the distorted-wave Born approximation. George looked at the results and said, “I think this may be the giant Gamow-Teller resonance,” which had never been seen in any but very light nuclei. This resonance is basically like the isobaric analog state where you flip the isospin, but, in this case, you flip both the spin and the isospin, so it’s like an M1 transition plus an isospin flip. The prevailing theory had suggested that such a state would be very broadly spread. If you looked at the total strength of that state, there would be little bits of it in many, many states that would be indistinguishable from the background. But it appeared that we had indeed distinguished it from the background. It had only been observed previously in very light nuclei, with only a few discrete states up to roughly 10 MeV, so that the G-T state was isolated. We wrote a Physical Review Letter that probably wouldn’t have been accepted if George hadn’t been a co-author, since it was very controversial. I didn’t realize how controversial it was, actually, until I presented at the next fall APS meeting, the usual nuclear physics meeting, which that year was in Chicago. One of the senior theorists at Argonne National Lab, John Schiffer, gave me a very hard time about it. I was just a grad student presenting what I thought was a pretty neat discovery, and he was giving me all these reasons why what I was saying couldn’t possibly be true. I was somewhat depressed about it. Nevertheless, it got vindicated a few years later when a higher-energy cyclotron was built. The 200-MeV cyclotron at Indiana University was perfect for basically repeating and extending our experiments at higher energies. At MSU, we could see the Gamow-Teller cross section growing as we raised the proton beam energy from 25 to 45 MeV. At even higher energies (and the typically associated lower resolution), it really stood out.
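In standard notation (added here for reference, not part of the interview), the distinction being drawn is between the Fermi operator, which drives the isobaric-analog transition, and the Gamow-Teller operator, which also flips the spin:

\[
O_{F} = \sum_k \tau_{\pm}(k), \qquad O_{GT} = \sum_k \boldsymbol{\sigma}(k)\,\tau_{\pm}(k),
\]

so a (p,n) reaction exciting the Gamow-Teller resonance transfers a unit of spin as well as a unit of isospin, while the analog-state transition transfers isospin only.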
Everything matched up experimentally, so it was obvious that we had indeed seen the same thing, although not quite as dramatically at the lower energies. That was undoubtedly the real highlight that came out of my thesis data. It was a pretty significant discovery. In fact, one of the guys using the Indiana cyclotron won the APS Tom W. Bonner Prize in Nuclear Physics a few years later for the work there on the Gamow-Teller resonance in heavy nuclei. So, my pioneering observation of it was undoubtedly my most significant contribution in physics.

Butler:

You got your dissertation now.

Doering:

Yes. I graduated in early 1974. Like I said, I was in no hurry to get out. There were a number of interesting on-going experiments, many with old friends. When I received my Ph.D. in ’74, it was just about time for my thesis advisor to take a sabbatical, and he wanted to go over to the Karlsruhe cyclotron lab in Germany. He asked me if I wanted to stay on as a post-doc and basically run the neutron program in his absence. So I did. It wasn’t until he returned, a year later, that we started thinking about me moving on. It was a little disconcerting, since I was still having so much fun at MSU, but, of course, it wasn’t practical to be a post-doc forever! I wound up going to the University of Virginia. They were looking for a new assistant professor in nuclear physics, which Aaron had heard about, so he suggested that I interview with them, and I did. They made an offer, which I accepted, and began the next stage of my career at the University of Virginia. I went there in the summer of 1976, stayed through the spring term of 1980, and then came to Texas Instruments in May of 1980.

Butler:

Why did you make a shift from academia to industry?

Doering:

Well, there were several reasons. The main one was simply that, at that time, the prospects for getting tenure were very low. Many physics departments had lots of tenured faculty who were still fairly young. In fact, I’m amazed when I visit back to either Michigan State or Virginia; quite a few of those guys are still there from the 1970s. I suspect that World War II had a bit of a synchronizing effect on physics faculty growth. Many of the professors that I knew were sort of the “second generation” after the war. For example, some of them had been students of the guys that had worked on the Manhattan Project. Anyway, there weren’t many new tenured positions in the late 1970s. In fact, some friends of mine that I thought had done a pretty good job did not get tenure. For example, one friend had just won the university-wide award as the best teacher, but got rejected for tenure. He was in solid-state physics, and I wasn’t familiar with the details of his research, but it was obvious that the teaching award must have counted for almost nothing toward him getting tenure. So, even back then, the emphasis was almost entirely on the research, and the bar was very high. There wasn’t anything you could do, any other aspect of your faculty accomplishments, that could possibly make up for not being at an extremely high level in your research results. Now my research was going pretty well at Virginia. I had gotten two new Physical Review Letters based on a research program that I started there. At Virginia, the group that I came into was doing lower-energy nuclear reactions than I had at MSU. A lot of it was low-energy heavy-ion collisions between light nuclei, for example, carbon on carbon or carbon on oxygen, mostly as users of the Van de Graaff lab at Oak Ridge. I participated in those experiments and contributed to them, especially on the experimental side. I had the best knowledge of pulse electronics in the group and used it to get better resolution, reduce the noise, etc. I enjoyed that aspect of those experiments. The data analysis didn’t excite me too much. It was all of this “statistical-model business” that I didn’t find very interesting. But I had two other ideas for research. One was based on going back to Michigan State to do some new experiments as a user of the facility. I had a graduate student who had his undergraduate degree from Michigan Tech, and he was glad to go back and visit Michigan. Thus, we started some collaboration with my old friends at Michigan State on neutron reactions, and that led to the thesis work of this graduate student. We had five graduate students between me and the professor whose small group I had joined at Virginia. Then I had this other idea about using an accelerator which had been essentially abandoned. It was probably only going to run for another year or so. It was one that NASA had originally built in Newport News, Virginia, at what was called the Space Radiation Effects Laboratory. The building was later used for Jefferson Laboratory. SREL had a 600-MeV proton synchrocyclotron, along with a big x-ray generator and other equipment. These had been used back in the early ‘60s to investigate the properties of materials that would be going into space. NASA had built it for that purpose, to do radiation damage studies on various kinds of materials to see how space-worthy they might be. After NASA was finished with the facility, it was inherited by a group of universities including UVA.
There was a medium-energy nuclear physics group at Virginia that mainly used the synchrocyclotron to produce meson beams. But then a new machine was built at Los Alamos, LAMPF, the Los Alamos Meson Physics Facility. LAMPF had more beam current and made SREL obsolete for meson physics. However, SREL was still available to be used, just barely, for a few years, even though it had almost no staff. They could round up operators to come in and run it, and most of the necessary equipment was still there, but it was relatively difficult to use. However, I could see that we could do some experiments there in an emerging area of higher-energy heavy-ion reactions. It was just barely a tiptoe into that regime, because the heaviest beam that they had ever accelerated was helium nuclei, which could be brought to 720 MeV. I envisioned that we could bombard a wide mass range of target nuclei and measure the angular and energy distributions of the light charged fragments which would be produced. These distributions could be compared against some models that theorists were just starting to develop at that time about what higher energy nucleus-nucleus reactions might produce. These were some very simple, essentially classical, models. One of them was called the Thermodynamic Model and another was called the Fireball Model. It was easy to write my own programs to calculate the angular distributions predicted by these models. I took two or three of the graduate students with me to SREL because we were going to be running around the clock for a few days at a time to get this data, and it would be a great and different experience for them compared to Oak Ridge, where the experiments were basically set up and ready to run when you arrived. I also enlisted a post-doc from another group, who was the only guy who knew how to take the data at SREL on its IBM 360, which had to be booted up from tape. The data acquisition programs for it were written in the assembler language for that machine. But, fortunately, we had this post-doc from Princeton who had experience using an IBM 360 for data acquisition. So, we assembled a suitable team, ran these experiments, and got two Physical Review Letters out of the results because there were essentially no other data of this type from anywhere else yet. Thus, we were fortunate to have the opportunity to get in on the ground floor of nucleus-nucleus reactions at relatively high energies and compare our data to the theories being developed then.
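As a rough illustration of what such classical models look like (a minimal sketch with invented parameter values, not the actual Thermodynamic or Fireball Model calculations), a single nonrelativistic “moving thermal source” predicts light-fragment spectra proportional to sqrt(E) * exp(-E'/T), where E' is the fragment energy evaluated in the frame of a source moving along the beam axis:

import numpy as np

def moving_source_spectrum(E_lab, theta_lab, T, E_source_per_A, A_frag, norm=1.0):
    # Isotropic Maxwell-Boltzmann emission (temperature T, in MeV) from a source
    # moving along the beam axis; E_source_per_A is the kinetic energy per nucleon
    # (MeV) corresponding to the source velocity, and A_frag is the fragment mass
    # number. Coulomb-barrier shifts and relativistic effects are ignored here.
    E_s = A_frag * E_source_per_A
    E_prime = E_lab + E_s - 2.0 * np.sqrt(E_lab * E_s) * np.cos(theta_lab)
    return norm * np.sqrt(E_lab) * np.exp(-E_prime / T)

# Example: proton spectra (A_frag = 1) at 30 and 90 degrees for made-up values
E = np.linspace(1.0, 100.0, 200)   # lab energies in MeV
spectrum_30 = moving_source_spectrum(E, np.radians(30.0), T=8.0, E_source_per_A=10.0, A_frag=1)
spectrum_90 = moving_source_spectrum(E, np.radians(90.0), T=8.0, E_source_per_A=10.0, A_frag=1)
# The forward-angle spectrum comes out harder and larger, which is the kind of
# qualitative signature these models were compared against.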

Butler:

Talk a little bit about the transition to industry.

Doering:

My research was going fairly well. However, I don't know if my work in nuclear physics would have been enough to earn tenure at the first opportunity — after three years at Virginia. But, there was another issue, which was totally independent. In those days, the late ‘70s, we were into a period of double-digit inflation and economic turmoil due to an oil-supply crisis. The State of Virginia, by its own constitution, had to have a balanced budget. UVA, being a state school, was getting increases of only about three percent per year for everything, not just our salaries, but for various facilities expenses, teaching-related expenses, etc., despite cost-of-living inflation of at least 11-12%. I remember that all of us, especially the young guys who weren’t being paid all that well to begin with, were thinking, “Wow, how many more years of this can we take? What’s going to happen here?” And it looked like there weren’t any sure answers. Nevertheless, most of the assistant professors that I knew just went ahead through the tenure application process, didn’t get it, and then decided to leave. If it was the end of your second three-year period, you had one more year after you were turned down for tenure to get a job somewhere else. I watched a couple of my best friends go through that, and I decided, after my first three years, not to even apply for tenure. My decision was also influenced by some other things, like “department politics,” and I don’t want to get into all that. So, I just decided that I wouldn’t wait for tenure. I was in the middle of the third year, and I told the department chair, “I know I would normally be up for tenure at the end of this year, but don’t bother to go through the process. I’ve already decided that I’m going to go into industry. But I will stay through the next year while I search for a job.” By then, I was also beginning to hear from some of my colleagues who had already gone to industry. They were finding it to be not just a lot more financially rewarding, but also very interesting. In most cases, they felt that they were working in big labs with large budgets on interesting problems, even though it was usually a shift of gears in research topics. For example, if somebody had been in superconductivity, they might be working on some other aspect of solid-state physics now. But it was still something that they found interesting, even if it wasn’t a direct follow-on to the research that they had been doing. This was sounding pretty good, and it helped me in making the final decision. I was a little bit surprised when the department made me another offer. The chairman talked to me one day and said that they would like me to consider a special position, some sort of “research professorship” that actually sounded more administrative than research-oriented, and it was not a tenured appointment. They were going to pay about 50% more than what I was making, which wasn’t too shabby, but still not what I could get in industry, because academic salaries for starting positions were so low at that time and inflation had further widened the salary gap with industry. The proposed job would partly involve being a liaison between the professors and the funding agencies, for example, NSF and DOE, in part, trying to identify what were the hot research areas and helping coordinate proposals for the department’s research initiatives. I wasn’t really so interested in that. I thought I’d rather be doing full-time research.
As I was starting to get my resume updated, I heard back from one of my friends who had recently left UVA and gone to a Texas Instruments laboratory in Houston. He had just been there for a few months, not long at all, and he was saying that he was really enjoying it. He was telling me all the good things about it and how it was in a technical area that was just full of opportunities. The semiconductor R&D was advancing very rapidly, and there was still a shortage of people fundamentally educated to do that kind of technical work. At that time, electrical engineering schools weren’t teaching the solid-state physics that they do today, which supports the semiconductor industry. Physicists, at least ones who had specialized in solid state, had a much better-suited background. My friend said, “I’m working in a small group that I’m probably quickly going to lead because I have by far the best background in tunneling,” and this research had to do with tunneling in non-volatile memory. My friend had done research on tunneling in Josephson junctions, which was close enough. He was seeing opportunities to move up in the ranks of the R&D leadership and to make technical contributions, including patents that would be used in products that have an impact on the world. This all sounded exciting. So, I decided to apply to that same laboratory, which TI had recently built in Houston. That lab was created as MOS was getting very important. Up to that time, integrated circuits and even discrete transistors had mostly been bipolar devices, as had been invented in germanium form at Bell Labs in the late ‘40s and in silicon form at TI in the early ‘50s. MOS was this new technology that looked like it could revolutionize electronics once again, and TI had set up a separate lab in Houston to focus on MOS R&D.

Butler:

Was that a part of their defense business?

Doering:

No, it was not. It was part of TI’s semiconductor business. There were separate labs that were supporting defense products and working on government R&D contracts. However, our lab did sometimes work closely with a lab in Dallas that was doing a lot of defense-related work. In fact, that lab in Dallas had a contract that was a piece of a very large overall government program called VHSIC, Very High Speed Integrated Circuits. TI had a VHSIC contract that required some aggressive feature scaling of MOS technology for that time, to make 72-Kbit static RAM memory circuits. This was one area in which there was some synergy between the efforts in both labs. The semiconductor business also had its original lab in Dallas, called the Semiconductor Research and Development Lab, which was still focusing mainly on the continuing evolution of bipolar technology, and it had also started doing some MOS development. Of course, the semiconductor lab supporting the defense business was also working with compound semiconductors. But, even with these two labs in Dallas, TI decided that to take advantage of this new emerging MOS market, they would create a whole new site, in Houston, with a couple of wafer fabrication facilities, mostly for MOS, and an MOS laboratory as well. This site became the headquarters of TI’s MOS Memory business unit. However, about three years after I joined it, that lab got consolidated into an even bigger MOS lab in Dallas. Of course, labs frequently evolve in industry. So, that’s how I came to Dallas in 1983.

Butler:

That’s a significant shift from the sort of research that you were doing.

Doering:

Yes, it really was. It was somewhat amusing, because the tradition at TI was that when you had your interview, you gave a seminar, and I just gave a seminar on the neutron-related research that I had done, including the Gamow-Teller resonance, because I had the best slides on that. We didn’t take many pictures at SREL or I might have shown that work as well. Anyway, I gave a talk on nuclear charge-exchange reactions, which was obviously not very related to what was going on at TI. However, by then, the semiconductor industry had started using ion implantation as a means of doping, and ion implanters are, from my point of view as a nuclear physicist, just low-energy, high-current accelerators. That was still pretty new technology for the semiconductor industry, but it’s only one technology out of many that are required to build an integrated circuit. Another connection between my nuclear physics experience and semiconductor R&D was the devices themselves. I understood semiconductor diodes very well because we used large diodes as charged-particle detectors. We applied huge reverse biases, sometimes as much as thousands of volts, to get depletion depths larger than the range of the particles. So, I understood diodes, and it turned out that you didn’t need to go too much beyond that back then to get to the forefront of MOS transistors. In fact, there was a short book by physicist Andy Grove that was a very good introduction to MOS, and it was given out to new research staff in our Houston lab. So, it wasn’t very hard to get up to speed on MOS technology then because it was still pretty new. If you already understand the diode and its depletion region, you just needed to appreciate that the MOS “gate” electrode can capacitively induce an inversion layer as well as another depletion layer in a region between two diodes called the “source” and “drain” of the MOS transistor. You don’t have to go deeply into the band structure or how you calculate band structures or any of that. For simple purposes, you take the band structure as a given, and then use a semi-classical model of electron transport and hole transport to calculate the current-voltage characteristics. If you had any exposure at all to solid-state physics, you had enough background back then to soon be working at the forefront of MOS transistors. When I first got to TI, they gave me a copy of Grove’s MOS book. It took me less than a week to read, and then I knew about as much about MOS as almost anybody! — except for more detail on the overall fabrication technology, which the book didn’t cover in enough breadth. I was learning fabrication techniques mainly from two technicians who were assigned to me. Their primary job was to run material through the lab, building experimental devices and circuits for me to test and evaluate. It was a tradition in that lab, although the lab wasn’t very old, that new engineers should learn how to run all the process equipment themselves, because we were ultimately responsible for inventing the process sequences that would advance the state of the art in new devices and circuits. So, my technicians taught me how to run the equipment, which took roughly another couple of weeks. Overall, it was a pretty quick transition. Today the whole field has gotten so much more complicated that individual engineers and scientists tend to specialize in one of the many subfields that have grown so much in complexity over the past 30 years. 
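As a minimal sketch of the simple, square-law level of MOS transistor modeling described above, the following Python fragment computes the textbook long-channel current-voltage characteristic; the threshold voltage, transconductance, and geometry values are placeholder assumptions for illustration, not TI device parameters.

    # Minimal sketch of the textbook long-channel ("square-law") MOS transistor model.
    # All parameter values are illustrative placeholders, not TI device data.

    def mos_drain_current(vgs, vds, vt=0.7, k_prime=100e-6, w_over_l=10.0):
        """Drain current (A) from the semi-classical square-law model.
        vgs, vds : gate-source and drain-source voltages (V)
        vt       : threshold voltage (V)
        k_prime  : process transconductance, mobility times oxide capacitance (A/V^2)
        w_over_l : transistor width-to-length ratio
        """
        vov = vgs - vt                    # gate overdrive
        if vov <= 0:
            return 0.0                    # cutoff (subthreshold leakage ignored)
        if vds < vov:
            # triode region: channel not yet pinched off at the drain
            return k_prime * w_over_l * (vov * vds - 0.5 * vds ** 2)
        # saturation region: current set by the pinched-off channel
        return 0.5 * k_prime * w_over_l * vov ** 2

    # Example: sweep the drain voltage at a fixed gate voltage
    for vds in (0.5, 1.0, 2.0, 3.0):
        print(f"Vds = {vds:.1f} V -> Id = {mos_drain_current(2.0, vds) * 1e3:.3f} mA")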
Even back then, we were generally called engineers, although many of us were physicists and chemists by training. There were a few engineers and, of course, every year there would be relatively more engineers as they started getting the relevant training for developing semiconductor fabrication processes. Most of us physicists worked on developing the overall process flow, so we were called “process integration” engineers. Chemists tended to work on developing individual process steps, and we called them “unit process,” or just “process” engineers. For example, a chemical etch for a particular material is a unit process, which might be engineered to increase the rate and/or selectivity to other materials. Process integration engineers design the devices: transistors, diodes, capacitors, wires, etc., from which the circuit designers design the circuits. For example, for an MOS transistor, we need to select the materials and the dimensions of the features, such as the gate length and the thickness of the gate dielectric, to get the desired current-voltage characteristics. Other choices involve the amount of doping used in various regions of the silicon in the body of the transistor: primarily in the channel region and the source and drain regions. Eventually, the MOS transistors became further optimized via additional doping structures called “pocket implants” and “lightly-doped drains.” We worked on designing the devices to have good performance, yield, reliability, and low-cost manufacturing. For years, the simplest way to get better performance in terms of more speed as well as lower power, which was also desired, was just to shrink everything, feature dimensions along with applied voltages. Part of the work of the process integration engineer in doing the device design was working with all those unit process guys on how we could make the gate dielectric thinner without the yield and reliability getting bad. What are the things we can do? Can we introduce new materials? For example, not just use pure silicon dioxide, but maybe put a little nitrogen in there, because the nitrogen would tend to raise the dielectric constant and give more capacitance per unit area even though the film wasn’t physically thinner, since making it thinner might have hurt yield or reliability. So, you would be working with the unit process engineers individually on their processes to figure out how to improve the device parameters, and, on the other side, working with the circuit designers to figure out what reasonable targets for the devices would be. How good do they need to be? Of course, this is all part of designing the whole process flow. In other words, you’ve got all these unit processes, but to build the device you’ve got to decide their sequence and how to tune each one. For example, for a particular implant, do you want to do it before or after some particular oxidation? Maybe you want to put in a sacrificial layer to help make that implant shallower, if you can’t turn down the energy on the implanter enough and still get a decent current. Then you might remove that layer before the next step if it no longer serves a purpose. There are all these tricks you can imagine in how you can design the whole process flow, weaving all of those potential steps together in order to build the actual structure that you want. This was all focused on coming up with the next generation of technology.
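A quick back-of-the-envelope version of the nitrided-gate-dielectric point above: in the parallel-plate approximation, capacitance per unit area is permittivity divided by physical thickness, so raising the dielectric constant with nitrogen buys capacitance without thinning the film. The dielectric constants and thickness in this sketch are assumed, textbook-style values, not a TI process recipe.

    # Back-of-the-envelope capacitance per unit area for a gate dielectric.
    # Dielectric constants and thickness are assumed, typical-textbook values.

    EPS0 = 8.854e-12                        # vacuum permittivity, F/m

    def cap_per_area(k, t_nm):
        """Parallel-plate capacitance per unit area (F/m^2) for relative
        permittivity k and physical thickness t_nm in nanometers."""
        return EPS0 * k / (t_nm * 1e-9)

    t_nm = 10.0                             # same physical thickness for both films, nm
    c_oxide = cap_per_area(3.9, t_nm)       # pure SiO2, k ~ 3.9
    c_oxynitride = cap_per_area(5.0, t_nm)  # lightly nitrided oxide, k assumed ~ 5

    # 1 F/m^2 = 1000 fF/um^2
    print(f"SiO2:       {c_oxide * 1e3:.2f} fF/um^2")
    print(f"Oxynitride: {c_oxynitride * 1e3:.2f} fF/um^2 "
          f"({c_oxynitride / c_oxide:.2f}x at the same thickness)")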
The process integration engineer would first talk to circuit designers to get some ideas of what device densities and performances were needed, then he would typically design, using what we call a layout tool, some appropriate test structures — for example, transistors in an array of different sizes that can all be characterized to see how they actually perform. In doing that, you would be guided partly by simulations. You would run some modeling programs and try to theoretically calculate how such transistors would perform with different specifications of size, doping, dielectric thickness, etc. So, you’d design these test structures concurrently with designing the process flow. Then, you would have the masks made that would specify the lateral dimensions for the lithography processes to build the test chip structures with your process flow. Next came directing your technicians to run those batches of test material. There would always be some experimental alternatives in the flow that we called “splits.” For example, various wafers in each run might have different doping levels applied at particular implants. So, the results weren’t all determined by the differences in the lithography patterns. As the wafers were being processed, you would write some test programs. We had mostly automated testers about that time. Just as in my nuclear physics experiments, minicomputers were coming into use then for semiconductor device testing in industry. They controlled an automated test station to step probes across the contacts to the test structures on the test chips, apply desired voltages, and collect the data for comparison to theory and the target specifications. What I’ve just described would constitute one iteration in the development process. If the results were good enough, which they usually weren’t, you might stop there. Otherwise, you’d run another series of experiments and try to converge with the circuit designers on the final specifications for the next generation of semiconductor technology, which, overall, we still call a “process node.”
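A minimal sketch, using entirely hypothetical names and numbers, of the kind of split-lot comparison described above, in which wafers receive different channel-implant doses and the threshold voltages measured on the test structures are compared against the circuit designers' target:

    # Hypothetical split-lot comparison: different channel (Vt-adjust) implant doses
    # on different wafers, measured threshold voltages from several test-structure
    # sites per split, compared against a target.  All names and numbers are made up.

    split_results = {                     # dose (cm^-2) -> measured Vt values (V)
        1.0e12: [0.58, 0.60, 0.57],
        2.0e12: [0.69, 0.71, 0.70],
        4.0e12: [0.82, 0.80, 0.84],
    }

    target_vt = 0.70                      # threshold-voltage target from the designers (V)
    tolerance = 0.05                      # acceptable deviation (V)

    for dose, vts in sorted(split_results.items()):
        mean_vt = sum(vts) / len(vts)
        verdict = "meets" if abs(mean_vt - target_vt) <= tolerance else "misses"
        print(f"dose {dose:.1e} cm^-2: mean Vt = {mean_vt:.2f} V, "
              f"{verdict} target {target_vt:.2f} +/- {tolerance:.2f} V")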

Butler:

I’m curious about the nomenclature. You indicated that you were all called engineers. I remember when I was an undergraduate physics major, being deemed an engineer when you were a physicist was sort of like being called the janitor when you were the building engineer.

Doering:

Depending on the context, most people don’t regard it as a big deal anymore. At TI, we normally call almost all technical employees “engineers,” regardless of what their formal educational background is, because it’s just a convenience. For example, it’s partly a matter of what our human resources managers regard as efficient categories in the HR database.

Butler:

It’s very similar to Raytheon, in that at Raytheon, when I talked with the physicists, they felt that they belonged when they were finally deemed an engineer.

Doering:

We did once have a little bit of a barrier like that. Most of us just quickly got used to the idea of being called engineers. And once in a while, just as a joke, something would come up and one of us might say something like, “Well, you know, I wouldn’t really word it that way. But that’s because I’m a physicist, so I have a little different perspective.” [Chuckles] Occasionally, we have that kind of joking. But for the most part, it wasn’t any big deal. With the exception that, back when I was still in Houston, I don't know if it was ’81 or ’82, an “engineer title” controversy developed, and I can’t remember whether it was just statewide or national, but it was at least within Texas. It had to do with being a “licensed professional engineer.” As I understood it, there were two ways you could be one. You could be an engineering graduate of an ABET-accredited engineering school. The other way was to pass some exam, and I knew people who did that. A big part of the exam seemed to be about construction, for example, knowledge of building codes, etc. It seemed to me that even an electrical or mechanical engineer would probably have to study to pass that exam because there was so much “trade-type information” in it. But anyway, it got to be kind of a big deal for us because many people were putting “engineer” on their business card, since that was what we were generally called. Also, we didn’t have separate job classifications in our HR system for physicists, chemists, and other scientists. Furthermore, it was common practice in the industry. So, a distinction didn’t really seem that important to us. But to the “licensed professional engineer” organization, it somehow got highlighted and got to be a big deal to them. So, they were putting pressure on companies and government to not allow people to call themselves “engineers” unless they were licensed professional engineers. The main thing it affected was the title you could put on your business card. So, we decided to just pick another title. Generally, most of our R&D organizations picked “member of technical staff” as the default entry-level title. TI already had a technical ladder, but it had started at a very high level. There were only two levels to begin with, back in the 1970s, and they were equivalent to TI Fellow and Senior Fellow today. Later, lower levels were added to the technical ladder, the next being Senior Member of Technical Staff. Thus, “member of technical staff” seemed like a reasonable starting title for our laboratory staff that had not yet been elected to an official level on the technical ladder. From my point of view, it seemed amusing that we needed to select a business card title that labeled us differently outside the company than the informal title of “engineer” that we had been using within the company.

Butler:

And the technical ladder would be used for the scientists and engineers who didn’t want to shift over into the administrative side?

Doering:

That’s right. However, there’s a crossover in the sense that, at each level on the technical ladder, you can also have up to some level of management responsibility, for example, how many people report to you. And the rules on this are somewhat flexible within each business unit, so there’s not a precise TI-wide definition. But, basically, the way it works is if you’re higher on the technical ladder, you can also be higher on the management ladder, up to a point. For example, if you become a vice president, you can’t hold any rank on the technical ladder. We also have a technical ladder “emeritus” status. For example, if I were to become a vice president at TI, then I would automatically become a Senior Fellow Emeritus and not be an “active” Senior Fellow anymore. The two main consequences would be that I would no longer count against the very small percentage of technical staff limit for Senior Fellow and would no longer vote in the below-Fellow levels of the technical ladder election process. In most business groups, we strive to keep the technical and management ladders fairly distinct. In fact, one of the things that we continue to discuss every year in our improvement process for the technical ladder is what those distinctions and guidelines should be. Because, if you get too many people that are too high up on both ladders, then people who aren’t being elected on the technical ladder feel like the percent of engineering population quotas at each level are too severe. In other words, they feel that people they regard as mainly managers are unfairly holding relatively scarce tech ladder positions. Thus, we continue to work on making the technical ladder as useful as possible. In large measure it is about recognition and further challenging people to achieve their potential. There are rungs on the technical ladder to meet different expectations. For example, people on those rungs may get opportunities to do something other than what they might have done otherwise. So, we view it as raising the bar. These are good people, we’ve recognized them as superior technical talent, and we want that recognition to give them more visibility for greater technical challenges as well as more opportunity to pursue a management career if that becomes their objective. Anyway, we want the technical ladder to be a mechanism which has a positive impact on morale and is useful to management in identifying our technical people that have achieved breakthroughs. We have a good process for it; you don’t just vote on a list of names. Candidates have to fill out a nomination form, it’s broken into various categories, and you state what your accomplishments are in these different areas, and the voters take that into account by scoring three specific categories of achievement. This process really helps with visibility for a nominee across a voting population broader than his immediate management chain. It’s also often helpful to managers looking to move technical talent into new areas of opportunity for a candidate’s skills.

Butler:

I think we’ll take a little break here.

Doering:

Ok. [Break]

Butler:

I said when we came back we would ask you to talk about your career here, and how research has changed at Texas Instruments since you arrived.

Doering:

Of course, I’ve already touched on some of it. I’ll start by elaborating on how our R&D staff back in the 1980s generally had to cover a broader scope than they do today because many aspects of development were less complex. A good example of how we used to do it was the 256K DRAM project — “the next generation” of Dynamic RAM memory at the time. Amazingly enough by today’s expectations, I was assigned to be the lead process integration engineer on that project after less than a year at TI. I was totally responsible for all those aspects of the R&D that I mentioned earlier: essentially the design of the devices and the manufacturing process that would make the circuit. At that time, there was only one other engineer on the whole project, a circuit designer named Don Redwine. Fortunately, Don had been at TI a lot longer than I had. In fact, by the time he retired in 2007, he had almost set the record for longevity at TI. His official start at TI, as a co-op student from Texas A&M, was in 1959. Thus, his total career at TI spanned 48 years, counting a couple of years as a co-op student. So, even around the end of 1980, when I started working with him on the 256K DRAM project, he had already been at TI for over 20 years. At the beginning of the project, Don and I, along with our technicians, were essentially the whole 256K DRAM team. He was the circuit designer. And it was probably more important that he was the more senior person because the circuit designer had to meet the product specs, for example, the chip form factor, readout modes, etc. required to be a competitive DRAM product. And DRAMs really set the pace back then. They were what led process development for the industry. They were what we call the “technology driver.” So, all the more amazing that I was the only process-integration engineer working on this initially, but not so surprising that a guy with 20 years of experience was in charge on the circuit-design side. The product wasn’t going to come out for another two or three years, but when we started the R&D, it was just the two of us as engineers. We each had a couple of technicians that helped us. His technicians helped with the chip layout, and mine, of course, processed the experimental chips. But between the two of us, we knew how to design and make the entire integrated circuit, from concept and “sand” to packaged, tested, qualified product. Today, no two people span the gamut of things you would need to know to do that, and especially so on the process side. It’s much more complex now. There are now much larger teams from day one. Of course, the 256K DRAM team also grew as we progressed from the initial R&D stage, but for at least six months or so it was just the two of us. Don and I would typically have a meeting in the afternoon to discuss the latest results. Progress often resulted from his ongoing circuit-design analysis. For example, one day he said, “It sure would be good if we could lower the capacitance on the word lines a little bit, because it’s presently limiting the speed with which we can read and write the bits compared to the smaller arrays that we had in the 64-Kbit DRAM.” I replied that I would give it some thought. Then I had this idea about depositing some extra dielectric before we put down the second layer of polysilicon, which was the word line, and then etch those two layers together, as a stack. Depositing part of that interlayer dielectric, rather than growing all of it, allowed it to be much thicker, resulting in less capacitance.
That was a new idea; nobody had done that before, at least as far as I knew. Certainly nobody at TI had done that. It was the result of a simple interaction that you could easily have between two engineers. We could make the decision just between us on this new “process of record,” and from that instant, we would go forward with that plan and run the experiments to see if the results indeed matched up with our expectations. As I mentioned, the teams are significantly larger today from the beginning, both on the design side as well as the process side. Today, there are immediately multiple engineers focused on sub-fields such as reliability, testing, modeling, design rules, and new types of processes, devices, and interconnect, which is the on-chip wiring. In the early ‘80s, we could almost ignore the metal wire resistance. Today, it is a major limitation, requiring R&D comparable to what we devote to transistors. This extra complexity is mostly a result of continually shrinking the on-chip feature sizes, which has been the main enabler of continuing to follow what’s popularly called “Moore’s Law” for integrated circuits. The other thing that’s different is we definitely have less “old-style, blue-sky, central lab” research inside semiconductor companies these days. In other words, almost everybody who has been in this business for more than a couple of decades is doing significantly less very-long-range, “academic-type,” research in-house than they used to do. You see that reflected in several different ways. One is that few semiconductor companies are taking government contracts for research anymore. Earlier in the interview, I mentioned that we had this big VHSIC contract that got the lab in Houston working together a bit with the lab in Dallas. But now, we hardly even think about the possibility of pursuing government research contracts for in-house efforts. Part of it, of course, is due to the divestiture of our defense business. [That happened in?] That happened in 1997, so it’s been quite a while now.[1]

Butler:

You sold it to Raytheon, right?

Doering:

We sold it to Raytheon, although the government would not allow some parts of the TI defense business to be part of the sale, since they thought it would give Raytheon a monopoly in those areas. This concern was mainly with our compound semiconductor-based products: gallium-arsenide devices for radar and mercury-cadmium-telluride and other II-VI materials that were used for infrared detectors. So, those had to be divested separately. The infrared business wound up in a relatively small company called DRS, which is still leasing one of the TI buildings on this campus. The gallium-arsenide business went to a company called TriQuint, which is a pretty good size now, due to the emergence of the cell phone. TriQuint was lucky to be in a position to catch the cell phone wave, specifically, the need for a relatively high-power, high-frequency amplifier to drive the antenna — something that was more efficient to do in gallium arsenide than silicon. That became a big niche business for them. So those two businesses got spun off separately as we were selling the defense business.

Butler:

What was the reason why TI decided to divest its defense operations?

Doering:

The immediate reason was very slow sales of our products due to a low rate of replenishment after the first Gulf War. TI made several types of radar, missiles, bombs, sensors, and guidance systems that had been heavily used during the war. For example, TI supplied High-Speed Anti-Radiation Missiles (HARM), to take out radar, and several types of bunker-buster bombs. We had assumed that there would be a much more rapid replenishment of the arsenal in those areas than actually took place. So, our defense business was dropping off significantly. Historically, the defense business was less volatile than the semiconductor business, which tended to go through big boom and bust cycles. So, it was nice to also have a more steady business like the defense business had been. But it began to look like the defense business was not going to be that kind of business anymore. I think that was the biggest single reason. However, there were other reasons. For example, there was an overall vision from our new CEO Tom Engibous that we would be better served by essentially focusing just on semiconductors, so it wasn’t just defense that we spun off. We also sold our computer/printer division, which, at that time, was fourth in the world in market share for laptop computers. Then, we sold what had actually been the parent company of TI, Geophysical Service, Incorporated, which was started in 1930 by several physicists and geophysicists and was based on using reflection seismography for oil exploration. During World War II, this technology was adapted to make sonar, which started the company’s defense business and led TI toward being mainly an electronics company after the war. Thus, the name Texas Instruments was adopted in 1951, with GSI becoming a subsidiary of the now larger company. The most recent large divestiture, in 2006, involved TI’s sensors and controls business. With that sale, TI finally reached the point of becoming more than 90% a semiconductor company. And its semiconductor revenue today is much larger than when I joined the company in 1980, even though TI had the largest revenue of any semiconductor company in the world at that time and even though TI had also sold what once had been the largest piece of its semiconductor business — the commodity memory business in which I had first worked at TI. The memory business was the biggest contributor to boom-bust, and that sale further addressed Tom’s vision that we could have both good growth and profitability if we focused most of the company on the non-commodity, diverse, high-growth, semiconductor markets, for example, in signal processing, both analog and digital. TI has been successful in this pursuit, as reflected in increased earnings per share and stock price compared to the first half of my career at TI. In part, this strategy reflected that the days of the “conglomerates” had come to an end. Companies were now getting rewarded for best serving growing markets rather than just “being huge.” Thus, we moved into a new era of becoming almost entirely a semiconductor company, with the only remaining end-equipment business being the calculator business. We have held on to that one as the hardware basis of a more comprehensive educational products business.

Butler:

About the same time, you began cutting back your research labs as well.

Doering:

Yes, that’s right, primarily with respect to the old Central Research Laboratories, which had been around for a very long time. By the late ‘80s, they were mostly doing research that was aimed at the defense business. A lot of it was being supported by federal R&D contracts from DARPA, etc. By then, essentially all of the silicon R&D had been moved into the semiconductor division labs. So, the Central Research Labs research on semiconductors migrated almost exclusively to compound semiconductors, which supported the defense business. Once in a while, CRL would do something that could have led to a breakthrough on the commercial side. For example, their gallium arsenide R&D could have been kept inside TI and commercially developed for cell phones. But that’s one of those things easily lost in the shuffle. Of course, TI pioneered many silicon chips for cell phones and rode them into a leading position in that business. And, of course, this niche for gallium arsenide may be taken over by continually improving silicon-based devices or some other contenders that have come onto the playing field in the research for RF applications. Anyway, it’s one of those interesting topics for speculation. For many years at TI, we’d joke about gallium arsenide and some other research topics as being “the technology of the future, and always would be!” During my first decade at TI, George Heilmeier was our CTO. He had been the director of DARPA in the ‘70s, and was there during some of the really big developments, like stealth technology. When he came to TI, he had a very big vision for research, and he managed our Central Research Labs very much like Bell Labs or IBM’s Watson Labs. TI CRL worked on many of the same subjects as university physics research, for example, high-temperature superconductivity, artificial intelligence, quantum dots, atomic-level simulation via non-equilibrium Green’s function techniques, etc. It was an exciting era for large high-tech companies involved in scientific research.

Butler:

When did he come?

Doering:

He preceded me at TI by a few years. He came here from DARPA in 1977. He’s a very well-known guy, not just for being director of DARPA for, arguably, a number of their most productive years, but, principally, for his role in liquid crystal research. When he was at RCA Sarnoff Labs, he discovered several new electro-optic effects and invented the first forms of liquid crystal display. So, he developed a distinguished scientific reputation early in his career. At TI, he was the type of leader who could set a vision for CRL that fit into the era of operating in much the same way as DARPA, Sarnoff Labs, Bell Labs, IBM’s Watson Lab, etc. Anyway, one reason that George came to mind was the old joke about gallium arsenide once being an example of a perpetual technology of the future. He had pushed some gallium arsenide programs at TI in the ‘80s, and, at that time, they typically struggled to demonstrate commercial practicality. Of course, he was always getting some flack about that from various quarters. So, one day, he joked something to the effect: “Well, we’re not going to call it gallium arsenide anymore. We’re just going to call it the G word.” He was tired of hearing doubts about gallium arsenide and its future, I guess [chuckles]. He would be a great interview, but his degrees were in electrical engineering rather than physics.[2] He lives here in Dallas. He is very outspoken, and has a super-broad perspective from having held so many senior positions in managing industrial and government research. He left TI in 1991 to become CEO of Bellcore, which provided central research for the Baby Bells after the breakup of the original AT&T. Since his retirement from Bellcore, he is still a frequent advisor to government as well as to corporations and is probably still on several corporate boards. Anyway, I have digressed a long way as one thing reminds me of another. So, back to “what things are different?” — which was your original question. In summary, there are two things that I see as most different in R&D today. The first is how the integrated-circuit technology has become so much more complex and, therefore, more specialized at the individual R&D staff level. The second is the difference in the proportion of work that we do on very high-risk, high-reward, long-range research internally. Maybe in a minute we’ll talk about consortia and how we handle this in a different way now, managing it more externally than internally. Those are the two main differences that I see.

Butler:

One would think that one would be counterproductive to the other, though. That is, if you have increased specialization in your research team, that would raise the importance of having a centralized research team to pull everything together.

Doering:

We actually do have large integrated teams working together to cover all specializations within the next couple of generations of technology development. However, this is not what most people consider to be “research with a big R,” like we used to do in the old central corporate labs. The new R&D organizations are “centralized,” but they’re mostly doing “big D” and we don’t usually call them “labs.” Today, we tend to reserve the “lab” label for smaller R&D operations. Today at TI, we have actually divided the central semiconductor process and device R&D into two “sister” organizations, one for analog devices and one for digital. That is currently more efficient for us, partly because of the way the whole supply chain works. Today, the value-added for digital logic products is mostly at the design level. The process tends to be pretty “standard,” mostly limited by what is available from the process equipment and materials suppliers, and, even in integrated form, from the leading foundries. In contrast, analog processes and devices have more diversity, even with older equipment, and there is still a lot of room for various kinds of tweaks and offshoots and nonstandard forays in different directions. It’s also more specialized into diverse products with niche processes supporting them, rather than one or two standard processes that support massive volume for a smaller number of product types. For either analog or digital process development, we need hundreds of engineers spanning the whole range of diversity and complexity in each domain. So, we do have a central process and device R&D organization; we just don’t call it “Central Research” anymore. Many of the types of things that we used to pursue in CRL are now done more collaboratively, even with some of our competitors. Thus, today, we define what is considered to be “pre-competitive research” that, to the extent it’s successful, will benefit all collaborators in a “research consortium.” Basically, we don’t mind sharing the research output among the consortium members and have it be a level playing field at that point. Of course, we are each free to add our own value in different ways as we individually move along the road from those concepts to designing and building actual products. So, the competitive vs. pre-competitive distinction is very important in the way we conduct research today. One way of summarizing might be to say that what we do in-house, i.e., competitively, might more properly be called Central Development, and maybe you could say “and research” in parenthesis with a “small r.” [Chuckles] Whereas, before we had a large organization that was really a central Research organization, which “handed off to development” if one of its projects got to that point. Hopefully that explains why it’s not an antithesis. If you broaden the notion of “central research” to “central development,” then it’s basically the same. However, I would be remiss not to mention that we still actually continue to do some relatively high-risk research internally, mostly at the level of systems and circuits exploration. In the process and device space, high-risk research is mainly on the analog side. Again, for digital, we no longer see either as much opportunity or need to try to gain a lot of advantage at the process level. We think that everybody in the industry is going to be fairly close on the digital process. It’s partly because of the advent of the large foundries as well as the world-wide equipment and materials suppliers. 
Of course, the R&D consortia also focus on the digital Moore’s-Law challenges, further leveling the digital process playing field.

Butler:

How do you form these consortia?

Doering:

Well, most of them usually start via some involvement from a trade association and/or government agency. In the case of the U.S. semiconductor industry, it was definitely a trade association initiative in most cases. Typically, we later sought partnerships with government to further support some of the research activity. Elsewhere, for example, in Europe and Japan, they’ve clearly had more government industrial policy than in the U.S. In other words, they select commercial areas to emphasize, with government initiating programs and putting money into research that’s aimed at commercial ends, not just defense needs, or general support of academic research. In Europe, in addition to country-specific programs, there has been a succession of large EU-funded programs. Roughly, 10-20 years ago, the big EU programs were called JESSI. More recently, they have had the MEDEA series. In Japan, their Ministry of International Trade and Industry has sponsored several consortia over the years in the semiconductor industry and in other industries. In the US, the first semiconductor R&D consortium that we initiated was the Semiconductor Research Corporation, which started in 1982. That was an outgrowth of discussions by the CEOs at meetings of the Semiconductor Industry Association, which is the trade association for this industry, formed in 1977. The SIA was originally formed mostly to address trade issues. We had this famous “below-cost dumping” issue with Japan and later Korea in the semiconductor markets. Some companies were accused, and in some cases it was validated, of selling below cost to grab market share in the U.S. and elsewhere. The SIA provided a consensus voice for our industry in working with government on how to deal with these trade issues. Later the SIA dialog expanded into other issues associated with areas like workforce and technology. In fact, a convergence of workforce and technology issues led the SIA to form the SRC.

Butler:

And the SRC is?

Doering:

The Semiconductor Research Corporation is a consortium of semiconductor companies originally established to ensure, primarily, that there would be a continuing flow of relevantly-educated graduate students in the technical disciplines of interest to the semiconductor industry. The members were originally all U.S. companies, but the SRC now allows international members and usually has one or two. In the early days of the semiconductor industry, the U.S. government was putting a fair amount of money into semiconductor R&D, mainly for defense programs, and, of course, there were significant spin-offs into commercial use. Of course, the government-sponsored R&D at universities was keeping the pipeline pretty well stocked with graduates that we could hire. It played a large part in producing Ph.D.s educated in the kind of solid-state technology that is the backbone of the semiconductor industry. Ultimately, it had more impact on engineering departments than physics departments, because many of the contracts naturally migrated to focus on applications goals. However, by the early 1980s, there was a concern that much of the government investment would move from silicon to compound semiconductors, since the government could afford to go after the ultimate performance with much less regard to cost than the commercial market. From the industrial perspective, there was still a lot of work to be done in silicon because we knew that, theoretically, we could keep shrinking feature sizes for a long time. However, we needed continued university research to help explore all the possible routes of future silicon progress. And we needed to hire the students involved in that effort. So, the Semiconductor Research Corporation was founded mainly to address that need: to create another funding source, which in this case was just collected as dues from its member companies, issue relevant requests for research proposals, and then administer the resulting contracts. The SRC keeps a small percentage, on the order of 10% for management overhead, and puts the rest of the money into the university research funding.

Butler:

So the consortium doesn’t have their own lab; they just fund…

Doering:

Yes, that's right. They only fund university research. They don’t have any labs of their own and also don’t fund any research in industry labs. For the initial 15 years, it was all U.S. universities, and now the SRC also funds a few universities overseas. Over the years, the SRC has produced both many outstanding students as well as a lot of significant research. The SRC received the National Medal of Technology in 2005. We have tapped great research and education talent at many universities, both large and small. So, those two things, the research results and the students, really were the key outputs of the SRC in its early days, and continue to be today. The next U.S. semiconductor consortium was SEMATECH, and again, it was born out of discussions at the SIA, but on a different problem, one that was closer to the reason that the SIA was originally started, which had to do with balance of trade and how commercially successful U.S. companies were going to be in the next few years. By the middle 1980s, the U.S. industry had already lost not only some integrated-circuit market share, but even more in the semiconductor-manufacturing equipment and materials market. We were rapidly losing those businesses, mainly to Japan, because their government saw great economic and strategic value in this industry and had an industrial policy to support its growth. So, it was partly the worry about losing that infrastructure in the U.S. that prompted the creation of SEMATECH. The real fear was that the Japanese might even get to the position where they would only sell critical equipment and material to us after the Japanese chip companies had already bought as many as they wanted. Then, the U.S. chip-makers would be late in developing their products relative to the Japanese. Another possibility was they would charge us a higher price. So, we had this range of concerns. Thus, SEMATECH quickly developed goals that were directly focused on helping the U.S. suppliers to the semiconductor industry. Those suppliers didn’t actually join SEMATECH. That is, they didn’t become members of the consortium. It was the semiconductor makers that formed SEMATECH, and they used a large portion of the consortium dues to help the domestic supplier industry to be more competitive. The main methodology was to build consensus targets for equipment performance and, then, negotiate with U.S. suppliers on contracts for developing equipment that met those targets. So, the SEMATECH approach was somewhat like that of the SRC, but the contract funding was going to suppliers rather than to universities. This helped the suppliers fund their R&D on a next generation of equipment and materials, which would, hopefully, make them successful, not just in the U.S. market, but worldwide. SEMATECH got started in 1986, four years after the SRC. And then, about ten years went by before another US consortium was created. I wasn’t involved in the early formation of SRC; I hadn’t been at TI long enough for such a role. However, I was involved in the startup of SEMATECH. For example, I was TI’s representative on the first two Technical Advisory Boards at SEMATECH, which first met in late 1986 or early 1987. In 1992, I was involved in another activity that affected the consortium landscape in the U.S. semiconductor industry.
That was the creation of what was then called the National Technology Roadmap for Semiconductors, which subsequently evolved into the International Technology Roadmap for Semiconductors because, after about six years, we decided to take it international. The genesis of the NTRS goes back to another aspect of the semiconductor R&D landscape in the late 1980s. As I mentioned previously, the U.S. government had a history of funding some areas of semiconductor R&D. In particular, lithography was sort of the “poster child” for semiconductor equipment development since it mainly determined how small we could make the features on an IC. As we made transistors and wires smaller and smaller, we needed increasing sophistication in lithography tools. Of course, it was getting to be “tough sledding” to further extend the incumbent optical lithography. Even when I came to TI back in 1980, there were people saying that there was this “one micron barrier” that we’ll never be able to go through with optical lithography, and, of course, we’re more than 20 times smaller than that now, still with optical lithography! [Laughs] Anyway, that was the barrier that people were talking about then. However, the consensus in 1980, TI included, was that we would need some other technology to go much below one micron, and it would most likely be based on either x-ray or a new form of electron-beam technology. By the late 1980s, the government was funding development programs for e-beam, x-ray, and, a newcomer, deep-UV lithography. I represented TI on two separate DARPA lithography development advisory committees in the late 1980s, one of which focused on deep-UV. It was often difficult to agree on what should be funded for continuing lithography development. In particular, IBM favored x-ray and AT&T, which was still in the IC business then, supported e-beam. So, several companies were going to DARPA and other government agencies and laboratories and suggesting that they fund significant work on various lithography technologies. Ultimately, the government agencies decided that they couldn’t afford so many disparate lithography projects. So, they came to the industry, using the SIA as a point of contact and said, in effect, “Okay, the federal government is still interested in supporting research in lithography and other areas of research for semiconductors, but we would like for the semiconductor companies to get together and prioritize the R&D needs. Get together and decide what are the main R&D challenges, and we’ll put the money there.” That was basically the message to us. So, that kicked off an activity, in early 1992, which I’ve been helping to manage ever since. I became the TI representative on this SIA advisory board that was charged with creating the industry response to the government request. We decided that the response should be in terms of something that we would call, generically, “a roadmap,” more specifically, the National Technology Roadmap for Semiconductors. Initially, we didn’t worry about a continuing process of updates. We just decided to hold a big conference in Dallas in October of 1992 to gather inputs for the NTRS by building consensus in each major technical area. Thus, the SIA companies were asked to send a single expert representative in each of the areas. We had a pre-meeting in Dallas that I hosted for the overall SIA NTRS organizing committee. In particular, we decided what the technical areas should be for the conference and the NTRS. 
For example, lithography was one area and device technology was another area. Overall, we came up with eight technical areas that we thought should be addressed in parallel sessions in this conference. Thus, we asked each company to send one representative to this conference in each of these eight areas, which resulted in about ten or fifteen experts in each, depending on how many SIA companies wanted to participate. At the conference, the eight working groups were given directions to create a “technology needs” roadmap in their area that was driven by the high-level goal of being able to continue following historic improvement trends for integrated circuits. For example, in lithography, one of the elements would be the printed line width that you would need for transistor gate lengths in a certain year to continue on “Moore’s Law.” We asked them to extrapolate out fifteen years into the future and just to give the requirements, not to assume the exact “solutions.” That was the challenge; just assume that we were going to stay on “Moore’s Law,” just take that for granted, because we’re just trying to set a goal, so that’s as good a goal as any. Thus, essentially, the instructions were, “We’re happy with the pace at which we’ve been improving the technology for the past 30 years — we’ve been able to quadruple the total number of transistors on integrated circuits every three years, and that’s a good pace. What would we need to do in each one of these different areas to be able to continue on that pace in an integrated fashion for the next 15 years?” The conference was mainly organized into breakout sessions for each of these technology working groups to create an initial roadmap. They were also asked to provide comments on “potential solutions” which might meet the roadmap technology requirements. This might be in the form of anticipated pros and cons. However, we only wanted consensus inputs, because that’s what the government was seeking. So, we created this first NTRS, the 1992 edition, which was spread over two documents, and the government seemed satisfied with the result. It turned out that we wound up doing another one in ’94, and then another one in ’97. After that, we decided to take it international and update it every year. I’m the only person left from the original organizing committee who is still involved in the top level international management organization for the ITRS, which we now call the Roadmap Coordinating Group. Internationally, there are five regions of the world that support it: the U.S., Korea, Japan, Taiwan, and Europe. Each of them have their own trade association analogous to the SIA, and those five trade associations signed an agreement that they would sponsor this as an ongoing activity, this International Technology Roadmap for Semiconductors. There are now approximately a thousand engineers involved in this process. They have breakout meetings in their own regions and then send a couple of representatives each to three international meetings that we have during the year. The first meeting is always in Europe and provides a good basis of early coordination between working groups for that year’s update. The mid-year meeting is always in San Francisco, and there we present a “rough draft” of the new roadmap for feedback from anyone who wants to attend a public conference. This event and the next are covered by the media. At the end of the year, we meet somewhere in Asia, this year it’s Korea, and we present the new update of the ITRS. 
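A small arithmetic sketch of the pacing assumption described above, quadrupling the transistor count every three years over the roadmap's 15-year horizon; the starting year and starting transistor count below are placeholders chosen only for illustration:

    # Pacing assumption from the first roadmap exercise: quadruple the transistor
    # count every three years, extrapolated over the 15-year horizon.
    # Starting year and count are placeholders for illustration only.

    start_year = 1992
    start_transistors = 1.0e6             # assumed starting point, not an actual product

    for years_out in range(0, 16, 3):
        factor = 4 ** (years_out // 3)    # 4x per 3-year generation
        count = start_transistors * factor
        print(f"{start_year + years_out}: ~{count:,.0f} transistors ({factor}x the {start_year} level)")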
This international process has been conducted for a decade now. Each region has two or three people who are on the Roadmap Coordinating Group. Paolo Gargini of Intel and I are the two U.S. members. In this role, we are acting, not as our individual company representatives, but as representatives of the SIA for all of the U.S. semiconductor companies in continuing to guide the roadmap. The ITRS continues to have a 15-year rolling horizon. However, instead of the eight original chapters, we’re now up to 15, reflecting the further complexity and diversification of the industry today. Our roughly thousand-person “volunteer” organization now produces a roadmap that is about a thousand pages in length. It gets totally renewed every two years, and, in the even-numbered years, like this year, we just do a highlight update; we don’t necessarily write everything over “from scratch.” The impact of the semiconductor technology roadmap is very broad. It is frequently cited in academic and industry publications on research addressing the technology challenges that it sets out. It has also had a major effect on the semiconductor consortium landscape. In particular, it has influenced the formation of two new consortium initiatives in the U.S. It has had similar impact in the other four regions as well. Almost anyone who writes about the future of the semiconductor industry references the roadmap right away, because they either quote some of the goals and say whether they think they’ve got a way to get there, or they think those goals are too hard, or whatever their comments might be. It must be one of the most widely cited technical documents in the world. After we completed the 1994 roadmap, there were a number of us who felt that the technical challenges were growing faster than the current level of R&D funding, even with the SRC, SEMATECH, and various international consortia. In the U.S., SEMATECH had the largest budget by far, but it was mainly looking at the shorter-range challenges, trying to address the equipment and materials issues in the next three-to-six-year timeframe, not three or more technology generations out. So, we began thinking that we needed to do something else. We needed to get more research funding into addressing the longer-range part of the roadmap. The dominant semiconductor technology these days is CMOS. I talked about MOS just starting to get hot when I came to TI, and, by the late ‘80s, basically all of MOS was CMOS. There wasn’t much PMOS or NMOS anymore, it had all gone to Complementary MOS because the energy efficiency is so much greater. So, the main question that we were asking ourselves was “how much longer can we scale CMOS?” Over its history, most of the IC production had migrated from the original bipolar transistor technology to PMOS, then NMOS, and eventually to CMOS. By the middle ‘90s, we had already been riding this CMOS horse for over a decade, continuing to shrink it and improve it; but how much longer could we continue? In other words, “what does ultimate CMOS look like?” Even the roadmap was still using an implicit assumption that it would still be mainly CMOS for the next 15 years, because there wasn’t any obvious next device technology “platform.” In fact, even today, it’s still primarily a CMOS roadmap, but now including some of the long-term possibilities that we hope can be developed for practical commercial use.
Of course, these new “potential solutions” are in the ITRS today because, back in the middle ‘90s, we were beginning to worry that we needed to identify, at least theoretically, what we thought the ultimate limits of CMOS might actually be. Without that first step, it would be hard to guide any research on what might eventually supersede CMOS. A big part of the concern was that there weren’t any great ideas floating around then on an obvious “next switch.” This is in contrast to the situation during the “bipolar era,” when people were pretty sure that MOS would mostly replace it; the only question was “when?” For years, the main challenge for MOS transistors was finding a sufficiently clean process, specifically, one in which there was so little sodium contamination that you could have a stable threshold voltage for a practical transistor. Thus, it was mainly a matter of overcoming this particular “manufacturing-engineering challenge.” Therefore, in the middle ‘90s, since we didn’t know what was likely to come after CMOS, we thought it was even more important to make sure that we could “wring every last ounce of potential” out of some form of “ultimate CMOS.” In other words, we didn’t know what the ultimate CMOS geometry or materials might be, but we wanted to increase the efforts to find out. We decided to start a new consortium initiative with that goal — to explore ultimate CMOS. As before, we had SIA-hosted meetings on this, and I was on an SIA committee called the University Research Working Group, because we quickly decided that we were probably going to pursue this through additional university research. But, we still needed to figure out how it would be funded and structured in some detail. My suggestion was that we just create a sub-consortium of SRC, rather than start some entirely new company that would then require a relatively bigger overhead to run. We already had the SRC, and it had a mechanism that had been working for us on how to manage university research. We wouldn’t need to add that many people to create a new sub-consortium “under the SRC umbrella.” Of course, not all of the current SRC members might want to join this new consortium, but we could keep the money and IP separate, and have new advisory boards to create RFPs and award contracts to the best proposals. That was my suggestion, and everyone else agreed. An alternative might have been to make some expansion of the SEMATECH consortium, which had also funded some university research. But, overall, the SRC looked like the best choice to host this new program. The legal name of this newer consortium is MARCO, Microelectronics Advanced Research Corporation, but we normally refer to it by the name that we gave its “initiative,” which is the “Focus Center Research Program,” or FCRP. One of the things that we did early on in creating the FCRP was to make a partnership with DARPA to get more leverage, because DARPA was also interested in the ultimate-CMOS goal. Thus, DARPA, with additional federal funding for the initial years from DDR&E,[3] has been a large partner in the FCRP effort. Basically, we agreed to a one-to-one match, about $20M/year from the federal government and another $20M from the SIA companies that wanted to join. It took us until 1998 to get the FCRP fully operational, and it’s still running today in its fourth 3-year phase. It will be re-competed next year, for another phase running from 2010-2012.
The research is currently organized into five big centers working on different aspects of ultimate CMOS. Each of these centers is “headquartered” at a lead university. Right now the lead universities are MIT, Georgia Tech, Carnegie Mellon, Berkeley, and UCLA. There is a professor at each one of those schools who is the director of the corresponding center. Part of the money that goes to each of those centers gets spent at the lead university, but each center splits its funding over, typically, at least seven or eight universities anywhere in the country that are also part of that center. Of course, they all collaborate together on the theme subject of that center, for example, Materials, Devices, and Structures, as coordinated from MIT. In 2003, we began working on the next logical step — starting to address “beyond CMOS.” This time, I chaired a subcommittee of the SIA Technology Strategy Committee that we called the Nanotechnology Strategy Committee. We started by assuming that the FCRP might give us the basic ingredients to continue scaling CMOS until about the horizon of the 2003 ITRS. And we knew that the problem of getting anything that could be better than CMOS was so tough from so many different angles that even 15 years might not be long enough to develop a solution. This followed from an analysis of the history of technical innovation, basically, how long it took from the first publication or first germ of an idea to when there was an embodiment that made it into production. We went all the way back to the inventions of the telegraph and the computer, and many other significant technology developments that were generally in the information technology space. The result was a broad distribution, but typically about twelve years elapsed before the initial concept was developed into a product. So, we agreed that 15 years was not too early to start working seriously on a really tough problem like this — one that might not even have a pervasive solution. As before, we recommended that the SIA companies start another consortium. We wanted to make this again a U.S.-only consortium, like MARCO, because we also wanted to partner further with the U.S. government. We enlisted six companies: TI, IBM, Intel, Micron, Freescale, and AMD, and started the new consortium in March of 2005. Its legal name is NERC, Nanoelectronics Research Corporation, but, again, it’s more commonly known by the name of the initiative — in this case, the “Nanoelectronics Research Initiative,” or just NRI. Our first government partner in NRI was the NSF, and we have an arrangement with them in which we jointly agree to start and fund some new projects each year that get added into their existing NSECs and MRSECs.[4] More recently, in fact, just last year, we created a partnership with NIST in which they are contributing about $5 million, over several years, to NRI. Most of the NRI research is organized into “centers,” somewhat like the FCRP centers. The NIST partnership allowed NRI to expand from three centers to four. So, now we have four centers: one of them is headquartered at the University of Texas at Austin, one is headquartered at UCLA, one is headquartered at the State University of New York at Albany, and the newest one is headquartered at Notre Dame. NRI also partners with state and local governments. For example, California has a Discovery Program for the University of California system, which can match industry money for research at universities in the UC system. 
In Texas, we have a state program called the Emerging Technology Fund that has matched private and UT System funds in supporting the NRI center headquartered at UT Austin. New York State has a history of supporting semiconductor R&D and provided strong support for the NRI center headquartered there. More recently, the State of Indiana decided that they would enter into a similar agreement with us, and the city of South Bend also contributed a million dollars toward our latest center headquartered at Notre Dame. Thus, our most recent consortium is researching the longest-range goal that we have in the semiconductor industry: “a technology which is superior to CMOS.” This has spawned wonderful research. In particular, it has re-engaged our industry with the academic physics community, because we are talking about such fundamental phenomena. We are even looking at new logic-state variables rather than just quantity of electric charge. For example, we are investigating spin, pseudo-spin, excitons, nanomagnets, “wave-function optics,” and properties of multiferroic materials. One of my favorite concepts came out of the University of Texas at Austin. APS Buckley Prize winner Allan MacDonald is a theorist at UT Austin who has calculated that an appropriately-spaced bilayer of graphene should host a Bose-Einstein condensate consisting of excitons in which the electron is on one layer and the hole on the other. Thus, the excitons would form a superfluid, not out of weakly-bound Cooper pairs like you have in a superconductor, but out of these strongly-bound pairs which constitute the excitons. His calculations suggest that the resulting superfluid condensate should exist even significantly above room temperature. This is a very exciting prospect because it would provide the basis for a new type of transistor that could operate at extremely low power compared even to what we estimate for ultimate CMOS. The exploration of such concepts, which involve some of the top-notch physicists as well as electrical engineers around the country, is a very exciting program for our industry. My current role is chairing what we call the Governing Council of the NRI consortium. We have one executive research manager on the Governing Council from each member company, and we also have a lower level of representation called the NRI Technical Program Group. We also have assignees from the companies that actually work along with the faculty and students in some of the university labs. I can honestly say that we’re having a lot of fun in NRI. For me personally, one of the reasons is that it has put me in touch with more physicists, people like Allan MacDonald and others.

Butler:

One of the tensions that we found between industry and academia comes, in large part, out of legislation by Senator Dole, [Bob Dole, that’s right] that gave intellectual property rights to universities [Right] that they previously did not have. How are those addressed in this?

Doering:

We have a very nice model for that. However, every few years, something seems to come up which causes us to get together with the universities and wrestle with it again. For example, every time we get a new sub-consortium of the SRC, like NRI, the “IP model” questions tend to get re-opened. Since SRC has become this “umbrella” for sub-consortia, we have the same IP model for each. It is basically very straightforward. From the industry point of view, this is all about pre-competitive research. Thus, the semiconductor companies are not so much interested in actual ownership of intellectual property via these consortia. We see this as just part of the level playing field, at least amongst the members of the consortium. Companies that aren’t members, well, they’re on their own. But, for consortium members, we just want equal rights to this IP. The way we accomplish this is by making research contracts with the universities that grant royalty-free use to the consortium members of whatever is invented using funding from the consortium as well as any potentially-blocking background IP. The universities still maintain ownership of the patents, except in occasional cases where they ask us if the consortium would like to file for the patent instead. If some non-consortium member, which includes most of the semiconductor companies in the world, wants to use a patent resulting from SRC-funded research, the university is still free to negotiate that license with them. They just can’t charge a license fee to the members of the consortium. That’s been the SRC IP model since the beginning, and, as I said, we occasionally revisit it when a new issue arises. Sometimes, we even need to get the company CEOs or CTOs to meet with university presidents to discuss the situation, but we’ve always wound up with a renewed agreement on this same model. Even after Bayh-Dole, many university presidents still seem to be more interested in working with industry to get their work into commercial application and get more R&D support than in just licensing patents. And, of course, a university that does not accept this model is, basically, giving up SRC funding that will instead go to another school. So, generally speaking, we’ve been able to resolve the potential Bayh-Dole issue with respect to consortium-funded research for our industry. It’s ironic that the intent of the bill was actually to encourage the universities to commercialize their research by providing additional financial incentive to do so. But, it’s one of those areas in which it’s [Butler: difficult to negotiate value] — yes, that’s right. Well-intentioned legislation, in this case, turns out to have unintended consequences. Bayh and Dole aren’t to blame for this. It’s another of these things where you would have needed more foresight than almost any of us have to actually envision all of the ramifications. Fortunately, we’ve been able to live with it. It appears to be more of an issue in other industries. Earlier this year, the AIP and the APS organized a second “Industrial Physics Summit” at the March APS meeting in New Orleans. I was asked to lead the discussion, which was on industry-university research collaboration. We had about 20 industry participants, mostly CTOs, from different companies. Someone raised Bayh-Dole as an issue for commercialization. Several people from various industries resonated with that point, and we agreed to consider it as a subject for follow-up discussion. 
I got the impression that we have been able to live with it better in the semiconductor industry in the consortium context than when we are acting individually, as most of the companies from other industries were describing.

Butler:

Or they’re dealing with IP that is at the competitive stage.

Doering:

Yes, that’s very true. In many cases, it is probably something that’s obviously ready for commercialization. In contrast, our precompetitive research, even if leading to patents, is usually still uncertain with regard to reduction to practical implementation. In fact, there’s a good chance on some of this IP, especially in the NRI domain, that the patent may have expired by the time the concept ever gets to be commercialized. Of course, you can always do things to extend IP by, for example, adding something that limits the scope. However, many of the patents from long-range research will never have commercial value. But, nevertheless, we pursue some of these. The consortium is paying for some and the university is paying for others. The university gets right of first refusal, so it’s the ones that they don’t want to patent that get presented to the consortium for us to decide if we want to patent them on our nickel. Most of the consortium-generated IP in the semiconductor industry is far enough out that we’re only looking at it from a defensive position. We just want to avoid being charged some day in the future for using this, if indeed it does turn out to be significant. In most cases, something else will come along and outflank it anyway, because that’s just the nature of the R&D these days. However, some fraction of even the long-range R&D does get to be important, and that’s why we don’t make light of these university-consortium IP agreements and the principles behind them. When we have another Industrial Physics Summit, we may have a more detailed IP discussion and see if any consensus develops on how industry might work with the AIP and APS with regard to Bayh-Dole — whether it should be amended in some way to make it work more like it was intended to, or to remove whatever obstacles it seems to be causing in some industries, which may have a different experience.

Butler:

Do you see the semiconductor industry going back to in-house research?

Doering:

I think it is, but focused more on specific areas. The degree to which it may approach “fundamental physics” research will depend on the circumstances. For example, a very good case for reasonable risk would now be required for a return to many of the types of physics projects that characterized the old central research labs. On the other hand, much publishable research continues in electrical engineering, computer science, etc. Of course, as we previously discussed, the bulk of the really long-range research is now being performed via consortia. However, the universities don’t have all of the infrastructure capability to completely demonstrate the feasibility of new semiconductor devices and manufacturing processes. In fact, there are now relatively few semiconductor companies that can afford the investment you need to be a manufacturer. The capital equipment expenditure automatically puts a lot of people out of the game. If you don’t have revenues of over five billion dollars, you probably have no business even thinking about building and equipping a reasonably up-to-date semiconductor factory. So, just the nature of the technology is pricing a lot of people out of the manufacturing game. Of course, that’s what the foundry industry was set up to address, the TSMCs, UMCs, etc., which started over in Taiwan. I was just at the annual SIA banquet in San Jose a couple of weeks ago, and we gave Morris Chang a major award for pioneering the “foundry business model” in our industry by starting TSMC. This creation of separate companies to which semiconductor manufacturing can be outsourced has enabled a lot of smaller companies to exist that only design and market integrated circuits. There are over a hundred such “fabless” companies in the U.S. This was analyzed by economist Claire Brown of UC Berkeley a couple of years ago at a National Academy of Engineering workshop at which we were both presenting. She had just completed a study of off-shoring in the semiconductor industry. The theme of that NAE workshop was off-shoring of engineering in general. They selected several industries as examples and paired industry and economist speakers for each. She thought that off-shoring, especially in the form of the foundry model, was a great benefit to the U.S. semiconductor industry since it allowed the existence of many fabless companies here, including several large ones like Qualcomm and Broadcom. These are multi-billion-dollar-a-year companies that wouldn’t exist if they couldn’t get their chips manufactured on reasonably close to state-of-the-art technology at foundries. Qualcomm has grown large enough that they could build a fab, but it would be a huge business-model change for them and almost certainly not worthwhile. Morris Chang, by the way, was at TI when I was hired. I had just barely seen him in my early years at TI before he left. He started something that has turned out to be a very good model for much of our industry as it has grown to almost $300 billion per year. In fact, as I said, only a few of the largest semiconductor companies still find it worthwhile to build new IC manufacturing facilities rather than outsource manufacturing to the foundries. And, even for some of the largest companies, it often makes sense to outsource some of the manufacturing. This make vs. buy decision boils down to tradeoffs involving capital cost, product volume, and relative advantage of any proprietary process vs. proprietary circuit design. 
Increasingly, product value has come to derive more from circuit design than from unique processing, especially for state-of-the-art logic products. The technological constraints and expense of developing new processes have forced everyone more and more toward a common evolution of the CMOS logic manufacturing process, as consensus is built through the ITRS roadmapping activity. Of course, the process equipment and materials suppliers can also only afford to explore a few R&D options and must soon converge on the basic technologies in their products. This has resulted in, essentially, a commoditization of digital logic process technology. For example, with a large investment, a single IC company might develop a year or so lead in logic process technology over a bunch of fast followers, including foundries, but design cycle times are now so long for highly-integrated logic products that it would not be much of an advantage anymore. It used to typically take a longer time to develop a process than to do a design; now it’s tending to be more balanced because of all the complexity that’s going into the design, in part, to help compensate for the fact that we can no longer continue to affordably improve every parameter of the process technology from generation to generation. In particular, we can’t increase speed and reduce power usage at the transistor level anymore simply by making the transistors and wires smaller. That used to lead to a substantial improvement, but it doesn’t any longer. Presently, at the device level, we can hold power constant at a moderate cost in process complexity, but we won’t even be able to do that much longer. Fortunately, we are still able to improve speed and power consumption at the system level by adding complexity to the circuit design. For example, we now power down parts of the circuit when they are not in use, even if it’s just for the next few milliseconds. Thus, we’ve gotten to a stage where more of the emphasis and more of the value added from R&D is on the circuit design side. For example, that’s why you see microprocessor makers putting advertising emphasis on “number of cores” these days rather than “megahertz.” This type of digital logic is what the man-on-the-street typically identifies with integrated circuits, and it is the type of product that generally best fits the fabless-foundry model. However, we also need a lot of analog integrated circuits for interfacing with real-world signals and other purposes. Of course, there are also “mixed-signal” ICs that contain both analog and digital circuits, but they are almost always process-optimized for one or the other. Because there are many more parameters to optimize across the many types of analog ICs, the feature scaling which is so dominant for digital ICs is less significant. In addition to speed and power efficiency, analog design needs to put considerable focus on noise, linearity, dynamic range, component matching, relatively high voltage, and many other parameters. Optimizing these in different combinations over the very large diversity of analog ICs provides more advantage for custom processes, which don’t necessarily require the most expensive lithography. Furthermore, many of these analog products are sold into relatively small markets. Thus, most analog chips do not fit the fabless-foundry business model as well as digital chips. 
Thus, in the analog regime, there is such a diversity of parameters and products that you can add significant value by tweaking the process or by fairly revolutionary process innovations that aren’t as expensive to develop as they are for logic, because they don’t depend on that next big step in lithography. So, you don’t need to buy the next lithography tool, which is going to cost at least $40 million for only one machine. And when you buy that first one just to do the development, it’s only what’s called an “alpha” or “beta” tool. You need that tool for a couple of years just to do the R&D, to build features at that size to test, before you do real manufacturing, at which point that tool is just an albatross around your neck. It’s not even fully depreciated yet — you may still owe $25 million on it, but it’s really no longer useful for much of anything. Maybe you get a little continued R&D from it, but there’s very quickly another one that you’ve got to buy, and for $50 million this time. Even beta tools don’t usually have the throughput and other characteristics that make them fully production-worthy. So, you’ve also got to start purchasing the 20 or more production-model machines that you need for the real factory — the volume factory. Thus, the fabless-foundry business model supports a natural division into companies that design circuits, especially logic circuits, and companies that build them and keep up with the big next-generation manufacturing investments. This model is very much involved with government industrial strategy in different countries. For example, in Taiwan, semiconductor foundries are essentially a “national industry” and a big percentage of the economy. Their semiconductor foundries are subsidized in whatever ways necessary to make sure that they’re not going to close down. It’s a similar strategy in Korea. Of course, there you’ve got a bit of a different situation. Samsung, for example, is huge, and they make everything. Their semiconductor business could fail and there would still be a Samsung. Texas Instruments is in a pretty unique situation in being the world’s largest maker of analog/mixed-signal ICs and also being very big in digital logic. Maybe the next closest in this regard is ST Microelectronics over in Europe. In contrast, Intel, Samsung, and IBM, for example, are all firmly at the digital end of the market, but TI is involved in both. On the digital logic side, considering the process commoditization and all of the capital investments required, we have decided that it makes sense to partner with foundries on the required R&D. So, we work with the partners that are going to be our suppliers, since their whole business is based on logic fab infrastructure and they’re going to need to make those huge investments in scaling feature size at the leading edge of logic products. For example, we don’t both need to purchase the alpha/beta lithography tools. Of course, we can work with them to develop a base process plus, as required, a few special process features for our logic products. Thus, we can still maintain some of our own logic-process IP, and they can build this portion exclusively for us. Such options depend on details of the business deal, which are negotiated case-by-case with each new technology that we develop. In contrast, on the analog side, we still perform all of the process R&D in-house. 
It’s very broadly spread across a large number of differentiated product types that, just by their very specialized nature, resist process commoditization. Thus, we do custom process R&D for analog, mostly in a central organization via relatively small teams, much like when I came to TI.

Butler:

One more major issue, one that I talked a little bit about during our break, [Right] is to what extent does TI use its own research and development teams to watch small startups for acquisition?

Doering:

We do a fair amount of that. It’s spread pretty broadly throughout the company. In process R&D, we’re not actually anticipating that there will be much process technology worth acquiring from startups. Nevertheless, we are continually keeping track of new developments and meeting with small companies and universities looking for a co-development partner and/or first customer, especially for their IP. Such evaluations are a part of my job at TI, and, if you look across the business units of TI, for example, high-performance analog, power devices, microcontrollers, digital signal processors, etc., you will find a number of people who are also involved in this type of activity. These people are typically what we call a “chief technology officer” for that business unit. It’s been a trend recently to have “CTOs” for each business rather than for the whole company. Some still do, but with increasing product complexity and diversity, it’s very hard to be a CTO for a company that is in a broad range of semiconductor markets. For a company like Intel, dominated by a single product, you can still have a CTO that understands, in their case, microprocessors, well enough to be the CTO for the whole company. But when you’re more diverse, the trend is to have multiple CTOs. They work very closely with strategic marketing, which is a centralized organization, on the related questions of what’s on the horizon, what are customers looking for in the next five years, ten years, etc., and what are some of the new things that are showing up through startup companies and universities? It tends to be these CTOs, their teams, strategic marketing, and our TI Fellows who are principally involved in this outward-looking activity. In fact, we have an annual TI Fellow meeting at which we typically try to look at a new area — that could be a new area of business opportunity for TI. For example, a couple of years ago, our theme was “medical devices,” and that helped us decide that medical electronics could be a new business unit at TI. An important aspect of these meetings is hearing from some outside invited speakers. Often, these speakers are identified through prior personal connections. In general, those connections that senior technical people usually develop are very important in leveraging outside expertise. Other pieces of this outward-facing network are our internal venture-capital activity and our acquisitions team. The venture-capital goal is usually not just an investment, but a technology connection, something that could possibly lead to a new supplier, or a joint-development partner, or even an acquisition. So, our venture-capital organization often hosts some of these meetings with startups. In fact, later this week, I’m attending such a meeting. We will be hearing from a company that’s been working on a new kind of capacitor technology. It involves what’s called colossal magneto-capacitance, in which magnetism can affect a material in a way that can increase its dielectric constant. Thus, the CTOs of our businesses, the TI Fellows, strategic marketing, internal venture capital, and others are involved in evaluating external opportunities with startups and others. If what emerges from their discussions is a recommendation for acquisition, then our acquisitions team will take care of the necessary business analysis. Of course, in most cases, the recommendation is something less dramatic. 
In general, we almost always learn something interesting, and, occasionally, it leads to a new R&D partnership of some form.

Butler:

One of the things in our next project is that we are looking to interview about 30 startups, of which we would like about five to be startups that have reached an acquisition stage where a larger corporation either is currently looking at them or has actually acquired them. Our constraint is that one of the founders has to be a Ph.D. physicist. So, if you know of any companies that TI is planning on acquiring or has acquired where one of the founders is a Ph.D. physicist, we might like to interview the founder.

Doering:

I’ll check on that. Of course, I can’t say anything about companies that are just under consideration. [I understand that] But, among ones that we have acquired, there is a fair chance that there could be a physicist as founder of at least one of them. These include both hardware and software companies, and I do know that a lot of physicists have migrated into software, especially high-energy physicists and others who did a lot of software development as part of their experimental or theoretical work. So, I can check on our acquisitions, of which we’ve done a fair number in the last ten years. Of course, I’ve also met with the founders of many companies that we have not acquired. Thus, I can also check on those for a potential interview with a physicist founder.

Butler:

I’ll keep in touch with you on that.

Doering:

O.K. Send an email to remind me on that. [O.K.] I’ll be glad to help.

Butler:

Thanks so much and thank you for a great interview. [I enjoyed it. It’s been my pleasure.] Very good.

[1] TI sold its defense business to Raytheon in 1997 for $2.95 billion. Cf. Wikipedia, “Texas Instruments.”

[2] George Harry Heilmeier (born May 22, 1936) received his BS in Engineering from the University of Pennsylvania and his M.S.E., M.A., and Ph.D. degrees in solid state materials and electronics from Princeton University.

[3] Director of Defense Research and Engineering

[4] Nanoscale Science and Engineering Centers and Materials Research Science and Engineering Centers