[Photo courtesy of Norman Jouppi]
This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Norman Jouppi by David Zierler on April 2, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/47041
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Norman Jouppi, Distinguished Hardware Engineer at Google. Jouppi provides an overview of the organizational hierarchy at Google and where he fits in, and he surveys the distinctions between applied physics, electrical engineering, and computer science. He recounts his childhood in suburban Chicago and his early interests in computers. He describes his undergraduate education at Northwestern, where he pursued his interests in computer architecture. Jouppi discusses his graduate research at Stanford, and he reflects on the early days of startup culture in Silicon Valley. He explains the origins of MIPS and the influence of Jim Clark and John Hennessy, and he describes his work for Silicon Graphics and his thesis research in CAD. Jouppi explains his decision to take his first postgraduate position at Digital Equipment Corporation, and he describes the importance of VAX computing. He explains the corporate transition from DEC to Compaq to HP, and he explains the origins of internet browsing and the creation of Alta Vista. Jouppi explains the concept of telepresence, and he discusses his responsibilities as director of the Advanced Architecture Lab. He explains the interest in exascale computing and his early work in artificial intelligence. Jouppi discusses his involvement in VLSI design, and he explains the process that brought him to Google to work on platforms and TPU infrastructure. He reflects on how ML has changed over the years, he describes both the research and collaborative culture that Google promotes, and he explains why quantum computing is a completely different domain of computation. At the end of the interview, Jouppi considers how and when Moore's Law will end, and he conveys his commitment to advancing technology that has a tangibly net-positive impact on society.
Okay. This is David Zierler, Oral Historian for the American Institute of Physics. It is April 2nd, 2021. I'm delighted to be here with Dr. Norman Jouppi. Norm, it's great to see you. Thank you for joining me.
My pleasure.
To start, Norm, would you please tell me your title and institutional affiliation?
Currently, I'm a Distinguished Hardware Engineer at Google.
What does the title Distinguished signify at Google?
Well, Distinguished is kind of complicated. It's similar to other computer industry organizations, so it means, basically, a high level of senior technical expertise, but not quite a Fellow. Google has Fellows, and they and the two Senior Fellows are VP-level positions. I was actually a Senior Fellow at HP, and I told all the people who wanted to get promoted, who worked for me or asked my advice, that Distinguished Technologist was the best place to be, because you get to have more fun. You get to work on your own problems, more or less, as opposed to -- someone else has a problem that may or may not be possible to solve, and you get called in to solve it.
Norm, I'd like to get a sense of your own place in the organization, and of Google's philosophy toward information science from a basic science perspective. The first question there is: unlike Bell Labs, there is no laboratory that is sort of cloistered off from the rest of Google. Whatever happens at Google from a pure research perspective is administratively and intellectually integrated within the entire company. Is that a fair way of looking at that?
Well, there's a Research and Machine Intelligence group which is large, and it's run by Jeff Dean, who's well known. He's one of the two Senior Fellows in the company. That's the closest thing we have to research. Most of the people in that group are involved in research-oriented things and publish in ML conferences and the like.
And for you, hierarchically, give a sense of who you report to, and who reports to you.
Well, I'm very happy that no one reports directly to me. I took early retirement from HP as soon as I turned 55, and management was the main thing I retired from. It was good to learn about management and its challenges. I have more empathy for my managers now, but I'm glad I don't manage anybody. And then, my management chain used to be pretty direct, but as Google has grown, I've gotten more people in my management hierarchy. So, now I report up through three VPs in a row, who report to a senior VP, who reports to Sundar, the CEO.
Norm, a question we're all dealing with right now, in terms of the research, in terms of your work generally, how have you fared over this past year in the pandemic? Has remote work allowed you a certain amount of bandwidth to work on problems you might not otherwise have been able to, or alternatively, to what extent is your research style collaborative, and really dependent on physical proximity, and working out problems with colleagues at the white board?
We don't use white boards or smart boards much anymore, but there is that element of being in a room together where -- I used to do research in telepresence. I call it my "technical midlife crisis," where I went off and did something completely different. You know, where you can look around and see if someone wants to say something, or whether people are generally agreeing with you or not, and things like that. I think it's easier to come to decisions quickly in person. But luckily, we have this Google videoconferencing, and there are other things like Zoom as well. We have Google Docs, where we can basically type on the same document at the same time while we're connecting via videoconference. So, if this had happened ten years ago, it would have been a much bigger disaster in terms of work productivity. I remember one time, about 15 years ago when I was at HP, when a power transformer blew and we all had to work at home for a week, because they had to order a power transformer from somewhere. Our productivity just went way down. It was like 20%, or something like that. But now, I think we're around 65%. Probably 70%.
Norm, from your vantage point, and of course, with the caveat that you're only speaking on an individual basis and not necessarily representing Google, what is your sense of the long-term impact that the pandemic will have on Google, both from a cultural perspective and a research perspective?
Well, it's hard to predict the future. I can say that one of the things that I'm really proud of is that when the pandemic started and everyone started working from home, the demand for videoconferencing went way up, and hence the demand for Google services throughout the workday. The teams that were managing and deploying infrastructure were able to respond in a fairly seamless way, as seen by the user, to support that extra demand. So, I would assume after the pandemic that demand will go down a little bit, but I think, just talking to some people, with climate change and global warming, people are saying, maybe we don't have to go into the office five days a week. Maybe four days a week makes sense, or three, or something. I think that now that people have tried out these work-from-home methods, obviously it works better for some folks than others. If you're working in a hardware lab, you have to go in. But I think work in general will be more flexible, and hopefully at Google we can help enable that.
Well, Norm, before we take it all the way back to the beginning and develop your personal narrative, I'd like to ask a question so that we get our terms on the table, so that I know where you're coming from with these things. So, for you, we have electrical engineering, we have computer science, and we have applied physics. Where are the boundaries of these disciplines for you, and have they shifted over the course of your career?
I think one of the things about me is I've always been interested in the whole stack from top to bottom. So, even though my degree was in electrical engineering, there were other people doing the same thing and getting computer science degrees. I have patents in the mechanical thermal area, with co-inventors. Not entirely on my own. But you're probably familiar with the end of Moore's law, and the end of Dennard scaling. I don't know if you're familiar with the end of Dennard scaling. But for a while, computer design was not so exciting. One year, we'd have eight cores per chip. The next year we'd have twelve. The next year we'd have sixteen. You just keep stamping them out. But now, it's an exciting time to be a computer designer, because with the end of Moore's law coming up, and the end of Dennard scaling, it makes it much more challenging, and makes that interdisciplinary approach more important because you can't afford to do things in a silo anymore.
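[A context note, drawn from standard references rather than the interview: Dennard scaling observed that dynamic power goes as P \propto C V^2 f, and that shrinking transistor dimensions by a factor \kappa lets capacitance and voltage shrink with it while frequency rises:

    C \to C/\kappa, \quad V \to V/\kappa, \quad f \to \kappa f \;\Rightarrow\; P \to P/\kappa^2 \text{ per transistor}

Since \kappa^2 more transistors then fit in the same area, power density stayed constant. When supply voltages stopped scaling around the mid-2000s, that balance broke; that is the "end of Dennard scaling" referred to here.]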
When will Moore's law end? What's the timescale for this?
Well, it's not like you can say by December 31st this year, at 12:01 a.m. It's been slowing down over the last couple of process generations. So, typically transistors got much cheaper at every process generation, and now they're getting barely cheaper. So, it's slowed down. It's driven by the mobile industry. The phone makers are still talking about 2 nanometer processors. It's gone on a lot further than I expected. I don't know if you're familiar with EUV.
No.
Extreme Ultraviolet Lithography. So, it's 13 nanometer radiation. It's just extremely challenging to work with, and they worked basically for decades to develop it, and they finally got it to work just in the nick of time. So that allowed us to scale past 7 nanometers. They're continuing to make the transistors, but the EUV machines are rumored to cost $100 million apiece, and you need a lot of them. That's why we've seen the consolidation in the fab industry. It's a gradual process, but even if you look at aviation, you could say innovation peaked in the 1970s. There was the SST and then 747. I mean, the Concorde SST. But lately we've had carbon fiber used in airplanes and high bypass engines and stuff. So, I expect it to go on slowly, but it's a completely different pace.
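[An illustrative calculation with made-up numbers, not industry data:

    \text{cost per transistor} = \frac{\text{wafer cost}}{\text{dies per wafer} \times \text{transistors per die}}

When a new process node doubled transistor density at roughly flat wafer cost, transistors got about 2x cheaper per generation. If a new node doubles density but the wafer costs 1.9x as much (EUV tools, more process steps), cost per transistor falls only about 5%, which is the "barely cheaper" slowdown described above.]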
So, Norm, as you say, these are primarily economic considerations that will herald the end of Moore's law. Not necessarily technological.
Mainly economics, but it is getting challenging technologically as well.
Well, Norm, let's take it all the way back to the beginning. Let's start first with your parents. Tell me a little bit about them and where they're from.
My mom is from Chicago originally; German heritage that came over in the late 1800s. And my father is originally from the Upper Peninsula of Michigan, with Finnish and Swedish roots. My mom's father was head of the electrical machine shop at U.S. Steel. So, he was a second-level manager, and I guess he was classified as an engineer at the time. A long time ago, back in the Depression, they didn't have the same degrees as they have now. He was a practicing engineer, and my dad graduated in mechanical engineering. So, he went to work at U.S. Steel in South Chicago, and that's where my mom and dad met, because they lived in the same neighborhood.
Jouppi is a Finnish name?
Yeah.
Any idea what it means, or where it comes from in Finland?
It goes all the way back to 1530 in Seinäjoki, Finland. It's a more northerly town. Not super north, but it's not near the coast. Everybody says my name wrong. If you were in Finland, you'd say Jouppi [ˈjoupːi] (IPA), but that's too complicated. Just settling for Jouppi [ˈd͡ʒopi] (IPA) is good enough.
Norm, did your father involve you in his career? In other words, when you were a kid, did you have a pretty good sense of what it meant to be an engineer?
Yeah, I think I had a reasonably good sense. He did car maintenance, and I helped with that. Sometimes he'd bring blueprints home, and things like that.
Where did you grow up?
South suburbs of Chicago, in Dolton.
Public schools throughout?
Yeah, it was a working-class community. A lot of the residents were steel workers and their families.
When did you start to get interested in science? Was it early on?
Yeah. In grade school I had a friend, and his father worked in the chemistry lab at the University of Chicago. We had a shared interest in growing crystals. My friend got this controlled-temperature bath, but I just had to make do with the basement in our house, which had a more controlled temperature than the rest of the house. We grew various kinds of crystals, and I built various -- I got those kits where you can make your own radio, electronic kits, by wiring things together. Before then, all my toys were building toys, and chemistry sets with chemicals that you can't buy for kids anymore.
Do you have a clear memory of your first interaction with a computer?
This was really through movies, my first interactions. There was a golden era of films then: Forbidden Planet, Robby the Robot, always those big computer panels in Lost in Space, and things like that.
Did your parents bring a computer into the house while you were living there?
No. We didn't have any computer in the house while I was living there. By the time I got to high school, I built -- I had a lot of independent study classes, because my high school didn't have advanced courses. So, I built computer stuff out of TTL logic. I designed my own PC boards, fabricated them in the machine shop, drilled the holes, did everything, and made an auto-ranging digital frequency counter.
What were you most interested in when you were doing this on your own? Was it more the building aspect, or the information science, what you could do with the end result?
I think it was both. I've always had an interest in both. I liked learning about the different aspects. We had some basic electrical equipment at the high school lab. We had an oscilloscope, but we didn't have a digital auto-ranging frequency counter, and I thought we should have one of those. So I wanted to build one.
What early ideas did you have about the way that computers might influence society?
I first started using computers when I was in high school, just rolling back to our other question. They had an IBM 1130, so I took a class and we learned to use it with Fortran and assembly language, with card decks. Then, I got hired for several summer internships by the school district to use the machine during the summer to do various things for the district. That was my primary computer experience in high school. But I always just saw it as a tool that hopefully could help people. But I wouldn’t have imagined the iPhone, and where we've gotten today.
Yeah. Norm, between geographic considerations, your grades, and any financial constraints, what kinds of colleges did you apply to?
I was pretty naive, so I only applied to two colleges. One, Bradley University, because I had gone there on a summer engineering camp. They had this program called JETS, where in the summer, you could spend two weeks there and they'd give you an overview of all the different kinds of engineering. Then, because Caterpillar was headquartered there, and some other companies, they talked to you about mechanical engineering, or some other type of engineering, and then we got to have tours of the factories. I still remember the big factories and all the giant machines. I don't know if they have those programs so much anymore, but I found them really interesting. Sorry, I went off -- I know I'm not answering your question.
What was the other school that you applied to?
Oh, Northwestern.
And where did you end up going?
Northwestern. It was on the lakefront, and they had a program there where you could tour the campus and then spend one night in a dorm room. They'd roll out a cot and I got to stay with two freshman engineering students. One of them had an HP-25 calculator, and it had a program in it which used up most of the memory and capabilities, where you could do simulated moon landings with a little display. It would display first your velocity, and then your altitude. They said I played with that for half the night.
Did you have a pretty good idea of what you wanted to study in terms of majors, right at the beginning, or were you open minded?
No, I wanted to be an electrical engineer since I was in the fourth or fifth grade. Then, by the time I got to middle school, I knew I wanted to design computers.
So, you looked at electrical engineering as an entrée to computer architecture.
Right.
Is that the standard way? Academically, is that generally how you go about that process, electrical engineer to get to computers?
Some people go through computer science, but I think the best designs have tradeoffs across the whole stack from software, compilers, operating systems, through the electrical design into the mechanical and thermal issues. I think electrical is a good place to be. Plus, I went to Stanford for graduate school, and that's Silicon Valley. We weren't really Software Valley at that time.
Was computer science available as a major at Northwestern?
Yeah.
So, you made the specific choice for electrical engineering.
Right.
How much of the curriculum in electrical engineering was reliant on theory? Any at all?
There were some classes on theory. Mainly electromagnetics, control system theory. There were a bunch of things that were -- I think more theory at the very beginning, and then more practice later on.
How much of the curriculum was oriented in physics?
We took the Halliday and Resnick undergraduate series -- it was required. Some of the things, like electromagnetic propagation, waveguides, and stuff like that, are borderline physics. One of my least favorite things is complex exponentials, when you have all the equations filled with jωC and stuff. I never really did get that 100%.
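[For context, from standard electrical engineering convention rather than the interview: engineers write the imaginary unit as j, so a capacitor's impedance is Z = 1/(j\omega C), and sinusoidal signals are handled as complex exponentials via Euler's formula:

    e^{j\omega t} = \cos\omega t + j\sin\omega t

AC circuit equations end up full of j\omega C and e^{j\omega t} terms, which is the notation being recalled here.]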
Norm, what laboratory work did you do as an undergraduate that was really formative for you?
Mainly, it was just computer labs, writing programs. I had done a lot of labs in high school and middle school. I stopped, basically, watching TV somewhere around sixth grade, and I just did experiments in the basement. I built my own tube audio amplifier, and a whole bunch of other things.
Did you do any relevant summer internships, or summer work as an undergraduate?
Yeah, as an undergrad I did COBOL programming for Sargent & Lundy Engineers. So, when the pandemic hit and they were scouring the country for anyone old enough to know COBOL, it made me laugh a little bit.
Norm, what year did you graduate undergrad?
Actually, I graduated from high school in '76, but I completed the undergraduate studies in two and a half years, so I graduated in '79 as an undergrad, and then I did a master's degree in one and a half years.
Did you place out of a lot of introductory courses, or did you load up your semesters?
Both. But then I decided -- I went for an interview after my master's, and I decided I really needed to get a PhD to have the kind of job that I wanted to do.
Did you have a master's thesis, a formal project that you presented?
Yeah, but it was just a computer simulation and extended paper kind of thing. But ironically it was a computer architecture specialized for AI.
So, as you say, you wanted to go on to the PhD for the kind of job that ultimately you can only get with a PhD. So, just in terms of your options, what might you have done if you had wanted to enter industry just with a master's? Or, another way of looking at this: what was not available to you, as you understood it, without the PhD?
So, I interviewed at IBM, and it was Poughkeepsie where they built the mainframes at the time, and not the labs. They said I could write test scripts for microcode for a disk controller. That sounded like -- I'm glad someone's doing it, but that didn't sound like a big creative outlet.
Where besides IBM? Who else was supporting the kind of work you were interested in, in an industrial research setting at that point?
Well, there was also Bell Labs -- Naperville, and most of the locations in New Jersey. I did have a summer internship in 1980 at IBM Yorktown Heights. I got to meet some really amazing people there. John Cocke was one of them. He was an IBM Fellow, and I remember having lunch with him. He'd talk about things like compiler transformations and Hilbert spaces, and it was just amazing to try to follow along in his conversations.
Norm, besides the improved job prospects, what was attractive academically, or intellectually, for you to stay on and go for the PhD?
Just to do totally new things that hadn't been done before. That's the key thing.
What kind of advice did you get, or what were the most attractive graduate programs you were considering?
There, again, I was naive. I only applied to Stanford. I did get a National Science Foundation fellowship, though.
Now, in the early 1980s, did Stanford have the Silicon Valley connection that we all understand nowadays, or was that non-existent at that point?
Yeah, a lot of the professors worked one day a week in industry, consulting. They still have the policy, as far as I know, where faculty can spend 20% time consulting. When I was a grad student there, I actually did the 20% consulting thing myself with some various startups.
What were some of the key companies in the Palo Alto area at that point? Was Apple a presence already, or no?
No, not so much. Xerox PARC had a big influence because we used their Alto systems, which had one of the first WYSIWYG editors. There were some defense companies in the Palo Alto area, but if you went further south in Sunnyvale, it was like a who's who of semiconductor companies, Intel, AMD, National, all the semiconductor companies.
Norm, what was the curriculum like when you got to Stanford? Coming in with a master's, were there specific courses you had to take, or could you just jump right into the kind of research you wanted to do on your own?
No, I had to take one year of coursework, and I had to pass a qualifying exam. So, I was at a little bit of a disadvantage there, because they don't admit many people who already have a master's, since they like to get to know the students better. I didn't know that many professors when I was taking the qualifying exam. Until recently, they had an unusual qualifying exam where you have ten twelve-minute oral interviews with different professors, and they can ask you any questions. Some professors were famous for asking totally random questions. Like, Fabian Pease asked, "Why are there two tides a day?" I mean, he was a semiconductor physics professor, and he asked, "Why are there two tides a day?"
What was the process for you in terms of developing a relationship with the person who would become your graduate advisor?
When I first got to Stanford, I worked with Jim Clark, and he was doing the Geometry Engine. Across the hall from him was John Hennessy, and he was doing conventional processors, interested in doing what became the MIPS project. It was just starting up in my third quarter at Stanford.
What were the origins of MIPS? How did this get started under Hennessy?
It goes all the way back -- I was privileged to be an intern in the 801 group at IBM, which did the first official RISC machine. John Cocke from the group gave a talk at Berkeley, which encouraged the Berkeley RISC project, and after that the Stanford MIPS project got started. So, it was really an amazing collaboration between John Hennessy and Dave Patterson, both of whom I know well, and I still work with Dave. They took different approaches. Dave took a hardware-centric approach for the register file, having different windows, and John, since he had a compiler background, took a software approach. So, they were very complementary, but still both testing out ideas in the area of RISC computing.
At a very broad level, were there particular problems that they were seeking to solve, or was it more of an academic, basic science approach just to learn more about what computers could do?
Well, a lot of the work previously had espoused making computers more and more complicated. There was the VAX computer system, which had extremely complex instructions. We actually had, when I was at Stanford as a grad student, engineers from DEC who were sent there for a six-month sabbatical, and they'd explain how impossible it was to execute the VAX instructions quickly. Just one instruction would have multiple dependencies within itself, so you couldn't do instructions in parallel, because even a single instruction could have all these complicated dependencies. So, that's why we were trying to do something simpler, more in the Seymour Cray style of computing, or the IBM 801. We didn't know about the 801 publicly. I had seen the prototype and the instruction set when I worked there, but I couldn't talk about it, obviously, because it was not published for several years.
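[To make the point about instruction complexity concrete, a toy sketch in Python rather than any real instruction set; the memory contents, register names, and "instructions" below are hypothetical:]

    # A VAX-style memory-to-memory add bundles several dependent steps inside
    # ONE instruction, so the hardware must serialize them internally.
    memory = {"a": 3, "b": 4, "c": 0}
    registers = {}

    def complex_add(dst, src1, src2):
        t1 = memory[src1]        # step 1: fetch the first operand from memory
        t2 = memory[src2]        # step 2: fetch the second operand
        memory[dst] = t1 + t2    # step 3: add and store; all three steps are chained

    # A RISC-style version exposes the same steps as separate simple
    # instructions, so a compiler or pipeline can overlap the independent
    # loads and schedule other work between them.
    def risc_add():
        registers["r1"] = memory["a"]                        # lw  r1, a
        registers["r2"] = memory["b"]                        # lw  r2, b (independent of the first load)
        registers["r3"] = registers["r1"] + registers["r2"]  # add r3, r1, r2
        memory["c"] = registers["r3"]                        # sw  r3, c

    complex_add("c", "a", "b")
    risc_add()
    print(memory["c"])  # 7 either way; the difference is what the hardware can overlap

[The result is identical; the RISC decomposition just makes the dependencies visible to the compiler and the pipeline.]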
Hennessy became your advisor, ultimately?
Yeah, after two quarters.
What was his style like as a mentor? Did he work closely with you? Did he hand you a problem and you worked mostly on your own?
Yeah, he was certainly not a micromanager. He was busy with a lot of things. He hadn't gotten tenure at that point, so he was trying to do work in three different research areas. There was the compiler and language research -- I mentioned how, because of John, we took a more compiler-oriented approach to the architecture, which I think ultimately turned out to be the best approach. He was also doing some CAD work for VLSI. So, he had three different research groups going at the same time. And assistant professors are always on the road trying to get funding or do this or that.
Obviously, this is way before he's on the trajectory to be leading Stanford as president. This is way before that.
Yeah, but he was just so talented that I don't think anyone was really surprised that he became president.
Norm, how did you get involved in consulting work as a graduate student? What was the culture like where that was the natural exchange of ideas between Stanford and surrounding technology companies?
I think that's one of the best things about Stanford. I mean, there are a lot of great things about Stanford, obviously. The connection to industry keeps the research pragmatic, and there's emphasis on doing things that are useful. I remember when we were doing the MIPS project, talking with Jim Clark, the founder of SGI and then Netscape, about RISC processors. He said that there are all these economic factors you have to deal with if you're going to introduce a new architecture. You have to have at least a factor of two, and a factor of three is even better, before a new solution can oust an incumbent in a market. So, the combination of knowing about markets and economics and stuff like that really helps guide the innovation towards more practical impact.
Were these connections formal? Would companies come and recruit right on campus, or was it more informal, meeting at coffee shops and things like that?
No, it was basically through the professors. I did a little bit of work for Silicon Graphics. That was through working with Jim Clark earlier. And then, John Hennessy, on his 20% days, besides all the other things he was doing, consulted for Silicon Compilers, which Carver Mead originally started. I never went down there with him, but I kind of tagged along afterwards, after he stopped going there. My thesis was CAD related, and they had data sets. I was looking for data sets for my research.
Did you have any interactions yourself with Carver as a graduate student?
No.
What was the process like for you, developing what would become your thesis research?
So, this is kind of a funny story. At Berkeley, the folks working for Dave Patterson could do a chip, like RISC II, and then write up the details about the chip and why they did what they did, and the performance measurements, and stuff like that. And that would be their thesis. So, I had done all that with the MIPS processor. I've mentioned this before in public talks, so it's not a surprise. John, besides being extremely talented, he had high standards. So, I went into his office for a periodic meeting, and he said, "This MIPS stuff is a nice project, Norm, but what are you going to do for your thesis?" And I was kind of like, oh. It would have to be something completely different. He said, "It's a nice project, but what are you going to do for your thesis?"
Norm, was there a sort of implicit suggestion there that there were expectations at Stanford that might not have been present at Berkeley?
I think it's more to do with the actual professor.
What was your response after he shared this news with you?
I said, "Let me think about that." So, I did a thesis in CAD, which was one of the areas that he was working in. We used the CAD tool to help design the MIPS processor.
What do you think was behind the way that John put this to you, that MIPS only offered so much from an academic standpoint, or that he had bigger dreams for what you were able to accomplish?
I mean, he was an assistant professor, so he wrote many papers. Not as many as Dave wrote. We joked that -- and Dave is the best writer out of anyone I've ever worked with. John is very good, but Dave is even better. I'm privileged to work with Dave now at Google. But as grad students, it seemed like every month there was another publication that Dave had about RISC computing. We used to joke about his paper-of-the-month club. We were only at half that rate with John. Maybe he was implying that there would be some aspect of MIPS I could write about. I think other people have done theses like that with John, where they took some aspect of the project and looked at it in more detail with tradeoffs. But I just decided to do something new, because we needed this CAD tool when we were designing the chip, and this CAD tool didn't exist. Kind of like with the auto-ranging frequency counter in high school.
What did CAD provide that wasn't previously available?
It was a timing analysis and optimization tool. Nowadays, that's a standard thing in industry. Some had been developed for mainframe design with ECL, but it was very different, because those had gates with a clear input and output. If you changed the output somehow, it wouldn't affect the input, whereas with MOS transistors, you can have pass transistors, and you can have an output flow back into the inputs. So, circuit-wise, it's different. Also, the clocking methodologies were different. It was an interesting problem, and I built an interactive mode -- we ran it interactively on VAX-780s -- where it would make suggestions on where to improve your circuit.
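[A minimal sketch of the core calculation in such a timing-analysis tool: each node's worst-case arrival time is the longest path through the logic. This is illustrative Python with an invented circuit and delays, not the actual tool, which also handled MOS pass transistors and clocking methodologies:]

    from functools import lru_cache

    # Gate delays (arbitrary units) and each gate's fan-in; "in1"/"in2" are
    # primary inputs. Both the circuit and the delays are invented.
    delay = {"g1": 2, "g2": 3, "g3": 1, "out": 2}
    fanin = {"g1": ["in1", "in2"], "g2": ["in2"], "g3": ["g1", "g2"], "out": ["g3", "g1"]}

    @lru_cache(maxsize=None)
    def arrival(node):
        # Worst-case arrival time = gate delay plus the latest-arriving input.
        if node not in fanin:    # primary input: signal available at time 0
            return 0
        return delay[node] + max(arrival(p) for p in fanin[node])

    # The critical path sets the clock period; an interactive optimizer would
    # point at the slowest gates along this path as candidates to speed up.
    print(max((arrival(n), n) for n in delay))  # -> (6, 'out')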
When did you know you had enough to defend as a thesis?
I talked with John about it, and he said that it would be good to add a little theory, just to tick all the boxes, or round the corners. It was pretty quick once I got the data sets. Like a year or something.
Besides John, who else was on your thesis committee?
Forest Baskett, who I eventually went to work for when I graduated, and Abbas El Gamal, who researched FPGA integrated circuits among other things.
What was the process for you getting your first job? Did people recruit you, or how did that work?
Yeah, Forest was a professor, and he had done various sabbaticals in industry, and was connected to DEC. So they set up the DEC Western Research Lab with him as director. One of the founding members was also one of the people who was on sabbatical from DEC. Forest came into my student office one day -- you know, an extended closet shared by three grad students -- and said, "You should come and build this stuff for DEC in industry."
And DEC is Digital Equipment Corporation.
Right.
What was their mission, their main product?
Well, the VAX computers that everybody was using at the time -- minicomputers. But they wanted to explore using RISC processors. That's what I had done when I was at Stanford, so I went to work there. The other thing of interest to me was that it was a research lab, so I'd get to publish papers. I could have gone to the MIPS startup, but then everything would be confidential, so I decided to go to the research lab instead of the MIPS Computer Company.
What were some of your first projects at DEC?
Well, I built a 32-bit microprocessor based on learnings from the MIPS project and work that other people had done. That was supposed to be a multiprocessor, but we never ended up building the multiprocessor aspects. The goal was to do work on multiprocessor research, but we ended the project just after getting a single CPU working.
Was it an academic environment? Were you encouraged to publish and present at conferences and things like that?
Yes. Sometimes, what I'd do is work on a project for a year, where I'd attend conferences but not publish so much. During that year, I'd have lots of ideas, and then the next year I'd write a paper every six weeks, or eight weeks, or something. Partly on the things that I had done during the year of development, but also on other ideas I had in the meantime. Also, Forest was teaching at Stanford. He'd co-teach classes, so for the first seven years, I co-taught classes at Stanford, too.
Who were some of DEC's key competitors, just to get a broader understanding of its business model?
They were the leaders in the minicomputer space. Coming down from the high end, there were IBM minicomputers, but they were known more for their mainframes. Within the mini business itself, the next most famous competitor was Data General. There's The Soul of a New Machine, a book that was written about that. Later on, the minicomputer market kind of got eaten up by the PC and the server market from the bottom.
Was that part of the explanation for DEC's decline in the late '80s and early '90s? Was it PCs?
Yeah, the attack of the killer micros, and servers, and stuff like that.
What were the companies that were sort of at the vanguard for this threat to DEC?
Well, there were the workstation companies -- they started with the Unix boxes, and from there, there were server computers made by lots of manufacturers.
How long did you stay with DEC?
Well, I’ve said I worked for three companies, but I kept the same desk. DEC got bought by Compaq, and then Compaq got bought by HP. So, I started on April 16th in 1984, and then I worked at HP until September 23rd in 2013. So, quite a few years.
But even with the purchases, it's all basically the same company, or the same part of the academic space that you occupied.
Yeah. And I'll tell an interesting anecdote, if you don't mind. My father worked for the same company, U.S. Steel, for most of his career. So, when I was interviewing at companies after getting my PhD, he asked what the retirement plans were like at these different companies. And I said, "Dad, that's so old school. In Silicon Valley, everybody moves to a new job every three years." And I ended up working for the same company until I retired, so it's kind of ironic following in my father's footsteps.
Norm, from your vantage point, were you paying attention to what Steve Jobs was doing? Was DEC paying attention, or was that considered a separate world?
It was pretty much a separate world. DEC made some halfhearted attempts to do PCs, but they were pretty bad. So, they just kind of stuck to the minicomputers until they were acquired.
In terms of a business philosophy, or even just corporate inertia, why not go more full-fledged into PCs?
I think in DEC's later days, there was an inconsistent strategic vision. We used to joke at the research labs that, like the different seasons of the year, there were different seasons of what we were told to do. Ken Olsen, when the labs were founded, actually came by personally a couple of times, and we got to meet him. That was great. Later on, we'd get other execs, like the VP of engineering, come by and say, "Drop everything you're doing about new ideas. The divisions need help." So, we'd say, "Okay, we'll go help the divisions." And we'd collaborate with the divisions. And then, six months later, he'd say, "Stop collaborating with the divisions. We need new ideas for the future of the company." That's how Alta Vista got started in our lab. And then, six months later, he'd come by and say, "Stop working on new ideas. The divisions need your help."
Norm, when did the internet become part of your work? Just simply the remote connectivity that the internet provided, when did that start influencing your day-to-day?
Well, we developed Alta Vista in our lab. We were one of the early adopters. DEC had a Class A IP address block of 16.0.0.0, so it was one of the first companies on the internet. It was a strong provider of networking gear. So, even when we were at Stanford, we'd use it to transfer files between universities. When Mosaic came out, we were all using that and trying it out, and writing our first HTML by hand, like assembly language.
What about email? Do you have a memory of your first email that you either sent or received?
Yeah, as soon as I got to Stanford, it was all based on email.
Oh, really? You were using email as a graduate student?
Oh, yeah. I would set up meetings with professors and had discussions with them over email.
Was DEC involved at all in fiber optic technology, or dealing with the need for increased bandwidth?
Yeah, they were a very early adopter of fiber optics. At the research labs, one of our roles was what they called the wrecking crew. We sometimes got new releases of software or other products, like a new workstation, and then tried using them to see what would break, to help out the rest of the company with its products.
What was the impact when Compaq acquired DEC?
Just getting back to the fiber thing. When we moved from our first building to the building we were in for a long time, it was around 1988, and they installed fiber optics. We had workstations connected with fiber optics, which was overkill because those bandwidths weren't so high. But we were experimenting with it. So, your next question?
About Compaq. What was the cultural and business impact of Compaq's acquisition of DEC?
So, they had bought Tandem just before us, and Tandem was more of an engineering company. They had NonStop; they ran the world's banks and stock exchanges. We had a commonality, a camaraderie with them, because they were also based in Silicon Valley. DEC was a very engineering-driven corporation, whereas Compaq was more about low-cost production, marketing, stuff like that. For most of my life, I worked for companies where they told the joke: our marketing is so bad, if we sold sushi, we'd market it as cold dead fish. Compaq was more of a consumer business, too. So, it was quite a difference there.
On that point, given the differences in the company, from your vantage point, what did Compaq get right and not get right in terms of how it optimized its acquisition of DEC?
There was consolidation going on, and DEC didn't have really any PC presence. NonStop was a niche market with a very high availability system. I think it was a rational merger. They kept the Alpha microprocessor line that we had been working on, helping out with it for a while, until the fab got too expensive. So, I think it went reasonably well. It was difficult working with the Compaq folks a little bit, because they were in Houston. We were used to working with the folks in Massachusetts. I think they had a more engineering, kind of collegial culture in Massachusetts.
Did you ever consider leaving during this time, or were you happy with what you were doing?
Yeah, we were still doing the research, and we were appreciated. We were doing things that were potentially useful and helping out the product groups every now and then. And we were still able to go to conferences and publish papers. Actually, we joked that for the first 18 months, they didn't realize they had gotten a research lab as part of the DEC acquisition, because we didn't hear anything from them. So, we were going along our merry way.
Norm, on the research front, what kind of research were you involved with at this point, the late 1990s, early 2000s?
I transitioned around '93 from microprocessor design to working on graphics accelerators. I had a lot of fun working on that with the product groups in DEC, because at the time we were doing our own graphics accelerators that were designed specially to work with Alpha processors. With graphics research, you could just look at a picture and say, hey, it looks better, or not. Whereas when you're designing a computer, it takes years before you turn the power on. So, that was fun. And then, in the very late 1990s, that's when I had my foray into telepresence. That lasted until about a couple of years after the HP merger in 2002. So, from 1998 to 2004, I was working in telepresence as well as doing other things in traditional computer system research.
What is telepresence?
I was working on robotic telepresence, which is one of those things that was too far ahead of its time. It was actually commercialized years later by a local company that I wasn't involved with, but in a more primitive version. The idea is that you sit in a surround-screen environment. It's 360-degree surround video, and you have a bar stool which you can sit on, so you can turn around and look behind you. And then there's a robot at the remote side, and it has a 360-degree video display of your face, and you have 360-degree bidirectional audio. We actually did an experiment where we took it to the cafeteria, and we sat at lunchtime with some of the other team members. Even in a crowded cafeteria with everybody talking, because you had the 360-degree surround audio, you were still able to use the cocktail party effect and listen to the people at the table with you. And they could see your face, of course. You were eating your own meal, because you couldn't share the food with them. But it was a pretty advanced system for 2000, and it was all driven through a remote-controlled robot. You could drive it anywhere that was ADA compliant. You know, the Americans with Disabilities Act? And we experimented with an arm where we could push elevator buttons, and stuff.
What would have been some of the practical applications of this technology?
Well, it was during a time that we had small kids at home. I used to have to travel a lot, and I was doing this in part so I could eliminate business travel.
Or you could just have a pandemic. That'll do it also.
Yeah, but the telepresence company -- they actually had a store on University Avenue in Palo Alto -- recently went out of business. It was just before the pandemic started. So, just when you need it.
How did things change with the Hewlett-Packard merger?
Well, with the Compaq merger, like I said, they didn't realize they had us for 18 months, and then they didn't really know what to do with us because they didn't even have a Compaq CTO at the time of the merger, let alone a research lab. So, the HP merger was like coming back home, because HP labs had a very long history of innovation. Bill Hewlett said -- they had a desktop calculator, and he said, "I want one that fits in my shirt pocket." So, the engineers got his shirt and measured the pocket and figured out how big of a space they had to work with, and then they went and built the first pocket calculator. And they had real physical research labs like we had had at DEC. I mean, at DEC Western Research Lab, we had fume hoods, a machine shop. We had all kinds of stuff.
Now, the Advanced Architecture Lab, did that precede the merger, or was that something that came about as a result of the merger, or after it?
Yeah, that was after the merger. At the time of the merger, Alan Eustace was our director, and then he went off to Google. Alan's one of the best bosses I've ever had. But what I learned at HP was, because it was large and bureaucratic, to get anything done, you had to be a manager. I had avoided being a manager up to that point in my career. So, I became a manager, and then I kept getting faced with the dilemma, either you can work for this person, or you can have these team members work for you. I always chose the latter option, so every year I'd double the number of people who were working for me. So, I ended up with 130 before I left.
What were some of the major projects of the Advanced Architecture Lab when you were running it?
Well, we also had to change our name roughly every two years or so.
Was this a branding issue, or was this substantive because you were really doing different work?
A little bit of both, because the lab was growing and accreting more people. So, by the end, when we were 130 people, we were doing a lot of interesting stuff. We had two optics teams. One on near term optics, and one on longer term optics. We had people working on early cloud computing prototypes. We had more of the traditional computer architecture stuff. At one point, we had some theory people. We had a lot of different research areas, but all in advanced computing systems.
What was the relevance of optics to this work?
The two optics teams published papers. One team was led by Mike Tan, and another team by Ray Beausoleil. Mike Tan was looking at VCSELs and making buses out of hollow, metalized tubes for backplanes. And Ray Beausoleil was looking at further out things like fairly highly integrated on-chip optics with ring modulators.
Were you the founding director of the Exascale Computing Lab, or that existed before you?
No, that was one of our re-brandings.
Oh, I see. But it obviously recognized the increasing importance of supercomputers.
Yeah. So, after the telepresence project ended and I became a lab director, I had to do a lot of traveling to Washington, DC, like four times in one quarter to try to get research funding from DARPA and other places, and attend related meetings.
Did you have any interaction with the national labs that were pursuing exascale projects, like Oak Ridge?
Yes, we had collaborations with them, and I got to tour ORNL. Parts of the national labs were very interesting. We met them at supercomputing conferences and did a lot of work together. And I attended various other supercomputing conference workshops.
Was the development of exascale computing more that the technology was available to achieve these levels of computational power, so let's do it, or was it more that there were problems presenting themselves that only exascale-level computing power was capable of dealing with? Like a chicken-and-egg kind of question. Which was driving which in these developments?
When the exascale programs were started, we were more than an order of magnitude off from exascale, because they had the petascale program, and then the exascale program. And Peter Kogge, who's a professor at -- he might have retired by now -- at Notre Dame. He was leading the early exascale investigations. We collaborated with him and hired some of his students. I was on their thesis committees. So, there was a big reach. It was a pretty ambitious goal to even talk about it because the power per bit would have to come down dramatically to build a practical exascale computer that didn't need its own nuclear reactor.
Norm, this is a broad question. I'm not sure where it fits chronologically, but while we're in the neighborhood of supercomputing and exascale computing, when does the computer industry start to grapple with the energy consumption implications of computing at this scale? From a climate change perspective, from a cost benefit analysis perspective, when do people start dealing with how to supply the right amount of energy for these computer projects?
Well, part of the exascale program was providing exascale computing at roughly the same power as had been used in previous systems. So, a lot of it is about increasing the efficiency. I know there have been some papers that have gotten a lot of publicity about how if we train an ML model, we're going to generate tons of carbon, but Dave Patterson, who I collaborate with, is writing a paper to correct some misperceptions in those articles. Actually, ML is saving a lot of power, because what we're doing with our Tensor Processing Units would consume ten times more power on CPUs. So, we're actually providing a lot more computing for less power. So, right now it's kind of getting a bad rap that's unwarranted. But you know, the drive for exascale is that there's always a need for more -- every time we've provided an order of magnitude more computing, or two or three, we've enabled new things. Like, with global climate modeling, a decade ago, or maybe as late as five years ago, they modeled the whole state of Florida as one square for the computer model. Obviously, some parts of Florida are water, and some are land, so they put an average number in there. If you've ever been to Florida, depending on where you are in the state, as it gets to the afternoon, it starts to get cloudy because of the humidity. There are all these small-scale effects, but the entire state of Florida is one vector of data -- so, exascale can give us higher resolution and better solutions to these problems.
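[A standard scaling argument, added for context rather than taken from the interview: refining a 3-D climate model's grid by a factor n multiplies the cell count by n^3, and stability constraints typically force the timestep to shrink by n as well, so

    \text{compute cost} \propto n^3 \times n = n^4

A 1000x jump from petascale to exascale therefore buys only about 1000^{1/4} \approx 5.6x finer grid resolution, which is why each additional order of magnitude of computing keeps paying off for models like the Florida example.]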
Was the Intelligent Infrastructure Lab another one of these re-brandings?
Yeah. I was just going to mention that when we were talking about the other.
With intelligent infrastructure, is there an AI component that's implicit in this rebranding?
Not really. It was more like reducing your power consumption in data centers by not running the fans more than they need to, or having flexible provisioning of data throughout a data center by having flexible networking, and things like that. It was smart as opposed to AI.
It's another general question, but as you're rounding out your career at DEC, and then Compaq, and then HP, what was the culture, and what were the ethical and legal considerations, with regard to patents? Patents that you applied for as an individual, but that inevitably were part of what you were doing for the company.
Well, DEC had a very strong patent program, and they would actually pay for -- I went to their offices once, massive rooms of filing cabinets and stuff. But they would pay for engineers who had been with the company for a certain number of years, like five or something, to go to law school. They figured the best patent attorneys were ones who were familiar with the state of the art that they were going to be writing patents on. I always thought that was nice. So, there was a strong emphasis on patents at DEC, and when they broke up the company and had to sell the fab, they were able to monetize the patents. Compaq, not so much in terms of patents, but HP actually kind of overemphasized patents, because some of the CEOs measured research output by the number of patents. Not all patents are equally good, right?
When did you become involved with VLSI design?
That was my first quarter at Stanford, in the Mead & Conway VLSI class.
Did you remain interested in it?
Yeah, yeah. I've always been interested in it. One thing, though: when I was at HP, I wasn't able to work on chip design. So, I'm really happy that I've been able to work on chip design at Google. I think chip design is a really interesting problem for me because it cuts across many disciplines, and it's also a zero-sum game. If you're writing software -- nowadays, memory systems on computers are big -- you can add another line of code, and even if it's not executed, no one will notice. It's okay. It didn't cost anything, really. Whereas in chip design, if you add some transistors, the die can only be so large. So, you have to leave something out, or do something different. It's a zero-sum game, which I think is more challenging than just being able to add stuff that doesn't cost anything.
Going back to the early days of Alta Vista, to what extent, before you joined Google, were you working in areas that were directly competitive with Google?
Well, I didn't work on Alta Vista myself, but my colleagues worked on that. So, I was familiar with search engines, but I wasn't working on them.
What were the circumstances of you joining Google? Did they recruit you?
They tried to recruit me -- about half of our original lab, or more like three-quarters, went to Google around the time of the HP merger. But at that point, I was still doing telepresence, because I had three young kids at home -- who are now older kids at home because of the pandemic; the youngest doesn't get to be in person at the University of Washington and has to work or study from their own childhood bedroom. Anyway, going through a startup, especially in the early stages -- I'd consulted for startups in the early stages, and it's very all-consuming, and I wanted to spend time with my family. For many years, I had lots of friends telling me I should go to Google, but then when I turned 55 and our kids were more grown up -- or I guess when I was still 54 -- I met Jeff Dean at a conference, and he was telling me about ML, how everywhere they tried out ML, they struck gold, to use a California analogy. They were limited by how fast they could do the computation on CPUs, and they really needed an accelerator to enable these ML applications. I thought, I've always loved chip design, and this sounds like it would be fun to work in a green field, not worn-out soil. And it sounded like fun, so I went to Google. It was a good time to go for me.
What was the initial project or group that you joined at Google?
Platforms, the one that develops the infrastructure. I went there expressly to develop the TPUs, basically.
What was the onboarding experience like? I know with Google, Google has a very specific corporate culture, a research culture. To what extent does Google take pains to orient you to their way of doing things?
Well, it was unlike anything I had ever seen before. When I started as an employee, there were almost two weeks where they explained to you, at a high level, how pretty much everything in the company worked, and how to code. I wasn't going to be doing coding, but it was about how their software systems were set up, how their ads auctions ran, how data was stored, how YouTube videos are stored. They just told you everything. I'd never been to a company where it was so open, where they tried to educate you about all the business in your first two weeks.
Norm, it might be a difficult question to answer because you weren't there from the beginning, but you joined -- between 1998, the founding of Google to the present, you really joined squarely in the center of that. Would you say when you got there, did it have more of the feel of its origins, or more of the feel of where it is now, just in terms of the relative maturity of Google when you became a Google employee?
I think it had the feel of its origins. The day I started, they still had the TGIF, so they celebrated the 15th anniversary the week after I started. The founders, Larry and Sergey, were still the MCs for the TGIF, and at that time, they talked about lots of details in the Thursday meetings. They talk about fewer details now, because the company covers such a broad scope. So, they save more of the high-level topics for the TGIFs, and they don't have them every week anymore. Zuckerberg used to do that too at Facebook. I don't know if he still does. I could sit in the front row if I came early, and Larry and Sergey were right there in front of me. So, it was pretty amazing, even at year 15, to have that.
What was the research culture like, in terms of comparing it with Compaq, HP, in terms of you working on the things that were sort of academically or intellectually interesting to you, versus what was relevant or deemed necessary for the company?
Well, I consciously made a decision not to be in research at Google. So, I didn't join RMI (Research and Machine Intelligence), which is also the organization that sponsors the quantum computing research, for example. I felt this was an opportunity to impact the world through ML at Google scale. Maybe in the future I'll transition back to more of a research role, but after just a few years, I could take my phone, use Google services, and get my voice translated to text by a chip I designed, sitting in the data center. So, you just think to yourself, I'm enabling that for billions of people around the world. I think most people want to make the world a better place. I don't know if you saw The Current War movie.
Yeah, sure.
So, he says, "I want to build a legacy. I don't want to just be famous." Westinghouse said that, right?
Yeah. And you recognize, of course, that Google's reach was truly, historically unparalleled in that regard.
Yeah, so any ML advancements that were enabled there would have a global reach.
Did you see yourself as involved with Google's remarkable diversification, going from being primarily a search engine, to being involved in all of the things that Google does now?
No, the main uses of ML are still search and some of the other applications. So, I think it's foundational. It's used in YouTube, like Watch Next.
I didn’t know that.
Watch Next is the internal name for the thing that produces the recommendations for you.
Oh, in YouTube, the thing that tells you what you should, as it says, watch next.
Yeah, yeah. And when I started, sometimes they were just random things -- like, why do I care about this teen singer I've never heard of? That's irrelevant to an old guy like me. But nowadays, with the ML -- and it actually runs on TPUs -- I have pretty eclectic tastes. For a while, I was watching all these videos about nuclear metastable isomers, and it recommended all these other great videos on the topic. So, it's not random videos anymore. I've watched videos about the construction of Oak Ridge, and personal histories of people at Hanford, that, according to YouTube, have only been seen by 100 people. But they're of interest to me.
And you wouldn't have found it otherwise.
No.
Did the Alphabet reorganization change things for you much at all?
No. I think that was just to provide more financial transparency to the investors. Also, it was a way to put Sundar in charge of Google, because there are really too many things going on for even two very bright, accomplished, energetic people to stay up to date on by themselves.
Another sort of broad question, how has ML changed over the past decade or so?
It's been growing, compounding very quickly. We've seen 4x growth in model size every three years. The training requirements are also increasing very fast. But the capabilities are pretty amazing now. Machine translation rivals humans for many languages, and of course there are now single models that can translate 100 different languages. No one human could translate 100 different languages, so these are superhuman capabilities.
Norm, as you say, you're careful not to be a manager. To what extent does Google encourage mentoring on an informal level? What opportunities do you have to serve as a mentor?
Well, Google has a great structure where they have managers, and then they have tech leads, and the tech leads don't have to be managers. In the past, Intel had "two in a box" for some positions, where a technical exec was paired with a managerial kind of exec. This is not two in a box -- there are different numbers of TLs and managers, not one to one -- but it really allows people to specialize in what their gifts are. I'm what they call the Uber Tech Lead for Tensor Processing Units (TPUs); there are other tech leads who don't report to me, but in terms of technical decisions, escalations come to me. So, I do work with a large team of over 100 people. The TPUv1 paper had 77 coauthors, and the number of people I work with has only increased since then. I have a lot of one-on-ones with team members to mentor them. That's what a lot of my day is, actually. At HP, we had a formal mentoring program where people without a mentor could ask for a volunteer. They just announced something like that at Google, so I may sign up for that as well. But most of what I do is mentoring.
On that basis, Norm, to the extent that the younger employees at Google obviously represent its future, what are some of the things that the people that you mentor are most interested in, technologically and intellectually?
Well, it's a high-achieving culture, and a lot of people are interested in growing their careers. So, besides the technical stuff, I give them career advice. They want to know how many projects they should be working on, whether they should add a new project, things like that. I try to give people the advice that when they're young, they should work in as many areas as they can, ideally transitioning through adjacencies, so they can go from one aspect of design to a different aspect of design, or work on the compilers, or work on networking performance -- something that's related, just so they get a bigger picture. One of the things we always tried to do when I was hiring at HP was get people who had real systems-building expertise, because then you have a respect for complexity that, if you haven't built a real system, you might not have. And you really have to respect complexity, because you don't want any unnecessary complexity. That brings risk and a whole lot of other problems.
Norm, is the quantum computing endeavor at Google largely walled off from the other aspects of the organization, or is that more of a porous boundary?
It's in Santa Barbara and L.A., and it's part of RMI, which is headquartered in Mountain View. I'm not part of RMI, so I don't know how many presentations they give. I think it's not totally secret, but quantum computing is pretty strange and not intuitive, so it's not like a lot of people have input that's relevant, really. I was involved with hiring some of the team, so I learned some things about quantum computing.
It's obviously a very provisional answer you'd have to give, since we don't even know what quantum computing is really going to look like or what it's going to be good for. But I wonder the extent to which you see what you've done in terms of your contributions as relevant to quantum computing, and inversely, how might quantum computing, if and when it's truly achieved, be impactful on the things that have been important to you over the course of your career?
I think of quantum computing as a totally different domain of computing. You're not going to use a quantum computer to serve web pages, for example, or even do text-to-speech or speech-to-text. In computer science, algorithms have various orders of complexity, like order n-squared and order n log n. Quantum computing, because of its very nature, is best used on exponential problems, like how the atoms in a molecule arrange themselves, and things like that. So, it's not like when TV came out and they said no one would ever have a radio again. I think it's going to add to our capabilities for these exponentially hard problems that we can't solve with traditional computers, because traditional computers are basically linear: you add more chips to a supercomputer and it runs linearly faster, if you're lucky -- if you're not, you won't get the entire benefit. But solving these exponential problems is where quantum computing is going to be.
So, as you say, to the extent that there's this perception out there that once quantum computing is achieved, it's going to gobble up everything else, as you see it, classical computing will always have its uses. It will always do things better than quantum computing in certain areas.
Right. Something like a scan of a database doesn't make much sense on a quantum computer.
From your vantage point, where are the intellectual origins of quantum computing? Do you see it coming out of the development of classical computing, or more in the realm of theoretical physics?
Well, there have been some early papers, for example from Shor at Bell Labs. But I think until recently, they didn't have some of the tools they need to reach the levels of computing they've recently achieved -- for example, working down at millikelvins, and dealing with impurities in the materials that can cause problems with trapping. There are just a lot of things to be worked through. I think that was one of the things that impressed me with the Santa Barbara lab, that I can talk about: they had -- or rather, have -- a very good engineering approach to it.
Norm, just to bring our conversation up to the present, in terms of the narrative, what have you been working on in the past few years? What are some of the big projects on your agenda?
It's been all the TPU chips. We basically don't talk about chips before they're in use; we typically talk about them after they've been in use for a while. That's as opposed to other participants in the marketplace, where there are rumors for years about what Nvidia's next chip is going to do, and then they announce it before you can get it in quantity. Most companies operate that way, but since our primary focus is internal use -- we do offer some in the cloud, but the main thing is vertical integration within Google -- we don't have a need to pre-announce things, or even announce things when they first go into production. So, all together, I've worked on quite a few chips. Five are in production so far, but we've only talked about three of them. And I'm working on future generations. So, it's almost a chip a year.
Do you see yourself continuing on that trajectory, remaining interested in chips at this level?
Yeah, because I take the slowing of Moore's law and the end of Dennard scaling as a personal challenge.
Norm, for the last part of our talk, I'd like to ask a few broadly retrospective questions, and then we'll end looking to the future. So, the first is, as you've emphasized, it's tough business to predict the future, but looking back over your career, going all the way back to your early interests in computers, what has most surprised you in terms of where you thought technology was heading, and where it actually ended up?
I think no one expected Moore's law, and all the other innovations that have come along with it, to get as far as they did. You know, I grew up on science fiction films like 2001: A Space Odyssey. The astronauts aboard the spacecraft have those tablets, but when they put them down, the images were just projected as a movie effect. Now that we have iPads, and iPhones, and Android phones, and laptops like these, it's pretty amazing compared to the giant CRT monitors and all the stuff we used to have. And ML, I think, is just beginning. Getting back to the supercomputing side, a lot of the national labs are now experimenting with ML-assisted scientific computation. So, instead of mapping clouds at a certain resolution, they can have an ML model say that if the parameters are like this, then the clouds probably look like that. It's kind of the reverse of image recognition. I've seen examples where they've significantly enhanced the resolution of climate models that way. They're doing the same thing with image processing. I was a high school yearbook photographer, so I've had a long interest in photography, and there's this tradeoff between the resolution of a digital camera and its light sensitivity: with less resolution, you can have bigger sensor pixels. There was one article that said you don't need to buy a super-high-resolution camera anymore because the Adobe software is able to interpolate so well. It basically understands objects from an ML standpoint and can fill in the blanks with useful detail. I don't know if you've used Photoshop, but the whole software is amazing progress. In the old days, when I was growing up, the Soviet Union would have some guy working in the darkroom -- cutting negatives, airbrushing -- to eliminate someone who had been thrown out of the Politburo. Nowadays, you can do all of that in a few seconds with the Adobe smart tool; it fills in the background with ML techniques. So, hopefully we're just beginning with ML. I think it'll have a lot of exciting benefits.
Going back to your comment about the importance of building a legacy that has particular societal benefit, what are you most proud of, or what gives you most intellectual or social satisfaction, in terms of what you've contributed to the field?
I think my current work, just because of the global scale. I was always proud of my various patents and publications, and of being named a fellow of this, that, and the other thing, but I've derived the most satisfaction from making people's lives better in various ways. A lot of these ML tools can also help people with disabilities. That's an important thing, and it's something that's very Google. One of the mottos is, "We build for everyone." So, they really try to be inclusive, both in hiring and the workplace, and also in the applications. At Google I/O, they always have one of those heart-tugging vignettes where some application has transformed the life of a blind person, or something like that. So, it's always extra special when you can have an effect like that.
Norm, the flip side to that, of course, is the ways that technology oftentimes accidentally can have a pernicious effect on people's lives, with anything from dependency to intrusiveness. What perspective have you gained over your career about putting brakes on at the right time, at the right place, in terms of where technology should be applied, and how it should be applied?
So, Google was famous in the beginning for having that motto, "Don't be evil." They don't say it anymore, but it's still kind of an undercurrent there. They're very careful: very few people in the company can have access to the personal data of the users, and they can't share it -- it's a fireable offense. So, we try not to be intrusive. Some of the content can cause problems, but they're working on that. I think that's a bigger problem for society, though. It's not like we have bowling teams anymore. You've probably heard about that book, right?
Bowling Alone by Robert Putnam.
Right. But on the other hand, one of my relatives is still in the U.P. They've put together this family photo book, with pictures from family members spread not only throughout the US but also in other countries -- an amazing picture book about the history of the family. That's something that wouldn't be possible without the internet. They also used a publishing service that makes short-run photo books. Some people get one or two photo books of their family vacation by sending off to the service, but they're making fifty books for the fifty family members.
On the question of applying technology in the wisest possible manner, who or what do you see as some of the most effective partners for Google? Either consultants, or the government, or outside organizations, academia. Where are Google's most effective partners that will help to allow it to harness its unprecedented technological reach into the future?
I'm probably not the best person to answer that. I know they have lots of programs they're working on. They have coding camps for underprivileged communities, and things like that. At one point, when Obamacare came out, they basically paid for employees to go on sabbatical and help get the Obamacare enrollment system working. I think there are a number of things. We have a bunch of National Academy members, and in some cases we provide input more as individuals than as a company.
Norm, last question, looking to the future. A simple one: what do you hope to accomplish for as long as you want to remain active in the field?
It's been an amazing ride on technology, and I just want to keep doing it as long as I'm able, but I also want to have some time for vacations. It's getting to that point in my career where I should travel more with my family, especially after getting the vaccine. I'm signed up to get my first shot in a week.
Oh, exciting.
Yeah. I think everybody's got the itch to travel again.
That's right.
But I think we have quite a few exciting generations left to go on the ML side.
Well, Norm, it's been a great pleasure spending this time with you. Thank you so much.
Yeah, thank you very much.