Courtesy: Malcolm Beasley
This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Malcolm Beasley by David Zierler on February 5, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Malcolm Roy Beasley, Sidney and Theodore Rosenberg Professor of Applied Physics, Emeritus, at Stanford. Beasley recounts his passion for basketball in high school and the opportunities that led to his undergraduate study at Cornell, where he describes his focus on engineering physics as just the right blend of fundamental and applied research. He describes his relationship with Watt Webb, who would become his graduate advisor, and the origins of BCS theory. Beasley discusses his work taking magnetization measurements on type-II superconductors and his thesis research on flux creep and resistance. He discusses his postdoctoral appointment working with Mike Tinkham at Harvard and the developments leading to reduced dimensional superconductivity. Beasley explains the technological implications of the fluctuations of the order parameter, and he describes the speed with which Harvard made him a faculty offer. He discusses the circumstances that led to him joining the faculty at Stanford, his immediate connection with Ted Geballe, and his work on A15 superconductors. Beasley explains the significance of the 1976 Applied Superconductivity Conference and the important work in the field coming out of the Soviet Union at the time. He conveys the excitement regarding amorphous silicon and how the KT transition in superconductors became feasible. Beasley describes his interest in thermal fluctuation limits and coupled oscillators, and he describes Aharon Kapitulnik’s arrival at Stanford and the origins of the “KGB” group. He describes the group’s work on alloy-based model systems and his idea to study high-resistance SNS Josephson junctions. Beasley explains “Pasteur’s quadrant” and why the KGB group was so well-attuned to dealing with it, and he discusses the impact of computational theory on the field and specifically that of Josephson junctions on digital electronics.
He surmises what quantum superconductivity might look like, and he describes his work as dean and as founding director of GLAM, and some of the inherent challenges in the “trifurcation” at Stanford between the Departments of Physics and Applied Physics and SLAC. Beasley discusses his leadership at APS and the issue of corporate reform, and he explains his role in the Schön commission and what it taught him about scientific integrity. At the end of the interview, Beasley reflects on some of the “forgotten heroes” in the long history of superconductivity, he attempts to articulate his love for physics, and he explains why the achievements of the KGB group represent more than the sum of its parts.
This is David Zierler, oral historian for the American Institute of Physics. It is February 5, 2021. I’m so happy to be here with Professor Malcolm Roy Beasley. Mac, it is great to see you. I’m so excited for this. Thank you for joining me.
[laugh] Well, let’s hope you’re not disappointed.
[laugh] Alright. So, to start, as I always do, please tell me your title and institutional affiliation.
I am a professor—well, I’m the Sidney and Theodore Rosenberg Professor of Applied Physics, retired, therefore emeritus, at Stanford University in the Department of Applied Physics.
Now, what’s the history, or do you have any direct connection, with the Rosenbergs and the endowed chair?
Not so much direct connection, actually, but the history is interesting. I looked up the Rosenbergs recently to get a little more information about them. The elder of the two, Theodore, passed away about ten years ago at the age of 101. As to the history, I don’t know this for sure, but I think basically it’s pretty clear. Having been a dean, I know how it goes. The Rosenbergs were great friends of Stanford. In fact, they were important philanthropists in San Francisco. They were early pioneer family kind of people, and they had a building maintenance business that must have done very, very well. So, my guess is that they were chatting with the development people, or had been called by them, and the development people said: “Here’s an opportunity where you can do something important for the university, because it was the first chair in applied physics.” And it went to Ted Geballe, which was no accident, I suspect, both because it was deserved and because they may have actually known each other from the world of the denizens of San Francisco. [laugh] So, I think probably-- but then when Ted retired, it came to me, and when I retired, it went to Aharon. So, it stayed in KGB, if you like. [laugh] As to direct connections, I think that by the time I got the chair, the Rosenbergs were beginning to have health problems, because the development people-- you know, and I kept correspondence, but I never actually sat down and met with them. I think that was probably considered not a good idea.
Mac, before we go all the way back to your origins and interest in science, I’d like to ask a moment-- a present question, and that is: in what ways has the pandemic been useful, in terms of the physical isolation allowing you to get work done that you might not otherwise have? And in what ways has the lack of interpersonal connection really been a detriment to some of the projects that you’ve been working on more recently?
Really not very much, because being retired, I have — there is nothing I have to do unless I say I want to do it.
Okay? So, I get called on to referee papers and proposals, and to write letters of recommendation. I also have some residual responsibilities in my public service work. But other than that, I control my own time. I have a wonderful study. You can see some of it in the background there. It has a nice story associated with it. When our kids finished college, my now-deceased wife and I did a lot of remodeling. And my daughter, who was in architecture, designed a new enclosed patio space for us defined on two sides by an art studio for her mom and a study for me. So, this is my study, and it’s wonderful. The project was featured in Sunset Magazine. All my accolades, [laugh] as my daughter likes to call them, are hanging on the walls—stuff like that. So, I’m really happy to work here, and it’s so easy to do things online now. I do a lot of Zooming, and I’ve gotten good at it. I don’t mind it.
Well, Mac, let’s set the stage. Let’s go back. I want to go back-- we’re going to focus on Cornell, but I’d like to set the stage before Cornell to get a sense of where you were as a high school student, in terms of the kinds of colleges that you thought were available to you, either by geography, or financial ability, or grades, and the extent to which you knew that it was going to be physics that you would focus on before you got to campus. So, let’s set the stage there.
Yeah. Okay. Well, up until my senior year in high school, as I like to say humorously—but it was true—my passion was basketball. [laugh] I was a pretty good basketball player, and we had a very good team my senior year. We went undefeated until we got to the Maryland state championship, where we lost in the finals, so we were 21-1. And modesty aside, I was a Washington DC all-metropolitan basketball player that year. But I was also a serious student. I mean, not the very best, but surely good. I liked math, and in my senior year, I took physics. And that really affected me. I mean, I somehow identified with it. And I had fixed up my car and all the things that teenage boys did in those days. So, I knew I wanted something scientific or technological. And in fact, at Cornell, I started in mechanical engineering. And I probably could have been happy there. But when I had my first physics course, which you took naturally as a freshman, I really began to wonder whether—I mean, I really enjoyed it, and I was amused—again, this is a little bit self-serving, but I was amused that all my buddies were running around, trying to look at all the past year’s exams and whatnot. And I looked at a couple, and I thought to myself, “Yeah, there’s one question on conservation of energy, one on conservation of momentum, and one that’s got a slight trick to it.” And so, seeing all that, I probably didn’t have to take the exam. That was the point, to see whether you recognized when to use those principles. And so I realized that indeed, I had a kind of a knack for physics as well.
What was that first physics class, Mac?
Oh, it’s just the standard stuff.
Physics 101. And also, my freshman year, I took calculus for the first time. I remember—now, kids are much more advanced when they get to college—so that sounds kind of quaint today. And I really liked that, too. It was mathematics that conceptually got to me. I mean, I really did think it was cool. So, with all this, the seeds were sown. And by the end of the year, I decided I wanted to go into the engineering physics program. But let me step back a minute first and say that I looked at a number of universities that I felt were good engineering schools. And I don’t remember whether I really considered the Ivy League schools, but I was signed, sealed, and delivered to go to University of Michigan, where we had-- the family historically had a summer place on a lake in the Upper Peninsula. But apparently, unbeknownst to me, there was a conversation between my high school basketball coach and the coach at Cornell. I don’t know who made that connection. But they ended up-- I had a Sloan scholarship offer, which was both for academics and athletics. And that got me to Cornell, and that was a very good result for me. I mean, it was really the perfect institution for me.
In what ways? Why was it a perfect institution?
Well, it was more-- it had all the academic opportunities—the humanities, the social sciences, and science, and whatnot. But it also just had a style of friendliness and congeniality, and the engineering physics program was a brilliant conception. I mean, it was aimed—I didn’t know at the time, because I didn’t know myself well enough. But really, I think I was the kind of person they had in mind, and it was really hard. It took five years, in which you took all the physics that the physics majors took, all the math that they took and then some, and various engineering courses. So, it was considered one of the toughest majors at Cornell, and I think I liked that, too. I won’t call it the “downside,” but as a consequence of my choice, the basketball people were not terribly thrilled. I think it could have worked out, but for various reasons—not all of them much fun to talk about—the relationship deteriorated, and by the time I was a junior, I said, “Okay, enough.” And we had a parting of the ways. But that’s when I really knew I wanted to do physics. My passion had changed, to put it [laugh] in somewhat dramatic terms.
Did you have the sense that engineering physics was, in a sense, the best of both worlds, that you got, essentially, as much of a physics curriculum as the physics majors themselves? But with engineering, you had that opportunity to go into industry, if that’s where you wanted to take things?
No. I don’t think I thought of it exactly in those terms. The way I would put it is—and I’ll explain why I put it this way in just a second—I think it was the blend of fundamental physics and applied science, and it was clear to me intuitively that it was where I belonged. But I didn’t know at the time, and even into the early part of my post-PhD career, where I belonged. I mean, was I really an applied person, or was I really one interested in the fundamentals? And I said, “I’m both, damn it.” But you know, there was always something in the language used at the time, or the way people theoretically conceptualized applied and fundamental physics, that implied they were not the same thing. Well, maybe they’re not, exactly. But something is wrong here. Along the way I discovered the book by Donald Stokes entitled Pasteur’s Quadrant. And I mean, again, it sounds sort of silly, but I just relaxed. I had my home. And if you know that book or that concept, you’ll know what I mean. And then I said, “Yeah, that’s me. It’s okay.” [laugh] “I don’t have to see the shrink.” So, the point is that I was always doing both and there is a quadrant in science-technology space where that is quite natural. Looking back, I’m sure many people understood this point, but it was not effectively transmitted to the students. In the aftermath of World War II, physics departments drifted more and more to very basic things: nuclear and high energy physics and maybe astrophysics now. And a lot of the condensed matter physics, or solid-state physics, as they talked about it then, was viewed as too applied. People said, “Well, that’s ready for engineering.” But they were wrong. And so, in schools with applied physics departments, where it was possible to blur distinctions a bit, everybody had a chance to have some territory. This worked much better. And now, I think nobody would say that condensed matter physics isn’t physics.
And they say, okay, it’s got an engineering side too, and they’re much more relaxed about it. And you know, there’s a whole chapter, maybe, of my leadership life where those relationships were sorted out here at Stanford. It’s good that they were.
Mac, who were some of the professors, as an undergraduate, who were particularly formative for your intellectual development?
Well, I think the answer is very clear. It was Watt Webb, who ended up being my PhD thesis advisor as well. But when I was beginning to-- when I was far enough along that I was getting to know the faculty—again, about my junior year—Watt joined the faculty, and I was very taken with him, because he was relatively young and very energetic. He was a terrific match for me. And, he had a big influence on me. I did my undergraduate thesis with him and then went on and stayed at Cornell, which in general is not a good idea. But Cornell was evolving so rapidly at that time that it was a wholly different place when I left as a Ph.D. than when I went in as a freshman. But there’s another individual, and I don’t know if he ever knew how much of an influence he was on me. It’s a kind of a singular story. But in my junior year, one of the engineering courses I was taking was aeronautical engineering—think aerodynamics—from William Sears, who was a very, very famous guy in that field. I don’t remember what they called the department, but it was in the engineering school. It was one of the engineering courses I took. And I really found the subject interesting, but I didn’t connect with the way he thought about it and presented it. And it wasn’t that he was a difficult personality or anything like that. I just didn’t quite get it. So, I decided-- but I found it interesting, so at one point, I decided: “Okay, I’m really going to learn this stuff, but it’s going to have to be my way.” And I did. And don’t ask me quite what I did, but I was okay then. And so, after the final exam, I got this note that Professor Sears wanted to see me. And I thought: “Oh, God. Here’s where I get [laugh] you know, crucified, or something.” And I came into his office, and he said, “I thought that was you.” He recognized my face. And I said, “Yes, is there something you want to tell me, or I need to do?” or whatever. He said, “Oh, no. 
You did exceedingly well on the exam, but I’ve never seen one written that way.” And I told him the story, and I said, “No offense, sir, but I just decided that if I was going to learn this, it had to be in a way that was natural to me.” And he thought that was terrific. And his final words to me were, “Mr. Beasley, you’re going to go far.” Wow. What a [laugh] motivating moment that was. So, after that, I knew what I wanted to do and what I wanted to be and got on with it.
Mac, what was Watt working on at the time you connected with him?
Well, he’d come out of industry, and he was in more the engineering side of materials science, I would say. He was an MIT graduate. I mean, he was well trained in science. And he’d acquired some administrative responsibilities at Union Carbide, and I think he’d decided he didn’t want to do that. He wanted to do research. So, Cornell hired him. Here he was, basically working on materials. That was foresighted on their part. So, he was new and fresh, and whatever else, and so I thought: this is exciting. And this was also when superconductivity was going through its [laugh] so called golden age. And so, he decided that one of the things he wanted to do was to work in superconductivity and see what he could do in the area of superconducting magnets. As a result, my undergraduate thesis was to build a superconducting magnet that could be used for research. And so, I was faced with all the problems that the industrial folks were faced with, trying to build these things. And there were a couple of problems. The major one was that they were very touchy, and the reason is, when the flux moves around, Lenz’s law—you know, electric fields get generated and drive normal currents, and so if not damped the magnet goes unstable. So, he and I worked out this way where we coated the wire with copper, and then we put-- it took a couple of tries to get this right. And then we put mylar in between the layers of the magnet so that it wouldn’t short between layers, and it worked. I saw he died last fall, and one of the things that they said he always was proud about was that with an undergraduate, he built the first stable superconducting magnet. I thought, “Wow, did I do that?” [laugh] And so, that’s the kind of guy he was. I mean, he was always thinking ahead, and he understood me, and I appreciated him. So, we had a good relationship.
Mac, did you recognize at the time, or looking back, that this is pretty advanced stuff for a senior thesis, and obviously this was a vote of confidence in your abilities, that he was asking you to do something that wasn’t just useful for your own education, but that had utility for real research in the field?
I did, and it didn’t take very long. He may have pulled some strings. I’m not sure. But there was a conference at MIT in the area of—they called them hard superconductors in those days, because they were mechanically hardened. And he suggested I go. He said, “I’ll send a note saying that you’re going to represent us.” So, that was my first conference. Then I realized I was in the big leagues. I mean, there was Kunzler and the other heroes of high-field superconducting materials at the time. And I could, more or less, understand what they were saying. And so, I was — when I got back and said to Watt, “Gee, that was cool,” Watt looked at me, and he said, “Yeah, I bet it was, and I’m sure you enjoyed it. But don’t expect all meetings to be that good.” [laugh] But I think the thing that he gave me, which has lasted and strongly influenced me over my entire career, is kind of an aphorism. I think that’s the correct word here. Anyway, he said that he spent — we were sitting in the lab one night, and he’d come by, and we were just chatting. And somehow, we got on the subject of how you choose problems. And he said, “Well, Mac, I spend 10 percent of my time thinking about the experiments that are going on in the lab now, and 90 percent of my time thinking about what should we do next?” And that was his strength. He was very, very good at picking problems. And later he went on to biophysics and had a super career, I would say.
Mac, let’s set the stage for the broader field of superconductivity. It’s the early 1960s. I wonder if you can explain: what were some of the experimental and theoretical advances that allowed you to do this kind of work? And looking back retrospectively, what were some of the limitations that prevented the field from being further ahead at that point than it was?
Well, of course, it was-- I came in just as what many of us refer to as the “golden age of superconductivity,” and indeed it was. I mean, if you think about it, there was BCS theory. There was Abrikosov’s discovery of type-II or high-field superconductors, based on Ginzburg-Landau theory, and then Kunzler discovered that these dirty superconductors would carry lots of current at high fields. That’s a practical materials issue. And then in my last year, the Josephson effect came on the scene. The field just was completely changed. I mean, all of a sudden there was a fundamental theory. There was a recognition that the Ginzburg-Landau theory was useful beyond what was contained in their original paper. And there was the Josephson effect, which just surprised everybody, including Bardeen. So, what are we going to do with all of this? I mean, do we really understand everything or not? What is the relationship between all of these advances, when you stand back and take your breath? And what can you do with it? What can you do with it scientifically? What can you do with it technologically? Josephson devices for new instruments? Possibly computers, or something? And magnets? Now, we have MRI magnets and things like that. I mean, it was just thrilling. Some people, the curmudgeons, said: the fat lady’s sung. It’s all over. And you know, if you were a real purist, well, maybe you could say that, but in terms of the opportunities, it was just explosive. And I think the curmudgeons were proved wrong. I mean, I think what’s happened in the last 50 or 60 years is maybe not like that era, but it has completely changed what are the important questions.
Mac, the normal course of events is that you go somewhere else for graduate school, that there’s a broader world out there. What were your considerations, and how instrumental was Watt and the prospect of becoming his graduate student — did that make the decision easy for you?
Absolutely. And again, as I say, Cornell was not in some steady state. It was expanding—both, literally and figuratively, and it was a very friendly place. So, what’s not to like?
Did you even bother applying to other schools?
I did, but I never-- well, the story I like to tell is that I applied to the Division of Engineering and Applied Physics at Harvard, and they didn’t admit me. So, [laugh] but I have a long tradition of being rejected by Harvard. But no, not really. No. I mean, I applied to other schools because it’s a good experience, and that’s fine. But it was a very-- in retrospect, I was changing rapidly. I was not ill-formed, maybe largely formed, and I knew I was in a good environment, and so I just stayed. It didn’t hurt that I got married at the end of the summer after I graduated, and my wife eventually sought her master’s degree in the School of Fine Arts at Cornell, where she had been an undergraduate, so it was good for her, as well.
Was there any consideration to stay in engineering physics? Why switch over to physics proper?
It was cosmetic. It didn’t matter. I mean, I figured if I’m going to stay, let me go over to physics and see whether that makes any difference. But it didn’t, because there were very good working relationships and cooperation between the Physics and the Engineering Physics departments. They were in the same building-- at least in the areas we now call condensed matter and materials physics, and statistical mechanics. And when Clark Hall came online, when I was a first-year graduate student, it provided a modern environment. There was a lot going on, just wildly interesting. I mean, the low-temperature guys were headed toward their Nobel Prize, and whatever else. So, no, it was just a good place to be without—let me be careful here—the long traditions and pretense of some other institutions. [laugh] And so, while it was fortuitous that I went to Cornell at all, all I can say is, it was a very, very good place and very, very good to me. So, it probably would have been hard to do better. But there’s luck here, and you know, if I’d gone somewhere else, I’d be doing different things. I don’t know. But I don’t regret it at all.
Mac, as an undergraduate senior, you were ahead of the curve insofar as you knew who your thesis advisor was going to be, you knew that superconductivity was the name of the game, but I’m curious. In terms of your intellectual sensibilities, the way these things were taught-- you have applied physics, you have experimental physics, you have theoretical physics. How well-formed was your identity in terms of the kind of physics you wanted to do in graduate school and beyond?
Well, I knew I wanted to be an experimentalist. I mean, I’m a lab rat, or was in those days. And I had an interest in applications. And the climate and culture of Cornell was one in which the fundamental physics, so-called, all the way over to things that got sort of—or at least, if not applied, using potential applications as part of their motivations—all that was healthy at Cornell. And so, it was congenial to the notion of Pasteur’s Quadrant, but nobody ever described it that way. Okay? And so, it was just-- I intuitively knew I was in the right place for me, and it was challenging. I don’t want to say I was taking the easy way out. Far from that. And it was a lot of hard work, but I mean, that’s okay when you’re motivated and feel you understand what you’re doing. But there was at this time still an uncertainty in my mind about the relation between fundamental and applied physics-- well, okay. I’m glad to have grown up where it was natural, but it’s not natural everywhere.
Given what Watt had you do for your senior thesis, I imagine the transition to graduate mentee was a pretty smooth one for you.
Yeah. In fact, there were no substantial problems. I mean, I had already—since Watt was new, I had contributed a lot toward building his temporary lab. I knew the lab. And when a year later we moved into Clark Hall, we had new space. Also, Watt had an interesting set of people around him. I knew many of the graduate students who were senior to me. And the faculty in other areas both experimental and theoretical were also friendly and wanted to know what I was doing, and so on and so forth. I think part of that, frankly—and this is self-serving—may have been: one of the things the physics department did that engineering physics didn’t was that they had two qualifying exams. The first one—not really a qualifying exam, but a general entrance exam that all the new graduate students had to take. The purpose was to figure out where each student was in their preparation and thereby make sure that they were in the right place in the graduate course sequence. Apparently, I did very well on that exam. So, I was visible as a new student. But frankly, I think it had mostly to do with the tremendous undergraduate education I got. [laugh] You know, I probably had taken more physics courses than many—not to mention more specialized courses, because I had five years to do it. So, that’s a little unfair, maybe, but I don’t think it hurt me.
Now, for your graduate curriculum, were you able to jump right into thesis research, or there was coursework that you still needed to do?
No, there was coursework that still needed to be done, and so the first two years are really, really hard work. But if you’re comfortable where you are, and everybody else is going through the same torture, [laugh] if you like, it was okay. I don’t look back on it as a trauma. It was just hard work. But I didn’t—to be fair to others, I didn’t have the tensions that come with finding your thesis advisor and getting the personal chemistry sorted out in some positive way. That was all done. So, I really did take off right into research. At first, we worked on the bulk magnetic properties of the new high-field superconductors. Along the way, Watt became very intrigued with SQUID magnetometers and whatnot. Thinking back, I don’t remember how explicit he was, but as best I can recall, he said, “Why don’t you look into these SQUIDs and see whether we can do something with them?” That’s how he was, kind of Socratic. And then the next day, he’d come in with another idea, and then you had to sort out what you wanted to do. But in those days, we had the privilege to do that, or the funding arrangements permitted it. And so, I thought: “Gee, building a SQUID that works in a high-magnetic field. Is he crazy?” [laugh] You know, that’s kind of an oxymoron, isn’t it? But then it’s obvious when you think about it: what you’ve got to do is put the SQUID somewhere else where the field is very low, and put your sample in the high field, and somehow connect the two magnetically. And that is done by using a circuit that, in those days, was called a superconducting transporter. You’d take two superconducting coils, and you’d connect them in parallel. And since they are superconducting, it works even at DC. Now, if you apply flux to one of the coils, some of it ends up in the other with no decay. Voila, you’ve transported the flux from the sample in the magnet to the SQUID, which is elsewhere, well shielded, in a low-field environment. 
That is the fundamental concept of the high-field SQUID magnetometer. And now, you buy them commercially, and if you see a schematic, it’s exactly what I did, which is I must say quite satisfying.
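The flux-coupling scheme Beasley describes follows from flux conservation in a closed superconducting circuit. A minimal sketch of the relation is below; the inductances $L_1$ and $L_2$ of the pickup coil and the SQUID-side coil are illustrative symbols introduced here, not values from the interview:

```latex
% Flux conservation in a closed superconducting loop:
% a flux change \Delta\Phi_a applied to the pickup coil
% (inductance L_1) induces a persistent screening current I
% that keeps the total flux through the circuit constant.
\begin{align}
  \Delta\Phi_a - (L_1 + L_2)\, I &= 0
  \quad\Longrightarrow\quad
  I = \frac{\Delta\Phi_a}{L_1 + L_2}, \\
  \Delta\Phi_{\mathrm{SQUID}} = L_2\, I
  &= \frac{L_2}{L_1 + L_2}\,\Delta\Phi_a .
\end{align}
% Because the loop is superconducting, I persists without decay,
% so the transfer works even at DC, as Beasley notes: a fraction
% L_2/(L_1+L_2) of the applied flux appears at the remote,
% well-shielded coil coupled to the SQUID.
```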
Mac, what was Watt’s style as a mentor? In other words, inevitably, when you would run up against the wall and you couldn’t figure something out, could you go to him…
…or did he want you to figure it out on your own?
No, no. You could go to him, and if you had made a serious effort to figure it out, then he was always supportive. I think that if I had gone in and complained because this or that was hard, he [laugh] wouldn’t have been pleased. But no, he kept track of what I was doing, and when there were problems, we found a way to solve them. And often, it was not him telling me what the answer was. It was: “You need to go talk to A or B,” or “Let me send a letter to D or E,” whichever was the appropriate way to do it. And we’d get the help we needed. He was stunningly good at that.
How big was Watt’s group? Were you his only graduate student at that point?
No, I was one of two-- no, sorry, three.
Who were the others?
Well, one I don’t remember. He worked on the mechanics of how certain kinds of spheres fracture when you compress them. We always joked about-- he studied busting your balls. [laugh] Come on, we were what, 20 years old? No, 23 or 24. But the other was Bill Fietz, who had been out in industry and decided that he wanted to come back and get a PhD. Well, Watt, having just come from industry, was sympathetic to what he wanted to do, and also Bill felt comfortable with him. So, he and I were the first two in superconductivity. He finished before I did. I mean, he was the first, and I was the second. But we built up that part of the lab and did all these magnetization measurements on the type-II superconductors. And while we were doing that, I was also trying to build a high-field SQUID magnetometer to study flux creep. But at the same time, I also began thinking about flux creep from a more conceptual point of view. It was a new concept, and obviously part of the broader question of resistance in a superconductor. [laugh] I mean, how does that happen? You know, that’s weird. And I began to appreciate that resistance in a superconductor was a fundamental issue in the field. It was in the process of being addressed when I went into graduate school, but it wasn’t really completely resolved, I think, until roughly a decade later. And I was privileged to play a part in that resolution. Looking back, maybe because I was thinking about SQUIDs and the Josephson effect, I was quicker off the start, at least among students, in beginning to understand that there is a connection to the Josephson effect, where one also sees resistance in a superconductor. But here one talks about quantum phase differences and their relation to supercurrents and to voltages across the junction, not the motion of vortices as in flux flow or (if there is pinning) flux creep. I mean, if you consider the I-V curve of a Josephson junction and you ask: what are the phases doing?
One phase is evolving relative to the other, and you get a phase slip. Okay? In other words, if you crank the phase on one side of a Josephson junction and keep cranking, it goes into a steady state of what we now call phase slippage, which produces resistance. While it was quite some time before I grasped the bigger picture, I somehow knew that all these phenomena must be related at a fundamental level. And indeed they are, but real clarity was yet to come.
Mac, how did you know that it was flux creep and resistance that would ultimately become your thesis?
Well, it was a hot topic, and we had credentials in the properties of type-II superconductors. And I don’t think there was anybody else trying to do that. The only potential competition, i.e., the only one who was doing similar things, was John Clarke at Cambridge. He was making a SQUID voltmeter, and I was making a SQUID magnetometer. And so, I think we were the first two young people to really turn SQUIDs into devices that could do a real measurement. Of course, I know John very well now, but we didn’t know of each other then. Fortunately, John applied his voltmeters to the proximity effect, so in the end there was no competition. Coming back to flux creep, the group at Bell Labs had shown that these currents in type-II superconductors were decaying logarithmically, and Phil Anderson—great man that he was—saw that there must be some connection then with where these logarithmic decays are known in mechanical properties, in other words, in other places. And he then made a simplified model of thermally activated motion of flux bundles into the superconductor in a high magnetic field—okay, if you have a gradient in the flux, some of this flux will move in. That decays the current a bit. Well, that’s resistance. But is it really that? So, I think at the fundamental level, the nature of flux creep was no longer an issue when I was done. We could even see the individual bundles moving in. So, there was little question that Anderson had the right physical idea. But he had nothing to offer as to why these flux bundles had the sizes they did, although I am sure he appreciated that the vortices were interacting, making it a many-body problem. But dealing with the many-body aspect of the problem would be asking too much, because flux creep was a new idea, so I’m not [laugh] criticizing him—you know, certainly not. But Watt asked me to do something that, in retrospect, I didn’t pay enough attention to. I think it was because I was trying to finish.
But he said, “Why don’t you look at the distribution of the sizes of these flux bundles?” And we did. My traces showed that the decays weren’t smooth. They were stepped, and each one of those steps reflected flux moving in. And they ranged from steps on the order of a single flux quantum up to hundreds, or whatever. I don’t remember the exact number. But the distribution was not Lorentzian or Gaussian or something we all know. And if anything, it was power law, and I didn’t appreciate the possible significance of that. I don’t think Watt did, either. But when I later read about Per Bak’s concept of self-organized criticality—think sand piles and how they gradually collapse—I could have kicked myself.
But there was redemption. There is a major book on self-organized criticality that was published recently. I read the book, and it was mostly computer simulations and whatever, how you get these algebraic decays and the distribution of event sizes. There even was a short section on experiments in superconductivity. Somehow the author had found my thesis result and quoted it. And I was totally dumbfounded. So, that’s a bit of leftover work. I think I’m going to have to do something about that. But it was really fantastic.
Mac, as with your undergraduate thesis, was your sense that what you were doing for your dissertation was relevant to Watt, what he was doing at the time?
Oh, yes, because I think he wanted to-- he was looking towards-- that’s a good question. I mean, cogent and illuminating. I suspect what he was thinking—he never said this explicitly, but knowing him as well as I do, I’m almost sure that he said to himself: I’m going to jump in now. We’re going to have fun with the science, because I know all about the mechanical properties of materials and keeping dislocations from moving, and this is going to be similar. And, indeed, it was. So, he had a long-term prospect in mind, and of course later, lots of people jumped on that idea. But he wrote some of the seminal papers on how the flux could be pinned.
Mac, given how foundational this research is, how you’re essentially present at the creation, how do you know when you’re ready to defend? What’s a complete thesis at this level?
Well, I had done collaboratively a lot of work early on in Watt’s group, which was preparatory. And then I built a new tool, and I used it to measure a new phenomenon—a newly discovered phenomenon—and illuminated it in a way that just hadn’t been done. I think everybody thought that was enough, and it was four years, so let’s get it done and out. So, it was not-- it was all natural. I mean, that was enough. It was a very substantial thesis, in that it dealt with two major things: a new tool, and using the tool. And that’s a wonderful tradition in physics. And so, no, I don’t remember-- of course, I was married, and we had had a child by that time, so there was some pressure to get out and have a more normal life.
Mac, I’ll test your memory. Who was on your committee?
Ah, yes. You mentioned that in the note you sent me, or something close to it. It was Watt, Jim Krumhansl, and Don Holcomb.
Was it a good experience, the oral defense?
Yeah, I think it was. But what was truly exhilarating was that I was asked to give what we now would call a talk, i.e., a formal presentation, in the weekly condensed matter physics seminar. And that was quite an honor. That was a bigger deal. But [laugh] let me go back to my qualifying exam, not one where I did so well. Somehow, it got off to a bad start. I mean, they wanted me to go this way, and I went another. And we never fully got [laugh] the inner product increased. So, I felt like a complete jerk. I mean, I think they realized it was partly their fault, and that I was doing so well in everything else that they just ignored it. Basically, it was a failure to communicate. But the next morning when I came into the coffee room where they held these exams, all my strained scribbles were still on the board. And I sat there wanting to erase them, but I couldn’t. [laugh] And ever since then, I never leave a classroom without erasing the board. [laugh]
[laugh] Leave no trace.
I cannot do it. And then when I was here at Stanford, my first couple of years, I heard the voice of Jim—with whom I had gotten friendly—and all of a sudden that qualifying exam experience came back to me in a rush. Jim popped into my office—he was a very friendly and-- I wouldn’t say “effervescent,” but upbeat kind of guy. And he came in and said, “Hi, Mac! I just wanted to say hello. I’m just passing through Stanford.” And he looked at my face, and he said, “My God. Are you alright?” And I said, “Yeah, I’m okay now.” [laugh] And I said, “Let me tell you why I reacted to your voice so strongly.” And he looked at me, so I told him this story, and I said, “Do you remember that?” And he said, “Are you kidding? Of course, I don’t remember that.”
Mac, what opportunities did you see that what you had researched was publishable, even as a graduate student?
Well, there were really three publications that I think I can call my own. I mean, of course, they weren’t totally mine, but where I was the driving force. And the first one was actually fairly early on. We were studying magnetic hysteresis loops in the high-field superconductors, not with a SQUID, but with a conventional magnetometer. And there were always-- you have a hysteresis loop, and in the region where the applied field is approaching and crossing zero, there were always instabilities. And of course, instabilities were a major problem in superconducting magnet performance at that point in time. And there was-- the hysteresis was understood in terms of a critical state model, which was built around the concept of a critical local field gradient in the magnetic flux at which the vortices would move. More specifically, the vortices are pinned, so you have to have a current density large enough to get them over the pinning sites. Okay? Well, that requires a gradient in the field from Maxwell’s equations. Okay. So, there’s a gradient in the field. That’s a current that then forces the flux in—fluxoids, vortices—the spatial profile of the flux evolves as you increase the field and then bring it down. But when you get back down to low fields, you’re transitioning from the flux wanting to be up to down, that is, there is an interface between up and down vortices where they annihilate one another and generate heat. So, I calculated—which had never been done—with the critical state model at what values of applied field this annihilation should occur. More precisely, when did you first get this interface? And when we plotted the calculated contour on the data, it perfectly lined up with the observed region of instability. So, that was my first real original contribution to physics. And I was very proud of that.
Mac, I know that when you look back on the Cornell years, you talk about discovering your voice. What do you mean by that?
Well, what I mean is that I was reaching a point where I was becoming comfortable in my own skin. Where I was confident in what I was doing at both the experimental and theoretical levels, and in a way that is natural to me. And I had growing confidence in my ability to pick new problems to study. I was past learning how to play the notes and could begin to make music. And I was working in areas where there were fundamental issues and practical issues, opportunities. All in all, I was in a good place for me. So, “I found my voice” is a metaphor for knowing who I was, and what I wanted to do, and that I was on the right track. I didn’t have to figure out any more deeply who I am, although I still had some frustration with the seeming dichotomy between fundamental and applied research.
And that included specifically the idea that even though this research clearly had industrial or even commercial potential, for you, it was always going to be basic science. It was always going to be an academic path.
No, that would be going too far. I think when I graduated from Cornell, I had two possibilities. One was the postdoc with Mike Tinkham. The other was a member of the technical staff at Bell Labs. And I hate to call [laugh] Bell Labs “industry,” except in the most generic meaning of the word. But I think then—I mean, either was meritorious, if you like. Either would have spoken to things I’ve perceived myself good at—learning, still, of course. But I think I just liked the university life. This is, of course, so common among graduate students. It’s hard to know whether it’s just kind of [laugh] late adolescence or really deep. But I said no, I want to be in a university. So, I went with Mike, and it was a special opportunity, I mean, to learn. I taught him, and he taught me. It was really a very nice thing. But then when I left Harvard, I was faced with this—[laugh] “left Harvard”—it was a choice between Stanford and Bell, and I chose Stanford. But that was easy at that point, and the Bell guys knew that. There was never any ill-will.
Mac, what was the connecting point between you and Mike at Harvard? Was it Watt?
It could have been. It may — the reason I paused is — I mean, surely, yes. But I think it may have been a little bit more than that. I was for some reason being introduced to some of the great men of that era. For example, in the spring of my last year, so toward the end of my thesis work, I was working in the lab, and John Wilkins came into my lab with a gentleman, elderly—well, more elderly than John—and he said, “Mac, this is John Bardeen, and I thought he might appreciate talking with you.” Now, Watt may have been in the background of that. But I was-- well, what can I say? [laugh] I mean, you know, I managed fine, and he was interested in what I had done, so it wasn’t in any way painful. And he was not as taciturn as was his reputation. And I told John Wilkins later that day, “Don’t ever do that to me again.” [laugh] Well, he did, because a couple of months later or something like that, he came into the lab with another gentleman, and he said, “Mac, this is Mike Tinkham, and I think you two guys should get to know one another.” Now, I have a feeling Watt may have set that up too, and John Wilkins just wanted to stick me a second time. So, Mike and I talked, and we obviously-- he was certainly interested, and I think we had a very natural chemistry. So, that ended up-- I’m sure that meeting was the origins of how I got the postdoc.
And what was Mike working on at that time?
Well, he was still-- you know, he was in superconductivity, but it was more type-I superconductors and all his famous work in the electrodynamics of superconductors. And he had some work going on in magnetism. But nothing of what we would call macroscopic quantum properties of superconductors, and Ginzburg-Landau level kinds of things. It was more BCS, because his claims to fame earlier in his career were tests of BCS. But BCS doesn’t really describe, or is not so useful for, the macroscopic quantum properties. That’s the Ginzburg-Landau theory. Okay? So, he was more into the quasiparticles, which reflected some residual of his earlier work. But I think he hired me no doubt in part to bring in some of these new directions that I had begun to experience as a graduate student: SQUIDs, vortices, and type-II superconductivity, and the more exotic macroscopic quantum properties that come out of Ginzburg-Landau theory. Just look at what we did after I got there. We took a SQUID magnetometer and measured superconducting diamagnetism above Tc due to fluctuations. This was right after-- I mean, the honor for the first appreciation that there even were superconducting fluctuations above Tc goes to Larkin for predicting fluctuation-induced conductivity. But given Larkin’s work, I said to myself, “Well, so, there’s fluctuation-induced conductivity, then there has to be fluctuation-induced diamagnetism, or the world is not right.” I did a little calculation that I won’t go into and saw, yeah, that’s really right, or could be. I mean, my calculation was heuristic, as people like to say. It’s a fancy word for kind of faking it, in my definition. But that’s what I did, and Mike was incredibly excited. He hadn’t thought of it. And we did the experiment, and there it was. So that started a whole new path. The whole field was beginning to appreciate that thermodynamic fluctuations were important in superconductors.
Although I hate to say it, before this point most theorists were too fixated on critical fluctuations to see this coming. You know, when you’re looking where mean field breaks down, the idea of studying fluctuations in the mean field does not seem exciting. Nothing fancy, in a way, therefore not interesting. Well, from their point of view, that was completely true, but if you say, “Yeah, but do they happen, and do they do things that are interesting and striking?” They missed that one, as Larkin showed, and then we came in right after that and showed that they’re in the diamagnetism as well. Now, I don’t want to say some others weren’t beginning to recognize the importance of fluctuations. Below Tc, Anderson had invoked them to explain flux creep. Langer and Ambegaokar invoked them to predict thermally activated resistance in superconducting filaments. Looking back, it is clear that fluctuations in superconductors have been one of the major issues in the field of superconductivity over my entire career, ending up with the notion that fluctuations in the phase of the macroscopic quantum wave function can limit the superconducting transition temperature. Well, as you might imagine, this conclusion does not sit well with some people. The lure of room temperature superconductivity is just too great, and the BCS notion that superconductivity arises when the normal state becomes unstable due to the formation of Cooper pairs is too ingrained. It follows that the way to raise Tc is to increase the strength of the pairing interaction. Before we began studying fluctuations in superconductors, nobody asked the question: what if I’m in the superconducting state, and I consider the fluctuations, which are inevitably there — can they destroy superconductivity? And if so, is it actually occurring? It’s analogous to saying, okay, if I have a ferromagnet, we know how it loses its ferromagnetism. The fluctuations just destroy the order, but they don’t destroy the spin.
And then on top of that, the Kosterlitz-Thouless theory of phase transitions, when applied to a superconductor, manifestly shows that the transition temperature—which is when the resistance goes to zero—is below the temperature at which the pairs form. So, the notion that pairs can form and then at a lower temperature lock their phases to make a superconductor has strong theoretical support. Okay, pairing and phase ordering may happen at the same temperature, as in BCS, but they don’t necessarily have to. Well, in my opinion, there is a fundamental limit to the transition temperatures of superconductors. And it follows, ironically, that you’re better off in three dimensions and high carrier densities, because that makes the phase fluctuations smaller. On the other hand, if you look at the cuprate superconductors, empirically you would say that two dimensions and low carrier density is the path to high-Tc. Well, yeah, maybe it is, but you’re paying a heavy price, because first of all, they’re layered, which is great for ease of doping. But that makes them quasi-two-dimensional, which in turn makes the phase fluctuations bigger, and to get the Coulomb interactions to dominate, you have to lower the carrier density, otherwise the Coulomb interaction is screened out just like in a conventional metal. And on top of this, if you look at the new reports of superconductivity in the hydrides, which, while still controversial, have a lot going for them—okay, suppose it’s right. A) It’s three-dimensional; B) it’s got very high carrier densities; C) it was predicted by theory. Eh, not predicted precisely, but the idea came from density functional theory. Well, now you have a familiar area of friction: [laugh] many-body theory versus density functional theory. I mean, these edges, which have been raw throughout my whole career, may be healing somewhat.
The traditional theories of electronic structure have demonstrated that they have a role to play in correlated systems and certainly as a tool in the search for new materials. Precision is not necessary if the qualitative guidance is helpful and provides insight. In short, one needs increasingly to pay attention to both points of view. And so, these are — I wouldn’t call them “cultural changes,” but changes in perceptions that are occurring within the superconductivity research community now.
Mac, to broaden out a bit from Harvard during this time, how well developed was reduced dimensional superconductivity?
Oh, it didn’t exist. I mean, that — fluctuations drove the interest. Well, I shouldn’t quite say that. Reduced dimensionality was also central to theoretical ideas about excitonic superconductivity—how do you raise Tc? Credit for this idea goes to Ginzburg and Bill Little, and even Bardeen. The idea is you have-- take a film or a thin layer of a metal, and you put a highly polarizable material on top of it-- electronically polarizable. Okay, so one electron comes along, and it polarizes that outer layer, and then it leaves, and then a second electron comes by, and feels that polarization. That’s just like the electron-phonon interaction but where the polarization is in the electrons. And it is achieved by juxtaposing two materials, and it’s only-- there’s absolutely nothing wrong with these ideas. Whether you can make them work in practice is another matter. But, in any event, these ideas highlighted that reduced dimensionality was important. And so, those two things, I think, just made us appreciate how important fluctuations were and how their magnitude was strongly influenced by dimensionality. And on the materials side, the things that you could do when you brought two materials together with the right properties to achieve superconductivity, as in a bilayer. I mean, it showed the potential of reduced dimensional materials. Put layers on top of one another and build a material up. So, both from the synthesis side and from the fluctuation physics side, both were focused on dimensionality. And when the theory of Josephson-coupled layered superconductors by Lawrence and Doniach came out, we had a good phenomenological theory to work with.
I first read their paper when I was still at Harvard, and when I looked at the equations, I thought to myself, “Well, it ought to be pretty easy to calculate the upper critical fields, when the field is perpendicular and when it’s parallel.” So, I got together with Alan Luther and his student Dick Klemm, and together we made a proper calculation. What it showed in simple physical terms is: if you have a layered superconductor, and you apply the field parallel to the layers and ask what’s going on in the material, well, if you’re near Tc where the Ginzburg-Landau coherence length is large—specifically, larger than the layer separation—a layered material is just an anisotropic superconductor. They’re interesting too, but not so different. But as you go down in temperature, and the coherence length gets small, eventually the vortex cores fit between the layers and the critical field diverges. Later, this new type of vortex became known as a “Josephson vortex.” And when our initial paper came out, it got a good bit of attention because it was the first paper that showed the physical properties of layered superconductors could be really different. And now this gets us into another one of the historical tensions, this time with people who were not sympathetic to condensed matter physics. And I’m not trying to be cute here. They said complex materials are just more complicated. “Why are you wasting your time? You’re not learning anything new.” It’s “schmutz” physics. Well, that’s just wrong, because when you go to these more complex materials, they are more complicated perhaps, but they are not just more complicated versions of what we already know. They have new properties that we didn’t know. And it’s hard to articulate that to people who don’t want to hear it. But it’s true. And this was when I realized, in my personal career, that if I’m going to go with all of this, I’ve got to go somewhere where they know something about materials.
That’s a valid point.
That was even before I met Ted.
So, that was a fortuitous intuition. I mean, what did I really know?
Mac, a little bit more detail on some of the excitement surrounding thermally activated resistance, and what were some of the technological challenges at the time?
Well, I mean, once the Bell group—both Kim and Anderson, take them together—showed that the non-equilibrium screening currents in a superconductor with pinning decay logarithmically. Okay? Then, you couldn’t hide. And moreover, it was obviously very important. Resistance in a superconductor? So, there was a lot of interest in understanding how that happens. And okay, we already knew about losses in a moving vortex core-- but I mean, it was just so fundamental that you just wanted to know as much as you could about it and get your mind around it, so to speak. My thesis began to address some of these questions, at least in the case of flux creep, which is obviously technologically important. So, that said, now your question: [laugh] where does all this go, beyond the obvious technological implications?
From a fundamental point of view, fluctuations of the order parameter were now in vogue, or beginning to be in vogue. And so, we had looked at fluctuation-induced diamagnetism above Tc. But others were looking at what we’ve—how do I say this—the decay of persistent currents when there was no pinning, if you like, in superfluid helium and superconductors. There was a paper by Michael Fisher and Jim Langer on helium. They were doing this work when I was at Cornell. I was not intimate-- I mean, I knew them, but we hadn’t made a connection, because they were talking to the helium people. And then there was later Langer and Ambegaokar, who showed that in a filamentary superconductor you should get this fluctuation-induced resistance as well. Okay, so you need a thin wire. Well, is that-- atomically thin, in which case that’s a nice idea, but help me. But no, these were in ordinary wires, because coherence lengths are big in type-I superconductors. So, if you just made a thin enough wire, you could see these effects, and there’s various ways of doing that, and of course, coherence lengths diverge as you go to the transition temperature. So, it was not so hard to make a wire in which these effects should be visible. That opened up the field of one-dimensional superconductivity. Moreover, you didn’t have flux bundles. You had these single, isolated thermally activated events in which-- let me put it this way. If you look at what the pair wave function looks like in a one-dimensional superconductor, it’s a helix. Okay? And it was known in Ginzburg-Landau theory that if you increase the current enough so that the pitch of the helix gets to be on the order of the coherence length, the superconductor goes unstable, defining the critical current. But what Langer and Ambegaokar added was: well, okay. Suppose I’m at a lower current, and the pair wave function fluctuates down locally.
Then the amplitude of the wave function locally can also go unstable, the local twist in the phase tightens and produces a phase slip. Okay? And after the phase slip, when it comes back up, one of the twists in the wave function is gone. One of the lengths-- and there’s a picture in the narrative I sent you that shows that. And that’s what you’ve got to understand, to understand a phase slip. Okay? The idea was brilliant, and to calculate it was not trivial. So, we started an experiment to test this idea. But unbeknownst to us, so did my thesis advisor, Watt, with Jim Lukens, back at Cornell. And Watt, showing his materials genius, used tin whiskers. [laugh] If you take a tin can and squeeze it, tin whiskers grow out from the edges. I forget how we did it. But it turns out that what Langer and Ambegaokar predicted was seen, quantitatively. So now, taking a broad view, you see what had happened is, we had gone from systems like flux creep, where it’s a bundle of vortices, which is a complicated concept, and it moves, or the flux flow resistance in cleaner superconductors, where the lattice of vortices moves coherently as a whole, generating flux flow resistance. But these earlier experiments didn’t reveal explicitly that what happens is phase slippage. That is, fundamentally all these phenomena are the same thing, even if on the surface they seem completely different. I don’t think that’s well appreciated, even today. And then there is the question, how does all this relate to the finite voltage state in a Josephson junction biased above its critical current? But wait a minute and think about it. You’ve gone from a bulk superconductor to a one-dimensional wire to a zero-dimensional Josephson junction, which now isolates the phase slip process for all to see. It is the essence of the dynamics of a Josephson junction. Okay? So, what one has here is a set of steps down in dimensionality to the core of the phenomenon.
Now, one of my frustrations to this day—and you can blame me if you want, among others—but that unity, that extraction of the essence, is not what you read in books. I mean, it’s there implicitly. But nobody has said: hey, let’s look at all these things as a whole. This is how it works, and we can build up the whole from one simple thing, the Josephson effect. Now, this is a common stage in how physics evolves. We see complicated things. We get models for each of them, and only later do we realize that they’re all related. But it’s usually even later yet when somebody writes it all down in a pedagogical way: “Okay, let’s start from the simplest case and deduce what happens more generally.” Alas, all the creative work it took to get there gets lost or made to appear a mere consequence of something more fundamental. This is pure reductionism. Such unification has not yet happened in the pedagogy of superconductivity since BCS. The books that everybody reads now are great books, but they’re 50 years old. And there needs to be some unification—about which I have mixed emotions. The distillation to the essence is at the core of what physicists do, but along the way, we are not kind to our history.
Mac, where in the narrative do you make the transition from postdoc to faculty member to assistant professor?
The year after I went to Harvard.
Did you have any sense, when you joined as a postdoc, that this was in the offing?
How did that come together? Who made the offer? Whose support was there that allowed this to happen?
Mike. I mean, I think he had it in mind all along. But he didn’t-- he was too much of a fine human being and a gentleman to say, “Well, what I’m going to do is throw you out there and see how well you do.” [laugh] No, he didn’t do that. But I think he hoped that something new and good would happen. But he wasn’t prepared to seek a faculty position until he had some time with me, which is perfectly reasonable.
One year is short for a postdoc without a faculty offer. You must have been pretty surprised when this was offered to you.
Yeah, I was. But you see, we’d already-- to look at it, if you’d like, from Mike’s point of view, I mean, after a year, we’d already come up with two really great ideas. And they had—I don’t want to say I initiated it all, but it was using the experiences that I was—the new knowledge and set of tools and whatever that I was bringing to the group that made it possible. And then I think when he saw that this was really going to work, he went for a faculty position. There are all these stories about how bad it can be for junior faculty at Harvard, but I did not personally have those experiences. I think that had a lot to do with my local environment, and not some of the issues of Harvard as an institution, which we can come back to later, if you want.
Mac, given how important and new and exciting this research was, how much were you publishing during these years?
Oh, quite a bit. No, there were-- I was trying to think of how many papers we published. I’d have to go look. But let’s see. I was there seven years, six on the faculty. Oh, it must have been easily two or maybe three publications a year.
And I’m very proud of the fact that they weren’t one thing elaborated. They were different things. And I think that—again, in response to one of your topics—I think that these were interesting things. They were published in places where they got visibility. They were not things that Mike was noted for doing. It was probably pretty widely known that I had built this SQUID magnetometer that was new. So, I was not lost in the flurry. So, I think that people realized something different was going on in Mike’s group, and there’s got to be a story there. And that, I think, didn’t hurt me at all.
Mac, at least in the early years, when you first got the offer and accepted it, how naïve were you, or not, about Harvard’s culture of not promoting from within?
Oh, I was well aware. But I thought I was going for a postdoc. And because I, too, was excited about the possibilities that-- I can say, well, I seeded several new experiments, but Mike contributed enormously to the thinking through what it all meant, and teaching me how to do physics by combining experiment and phenomenology. “Now, don’t run off with your Green’s functions, young man. Stay where the theories are physical.” And so, it was-- the scientific opportunities outweighed the risks. And I didn’t worry that the world would end if I didn’t get tenure. But that’s a different story. I mean, that’s probably worth talking about, but maybe later. I think Mike and I—he’s passed, but I know him so well—I think we both were so excited for what our partnership, if you like, had made possible, and we liked each other, and we respected each other. I wasn’t some little flunky down in the lab overseeing his graduate students. There was no question from the point of view of physics that this was a great opportunity for both of us.
When you joined the faculty, did you take on graduate students of your own?
It didn’t work that way. People came into the group, and they — maybe it’s kind of shades of—premonitions of KGB—but students came in, and they found their natural place between Mike and me. It was a group, okay, and you were in the group, and we were the leaders of the group. But students could turn to either one of us. So, it wasn’t a question of whose students were whose. When you get to Stanford, for a variety of reasons, that was better defined. But there, I was just starting to get some funding and whatnot, so I depended on Mike in the beginning. But I think it was that students knew how to be with both of us, and some leaned a little bit my way, and some leaned a little bit his way. There was never any tension over that.
Mac, what was the research or the collaboration that brought you to France?
Oh, you mean the conference there? It was a good opportunity to summarize the work we had been doing that was relevant to a conference on superconducting microbridge microwave/terahertz detectors. I mean, by the time-- towards the end of my stay or time at Harvard, we had moved on to studying more than just thermally activated phase slip resistance in filamentary superconductors. We had moved on to looking at the I-V curves of the filaments, which is to say: what do they do when they’re continuously phase slipping? And we found a whole new element in the phenomenon of phase slips in one-dimensional filaments. Specifically, the I-V curves had steps in them, and the steps were integrally related—the slopes, the differential slopes, were progressively increasing by factors of one, two, three, four, and so on. We knew immediately that we must be successively putting in running phase slip centers. But the discreteness of them and the distance between them in current was such that there had to be a long length scale in the problem. It couldn’t be the coherence length. Okay? If that were the case, they would just flood in. It was a much longer length. Well, long story short, it turns out to be that when you have one of these phase slip centers, you’re injecting currents into the quasiparticle channel, and now you’ve got to bring the quasiparticle relaxation physics into the mix. And they have long length scales. So, that was the reason. But to your actual question about the conference: we suspected that the same physics governing phase slips in filaments must also be going on in these little microbridges. And so, there was this conference in Perros-Guirec, which was basically addressing the physics of microbridges. And since I was on my way to Stanford, it was also sort of my swan song about my work at Harvard. So, I went over to France and reported all our work on filamentary superconductors.
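[Editor’s note: the staircase I-V curves described above lend themselves to a toy numerical sketch. If each successive phase-slip center switches on at a higher bias current and each active center contributes the same differential resistance, the slopes of the successive segments go as 1, 2, 3, ... All parameters below (critical current, onset spacing, per-center resistance) are illustrative stand-ins, not values from the actual experiments.]

```python
import numpy as np

def iv_curve(currents, i_c=1.0, i_step=0.3, r_center=0.5):
    """Toy piecewise-linear I-V curve for a 1-D superconducting filament.

    Assumes (hypothetically) that the n-th phase-slip center turns on at
    i_c + (n - 1) * i_step and that each active center adds the same
    differential resistance r_center, so the slope in the n-th region is
    n * r_center -- the integrally related slopes described in the text.
    """
    voltages = np.zeros(len(currents))
    for idx, i in enumerate(currents):
        # count how many phase-slip centers are active at this bias current
        n = 0
        while i > i_c + n * i_step:
            n += 1
        # integrate the piecewise-constant differential resistance up from i_c
        v = 0.0
        for k in range(1, n + 1):
            seg_start = i_c + (k - 1) * i_step
            seg_end = min(i, i_c + k * i_step)
            v += k * r_center * (seg_end - seg_start)
        voltages[idx] = v
    return voltages
```

Below the critical current the voltage is zero; in the n-th step region the differential slope is n times the single-center value, which is the discreteness seen in the measured curves.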
The idea was to see if our results were relevant in the eyes of experts on superconducting microbridges. History has proved that they were. For example, it was subsequently shown that if you take a superconducting filament and put big fat leads on it—I mean, literally—and then progressively shorten it, you go continuously from a filamentary superconductor with its helical macroscopic quantum wave function to a Josephson junction. So, we were right that there had to be some connection between our one-dimensional filaments and these microbridge devices. Of course, for complete understanding, you have to throw in the quasiparticle relaxation physics.
Mac, what was your major funding source during your Harvard years? Where was the money coming from?
The NSF and the Office of Naval Research, and Harvard of course provides more — uses more of its endowments for the support of graduate students than any other place I know. So, in some sense, the local funding played a role.
What were some of the undergraduate courses that you taught as an assistant professor?
Well, for the first three years, as I recall, I taught undergraduate electricity and magnetism, which was really a-- it’s such a wonderful subject. It’s beautiful as well as useful. The mathematics is lovely. So, I enjoyed that immensely. And I was young, so I neither had the aura [laugh] nor the pretense of a theoretical physicist at Harvard. I just taught it my way, which was great. And I learned a lot, with lots of exotic examples from superconductivity. The students liked it. So, that was fun. Later I taught a required graduate lab course, where I would now admit I was a bit “mischievous.” Some might say a bit naïve. Look, Harvard is a theoretical place. I mean, that’s kind of the way it operates. So, some of these theoretical students who had to take this lab were not thrilled. But I got along with them, because I was young, and they could bitch at me, and I’d tell them, “Tough. You’ve got to do it.” But I was always available to answer questions and make suggestions, but never do this or do that. And I must say that there was a very good set of experiments. They were classic Harvard experiments, like NMR, low temperature physics, along with non-linear optics and some others I have forgotten. But that’s just to show you the Harvard stamp. They were truly good experiments. I mean, they were challenging conceptually and technically. But all the equipment that you needed to do them was available in the teaching lab. So, I decided, in my overly enthusiastic way, that I wasn’t going to tell them what to do. I was going to say, “Here’s what you want to do. This is what the experiment’s about, and this is where you’re headed. You can read about the physics in this book. You can read about the experimental side over here,” in some paper or something. And I said, you know, “See what you can do.” And I also said, “And of course you’re going to run into problems. That’s normal. That’s part of experimental physics.
So, I’ll always be available to answer questions and give suggestions. But I think you need to-- at this point, if you’re a physicist, you ought to learn what experimental physics is like.” So, it was an—implicitly, at least-- I don’t know if I thought about it that way—but implicitly, it was very Socratic. I just asked questions, or made suggestions, or whatever. Well, it was hard for them, not only because some of them were theorists, but even some of those that were more experimentally inclined had never experienced working outside a structured environment. To deal with the real world and make something work was not something with which they had much experience. Today, that would not be true. Undergraduate students do experiments and research much earlier. But at the time it was very good, because they had to begin to learn how to think in the experimental framework of physics. [laugh] How do I make this thing happen? What are the conceptual problems, and what are the technical nuisances or whatever you want to call them? And so, for the most part, students were a little frustrated. They never got mad at me, because I was always accessible. But it was hard, and they weren’t used to having things be that hard. You know, think about it. Somebody that gets into graduate school in physics or applied physics at Harvard is pretty damned good. And all of a sudden, there’s something where they’re a little bit like a fish out of water. And I knew that, and I said, “They ought to learn this now.” And in the end, most of them succeeded. Some came by and said, “You were a bastard, but it actually was a good experience.” [laugh] I responded that I wanted [laugh]—what’s the word?—I wanted to leave a lifetime impression. I guess I did because 10 years [laugh] after I had left Harvard, I ran into a student who had taken that class—I remembered him—and he said, “Remember that lab course?” And I said, “Of course I do.” “You were a bastard.” But he smiled as he said it.
Mac, before we move on from Harvard, I want to ask a question that is perhaps in some ways speculative, but I think it’s important for you to reflect on it. And that is — and perhaps you’ll be humble, so I’ll say it for you: despite the culture of not having a large possibility of getting promoted from the assistant level to tenured, if not you, who? Given the importance of your work, given what you had done—I’ll answer that question myself, because of course, Howard Georgi was in the process, or had already demonstrated, that it could be done. And so, I guess my question there is: to what extent was the decision reflective of the lingering biases in the world of physics regarding experimentation, regarding solid state? We’re not so far away from Gell-Mann calling it “squalid state.” Right? To what extent was the decision not to tenure you not about you, but about sort of the hierarchy within the department at the time?
Well, although Harvard has a sort of theoretical feel, look at Purcell and at Horowitz, who wrote a famous book, “The Art of Electronics.” They were also there. And Pound, who never got a PhD. I mean, there was a strong experimental physics tradition. Okay? But it was very fundamental experiments. It wasn’t so much working out the properties of matter. But in the division, that was not a problem. And the condensed matter theorists in the physics department—Paul Martin, Alan Luther—knew me, and regularly asked me what was going on in the lab and whatever. I even had a joint publication with Alan. And in the division, there was—for want of a better term—a kind of Pasteur’s Quadrant view of the world, you know, that the interplay between fundamental and applied research was natural and mutually beneficial. I mean, that kind of value system was there. But still the institutional traditions of Harvard permeate the whole place. We want to be the greatest university in the world. Not a bad aspiration; don’t get me wrong. “So, what we’re going to do is, we’re going to find and hire the very best people we can, the best in the world, and we’re going to bring them to Harvard.” Well, that worked for a long time. But during this period, not because I was there, but just in the larger flow of history, that approach wasn’t working so well anymore. Not everybody was coming when called. Okay? And there was, to my sensibility, beginning to be a little worry. I mean, I don’t want to say, “a crisis of confidence.” The place is too good for that. That would have been silly. But they were wondering. This is not working the way we want it to. And so, I think that it took a while for them to appreciate that, no, they weren’t slipping. Rather, other schools were beginning to play at their level. Okay? And it’s perhaps normal that when you feel somebody catching up with you, you think you’re slipping. But no, maybe they’re just becoming good, too. Okay?
And I think now they fully understand that. And I don’t want to say that they were silly or arrogant, or whatever. That may happen at Harvard, but not where I was. And they just were beginning to worry a little. So, when I came up for tenure, I think people recognized that I had done some really good work and had great future prospects, if you like. But no, “The way we do it is, we define some areas, and we go out, and blah, blah, blah.” So, they did that, and they went out, and they found-- I don’t know where I was in the ranking, but they found somebody that had a bigger reputation than I did, and was somewhat older, and they went with him. In the end he didn’t come. So, I think—and all modesty aside—my interactions with Harvard, subsequent to that denial of tenure, have made it clear that they feel they made a mistake. And I appreciate that.
And so, in the division—certainly the applied physics part—I think my experience was embedded, by an accident of history, in a kind of a Gaussian in time where soul searching was emerging.
Now, was the writing on the wall there long enough where you were well positioned for your next move? Did you have feelers out to places like Stanford and Bell?
Yes, it was. I mean, they usually made the tenure decision when you finished your assistant professor appointment. They didn’t have many associate professors. You were a professor, and you had tenure. You were promoted, if you like, from assistant to full. They used the associate professorship, like in my case, where they wanted to continue the process, or do it better, or whatever. So, I was appointed as an associate professor without tenure at Harvard for two years. And it was in the second of those two years when the final decision was made. Now there is a kind of epilogue to this story. After I’d been at Stanford for some time. I don’t recall how long exactly. But anyway, I was invited to serve on a search committee for the new dean of the division. Paul Martin, who had become the dean after I left, had run his term, or wanted to step down, or something. And so, they were running a search for a new dean. At the initial meeting of the search committee, the assistant dean, whose job it was to review the history of the division—it’s now a school—went through the time frame of my tenure decision. I don’t know whether she knew—I know the dean did—that among the decisions made during that epoch, I was in that mix. In any event it was made clear that it was about the time when they began to see-- had some worries about: “Why is this not working? Why are we not getting the people?” [laugh] “We’re not keeping the good ones, and we’re not getting better or equally good ones from the outside.” And I sat there. It was an out-of-body experience. It really was. And the only thing I — I want to say it was cathartic. It was satisfying, not that they were having a bit of a problem, but just that — okay, that’s what I thought was going on, and I’m better off at Stanford anyway. I mean, it was just really that their procedures were not delivering the results they wanted, and they were aware.
Now, did Bell reach out to you at this time, once it was known that you —
Oh, Bell-- yeah, no. Bell, they made me an offer, too.
And so, it was a replay of the one before. I think I would have gone and probably been happy. But I wanted the academic. I mean, Bell was an academic place in terms of research, but not the full academic thing.
And I had this hunch that new materials were going to be important, and important for me, even if I didn’t want to do materials work directly. But I could do that collaboratively, and frankly, I don’t think I would have been good at it on my own. But I understood how it works. And so, that’s when-- so, it was very-- well, as I like to joke, I went to Stanford because I wanted to be able to work and have some kind of relationship with Ted Geballe, not because of the palm trees. [laugh] And that turned out to be true, and I was under no-- I mean, there were some concerns expressed to me that all was not well in the physics community at Stanford, which was true, and I knew that. But there were no problems in the part of Stanford where I was going. Not in Applied Physics. Applied Physics was not where the tensions were. The tensions were between Physics and SLAC.
You mean, between particle theorists in the department and SLAC?
Well, I would say the physics department and SLAC, not just theory. But the problems did eventually get sorted out. There are various versions of this story, but the way I remember it is that during the time I was chair of Applied Physics and Sandy Fetter was chair of Physics, we discussed whether the time had come to begin some rapprochement. It turned out that Burt Richter, who was then the director of SLAC, was thinking the same thing. There are also versions of this story that maintain that the Provost and the Dean of Humanities and Sciences were concerned about the situation as well. In any event, for sure Burt, Sandy and I got together to begin to talk. Eventually, we added David Leith from SLAC and Bill Spicer from EE, who had connections to SLAC through SSRL and was essentially a member of the broader physics community. We analyzed the sources of tension and began conceptualizing solutions that evolved into new arrangements and a new spirit. If you like dark humor, there was certainly an element that if you wait long enough, the problems go away, and then you can put it back together.
Mac, did you know Ted personally before you got to Stanford, or only by reputation?
Oh, no, no, no. We had met, and he had sent a graduate student to work with our group at Harvard, and that’s [laugh] another nice story. We had done this fluctuation diamagnetism stuff, and one of the—we talked about this a little bit before—that dimensionality comes in, in a major way. So, you can turn that around and say: if I want to know the dimensionality of the superconductor, I can measure the fluctuation diamagnetism. So, Ted was making these layered compounds from the transition metal dichalcogenides, intercalating them with organic molecules. This work was motivated by the original ideas of Bill Little, and to some degree, Ginzburg. And Ted, with his encyclopedic mind about materials, conceived of a way to make layered materials to test these new ideas. And as a way to establish the dimensionality of the materials, they did some experiments that amounted to looking at the fluctuation diamagnetism and concluded that they were two-dimensional. And I, being pretty [laugh] knowledgeable about fluctuation diamagnetism, felt that something was wrong in their analysis. But it was a very interesting thing to do. Using fluctuation diamagnetism to establish the dimensionality? I liked that. I liked the idea. I thought it was clever, but I had some technical issues. So anyway, the APS March meeting that year was held in Cleveland, and I was there. I went up to Ted, whom I had never met. I mean, I knew who he was. And I said, “I would like to ask you some questions and share some thoughts about that experiment.” And Ted, as he always is, was cordial, and as he always is, was very interested. So, I had my say, and he said, “Well, let’s get to the bottom of this.” Then he said, “If I gave you some samples and sent a student to work with you, would you be okay with that?” And I said, “Of course.” Okay. So, a student came, and we did the experiments, and it turns out that [laugh] I was right. But anyway, that doesn’t matter.
It started at a very-- it was a big issue, and we sorted it out. Ted’s student was Bob Schwall, and Dan Prober was the student on my end of the deal. Bob and Dan then also went on and looked at the critical fields of the samples and saw the diverging critical field phenomenon that Alan Luther, Dick Klemm and I had predicted. So, that was a lovely experience. So, after all this, Ted invited me to visit Stanford. I had a sabbatical coming, which I split between Stanford and MIT. So, my family—wife, two children, and a dog—took the train from the east coast to San Francisco. And I must say, it was fun. And we came back on the train from Los Angeles. And when we later moved, we took the train again, only this time, we didn’t go back. But the other little funny story about my coming to Stanford is that Ted tells the story about the Cleveland meeting a little differently. He says, “Well, this guy Beasley came up and told me I was wrong.” [laugh] I said nothing of the kind, but the mythology is good. I suspect that I was considered a target of opportunity. But there was broad support. They wanted to have superconductivity here at Stanford. To do that they needed somebody with a different, complementary style, or whatever. Also, I connected with electronic applications, which were of interest to EE. I don’t think we conceived at the time how close our relationship would become. It was the Ted Mac Amateur Hour before there was KGB. But that just happened. I mean, it was the opportunity, the chemistry was good, and the intellectual space was distinct. So, that’s what happened.
What was your first project with Ted when you got to Palo Alto?
Well, he had a program that involved materials work and deposition techniques to make superconducting tapes for power transmission lines. This was pre-high-TC. So, I put my shoulder to that project, while building up my lab and getting some funding of my own. But the first thing we did that was clearly new was to do tunneling work on the A15 superconductors, at the time the highest TC superconductors known. Bob Hammond, whom Ted had previously brought in, was a very, very creative and good thin film person. And so, we were also making thin films of more difficult materials, like the A15s, using co-evaporation. And they were smooth and beautiful. John Rowell was around, and he said, “My God, these films should be great for tunneling.” And indeed they were. It was the first real major scientific project that emerged after I arrived. It was truly collaborative, including John. And it was just the fact that thin films gave surfaces that were far better than bulk crystal growth would give you. And the tunneling densities of states were good enough that we could do McMillan-Rowell inversion analysis and extract the electron-phonon spectral function, which showed the phonon structure and from which you can get the BCS electron-phonon interaction parameter λ. Ultimately, we studied all the high-TC A15s and established definitively that the mechanism for their superconductivity was the electron-phonon interaction -- end of subject.
And what was exciting about this? What was the larger optimism in the field about this?
Well, the driving issue was, are they different? Was the mechanism of the superconductivity in these materials different, or were they really the same as the elements? Answer: they’re like the elements only better. Now, one cannot say that all materials in this class had been studied, but the implication, and I think the consensus in the field, was that all the elements down through the transition metals with d-electrons, and their binary alloys and binary compounds, were electron-phonon coupled superconductors, with no known exceptions. And then, as if on cue, complex oxides appeared on the scene. The first high-TC oxide superconductor discovered was doped BaBiO3 with a maximum TC of 30K, and then later, the cuprates were discovered. We should come back and talk about these materials. They are both correlated materials, albeit in different ways and to different degrees. In this sense, they are not like the elements. And we should not forget the materials with f-electrons, which have brought us heavy fermion materials and other novel types of superconductivity with perhaps spin-mediated interactions.
In what ways, beyond Ted, did it feel right for you to be at Stanford?
Well, my interest in the physics and applications of Josephson junctions was completely new to Stanford and of interest to both Applied Physics and Electrical Engineering. So, I had a by-courtesy appointment in EE, and I taught a course in superconductivity and its applications in the EE department for a couple of decades. And taking advantage of the materials expertise in our group, we fabricated the first integrated circuit SQUID, complete with the transformer coils, out of a high-TC A15 superconductor, which was technically a challenge. For example, you’ve got to make both junctions and crossovers. So, our success was a major achievement at the time. But right after this, the cuprates with their much higher TC’s were discovered, and essentially all the materials people in the field went in that direction. And all of a sudden, our students who were steeped in both the physics and material science of complex materials were in high demand. Many groups wanted that kind of mix of experience. It was unusual—and it still is.
Mac, what was so significant for you about the 1976 Applied Superconductivity Conference?
Well, a number of things. I know the one you’re thinking of, and I’ll come to it. We were trying to-- with my coming, we wanted to increase the visibility of the work in superconductivity at Stanford. I guess one could say that we were more ambitious. And so, Ted volunteered—and he was far senior to me—volunteered to run the Applied Superconductivity Conference. And so, I could not say, “No, I’m not going to help.” And, it was good, and I think it was a good thing to have done, if just for the branding [laugh], if you like. But the story I think you’re really referring to is that we decided: well, if we’re going to do all of this goddamned work, let’s do something really fun and interesting. And in those days, running an international conference was different. Okay? For example, there was always Russian — Soviet Union participation. But the Soviets determined who would come. As a result, sometimes you’d get interesting people. Sometimes you’d get people who were unknown. Whether they were just there to be educated or [as] part of the KGB, I don’t know. And I don’t want to be too flip about it. But you weren’t getting the ones about whom you would say, I’d love a chance to talk to A, B, or C. So, we decided to see what we could do. And I had a person in mind, Konstantin Likharev, who was-- his work was known to many of us working in Josephson junction applications and device physics. And he seemed extraordinary, and he is. But at the time, we just didn’t know. So, I nominated him as somebody that we should explicitly invite and see what the Soviet leadership would do. Well, he came. I don’t know how he would tell the story from his side [laugh], but then in the US it wasn’t so hard to do. I mean, we just invited him, and he got word back that he was coming. But I did, of course, do some checking before that. I mean, after nominating Likharev, I spoke with some folks whom I suspected might know his work. 
Specifically, I contacted Sid Shapiro, being, perhaps, the most distinguished in this area at the time for the AC Josephson effect, and he-- I said, “Do you know Likharev?” He said, “No, but I know his work.” And then I said, “I do too, and I think he’s terrific.” And he replied, “Yes.” And I said, “If we can get him to come to the conference, would you support it? Moral support?” And he said, “Absolutely.” So then, I went ahead, and it worked. It turned out [laugh] he really is an interesting guy for sure. His English was—he had an accent, but amazingly good. And he was knowledgeable about American culture. I mean, he just was full of interesting perspectives and all that kind of thing. But then, he was also clearly the sensation at the Josephson junction sessions, because he really was as good as we suspected. People didn’t know his work, and whatever else, and he’s cocky, but in an attractive way. So anyway, I got all these requests about whether he could come visit their labs. And I said, “Well, I know he’s got a short visa.” But I said, “Let me talk to him.” So, I did, and I said, “Would you be interested?” And he said, “Well, yes. But I’ve got this visa issue.” And I said, “Well, let me see what we can do.” We were in my office, and to get to the phone, I had to turn my back towards him. Then I called up the woman at the State Department, with whom I had worked out his visa, and asked if he could stay a little longer, and she said, “Yes, you can do that, but there are some conditions.” And I said, “Well, what are those?” She said, “You’ve got to give me a schedule at every place he’s going to go, and there has to be an American citizen who will take responsibility for him. And there can be no open dates.” And I said, “Yeah, I think we can do that.” I mean, I knew that I could get two weeks’ worth. I don’t know if it turned out to be that much, but it was a substantial time. So, I turned around, and I saw Kostya, who was speechless. 
And as I like to ask my friends, I mean, can you imagine Kostya speechless? [laugh] And they say, “No!” And I said, “Kostya, are you okay?” And he said, “That was amazing. It would never have happened in my country.” And I thought that was a wonderful moment, and sadly, I don’t think we could do something like that now. But it worked, and he was, I think, forever appreciative—once he knew what happened on our side. It was a wonderful thing. I haven’t ever gone back and asked him about that moment, but we’ve become pretty good friends. And he’s—he ultimately came to the US permanently, and he’s the father of the most promising superconducting digital technology. Not quantum, but traditional electronics, which is still a pretender for replacing some aspects of CMOS, which is limiting-- Moore’s law is being challenged. So, he’s deservedly very distinguished now. He’s been an interesting guy to know, that’s for sure.
Mac, what was the state of SQUID integrated circuits at this time?
At that meeting?
Yeah, the state of the research.
Oh. Well, I think at the time of that meeting, there was no large-scale integrated circuit technology, really. Well, IBM was just getting into their superconducting computer program, which involved real integrated circuit work. But it was not yet an established technology like it is now. Today there are even foundries where anyone can get their circuit designs fabricated. But it was different than the IBM capability—much smaller. And mercifully, it survived the termination of the computer project at IBM.
Mac, let’s zoom out a little on superconductivity. What was going on during this time with regard to transport via localized states, and Josephson coupling through them? And how did Leo Glazman fit into the picture?
Well, there was-- in the aftermath—I shouldn’t say “aftermath.” Toward the end of the work that we were doing tunneling into the A15 superconductors, you might say we discovered that-- we found that we could use deposited amorphous silicon barriers, just a single layer of amorphous silicon, deposited on these tricky materials, to make very good tunnel junctions. Now, amorphous silicon barriers had been used previously by the Josephson computer group at Sperry Research to make Nb SIS junctions. But their mandate was technological, and they didn’t get deeply involved in the physics of what was going on. But they did note that at high bias voltages the I-V curves of the junctions exhibited very large upward curvature. And using the standard theories of tunneling, they analyzed what the barrier height would have to be to show the observed degree of curvature. Their analysis yielded unphysically low barrier heights, well below the band gap in silicon. That just couldn’t be right, so we decided to look at the matter more deeply. It was the beginning of a very rewarding odyssey. What we found was that if you varied the thickness of the barrier, the tunneling conductance fell off exponentially with a characteristic length that was consistent with the band gap of Si, just as one would expect. But then we noted, as we made them even thicker, the characteristic decay length doubled, making it look like the barrier was only half as thick. Well, it didn’t take us too long to realize that it must be due to tunneling via localized states in the silicon. It was well known at the time that amorphous silicon is loaded with localized states. So, we were happy, but we didn’t know what was going on at high bias. Like the Sperry group, we saw very large curvature in this region. It seemed reasonable that the curvature was related to the localized states, but we didn’t have any concrete mechanism.
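In standard notation (a sketch, not the original analysis: κ is the decay constant of the electron wavefunction in the barrier and d the barrier thickness), the two regimes are

```latex
G_{\text{direct}} \propto e^{-2\kappa d}
\qquad \text{vs.} \qquad
G_{\text{1 state}} \propto e^{-\kappa d},
```

because tunneling through a single localized state near the middle of the barrier costs two hops of length d/2 each: the exponent is halved, so the apparent barrier thickness is too. More generally, a chain of n such states gives G_n ∝ exp(−2κd/(n+1)).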
And then, out of the blue, out of nowhere as far as I was concerned, Seb Doniach, who had been visiting in the Soviet Union, came back with a note and a preprint from a young Russian theorist, Leonid Glazman. And I said, “Well, this is interesting.” And I mean, our work was reasonably well known at that point, but not widely known. And yet he had picked up on it and produced a theory of transport via localized states—inelastic transport—that accounted for our data quantitatively. I mean, it was one of those times when you just say, wow! And so, we were very pleased. And you could easily understand what was going on. As you increase the bias voltage, more localized states become available for tunneling. And in turn this permits a process in which an electron tunnels elastically onto a localized state, emits a phonon, tunnels inelastically to a second localized state, and then tunnels elastically through to the counter-electrode, i.e., it is a three-step process. As the bias voltage increases further, more multi-step channels successively open up, leading to the distinctive upward curvature. Now in his letter, Glazman also asked if he could come visit Stanford when he was in the United States. We of course said yes. A few months later he came. There was, however, one important issue. In those days, any visitor from the Soviet Union expected that the host would take care of local logistics. It was a simple matter of reciprocity to them, because that's what they did if you visited the Soviet Union. So anyway, before he arrived, I told him, “We can put you up in a hotel, or you can stay in our home, if you’d like.” Well, he chose the latter. [laugh] I think he was curious about American family life. On the day of his arrival, he knocked on the door, and he—I can still visualize him now. He was tall, lanky, ruffled dark hair. Taller than I am, which always gets my attention, because I’m pretty tall, at 6’4.
And he came in, and he was both charming and interesting, and as I like to tell the story, even our dog was taken with him, because the dog slept in his suitcase in his bedroom each night that he was with us. So, he was a big hit in the family, and he is a wonderful person. Well, Leo, as we then called him, was not done with the physics of transport via localized states. The physics of these localized states involves Coulomb correlations. Specifically, the localized states in the barrier below the Fermi level are singly occupied due to the large on-site Coulomb repulsion – a so-called positive-U state. Those above the Fermi level are unoccupied. Working with one of my students, David Ephron, Leo and a collaborator were able to see consequences of these Coulomb correlations in the magnetoresistance of amorphous silicon tunnel barriers. Even more subtle is the question of Josephson pair tunneling through the localized states. Because of the large U, it is not possible to have two electrons on the state at the same time. But Leo found that you can get Josephson tunneling via these states if the energy width of the localized state is sufficiently broad that the tunneling time is short. That way a second electron can come along after the first and avoid the Coulomb repulsion. This is essentially the equivalent of the retarded electron-phonon interaction in BCS superconductors. So, if the tunneling is via localized states above the Fermi level, where the states are unoccupied, you get the standard tunneling result. By contrast, if the pair tunneling is via the occupied states below the Fermi level, you get a π-junction. What that means is that Josephson coupling occurs, but the coupling energy is negative, if you like, due to correlations with the pre-existing single electron on the site. A π-junction is equivalent to a regular Josephson junction but with a phase shift of π. I concede that all this is not trivial, but it’s what happens.
Also, there’s a corresponding phenomenon in quantum dots, and I think they-- I’m embarrassed to not know in detail, but I think they’ve seen these effects. But hold on, the riches of amorphous silicon were not exhausted. There is a whole other chapter relating to what happens when you dope amorphous silicon with Nb. Specifically, around 11% Nb, amorphous NbxSi undergoes a continuous insulator/metal transition. We were aware of this work, and it occurred to us that if the transition is continuous, one could get a metal of arbitrarily high resistivity, which if true might enable a high resistance SNS type Josephson junction. Such a junction might have technological importance, as an alternative to the tunneling Josephson junction of the existing technology. So, Adrian Barrera, a visiting graduate student from the University of Mexico, tested the idea. From a thin film fabrication point of view, it wasn’t really a challenge. Just deposit a Nb film, then the amorphous NbxSi barrier and then a Nb counter electrode. It worked, and we were able to achieve good SNS Josephson junctions with technologically usable device characteristics — high resistance, acceptable critical currents, and classic magnetic diffraction patterns. And of course, thick barriers with characteristic Josephson coupling lengths of roughly 10 nm, a factor of 10 or more larger than a typical tunnel junction. This suggests that barrier fabrication tolerances should be greatly relaxed. I was confident that the technologists would rapidly adopt these new barriers, but in fact we had to wait 20 years. Our savior was the group at NIST Boulder, where they’re famous for their voltage standard based on the AC Josephson effect. In order to get a standard that works at high voltages, they need to make very large arrays of Josephson junctions, which is technologically demanding.
And they were using the integrated circuit technology based on Nb tunnel junctions, which requires an external shunt resistor to damp the usual hysteresis of tunneling Josephson junctions. Okay? One might say that all we did was to introduce an internal shunt. But this new technology permitted NIST to fabricate working arrays with of order 240,000 junctions on a single chip, a world’s record by orders of magnitude. So, that was great. But the more I thought about NIST’s success, I began to appreciate that we had only had a good—one might say, lucky—idea based on empirical results by others. There weren’t even the beginnings of a proper theory for these devices. So, I went back and looked more carefully at what was known about the metal/insulator transition in amorphous NbxSi. Bob Dynes and others at Bell Labs did the most important work on this system, upon which one might begin to develop a device theory. This metal/insulator transition is non-trivial. Motivated by Dynes’s work, McMillan at Bell developed a scaling theory of the transition incorporating both Anderson localization and the electron-electron interactions emphasized by Mott. In short, is this an Anderson localization or a Mott insulator transition? It turns out that they both play a role, but as the transition is approached from the metallic side, a soft gap arises in the density of states. It’s trying to be an insulator due to the Coulomb interactions. And then the obvious question arises regarding what role Glazman’s multi-step tunneling processes may be playing on the insulating side of the transition. Well, if I were younger, I would jump all over this problem. Imagine, a very successful device for which there is no theory and where Anderson localization, Mott insulation, the Josephson effect and Glazman’s filaments all seem to be part of the picture. Wow!
Mac, the KT transition in superconductors had a bit of a rollercoaster experience.
How do you mean “rollercoaster”?
Because it seemed like it was not feasible, and then it was.
Oh, well. Come on. It wasn’t a rollercoaster to me. But anyway, yeah. That is a fun story, and our work on this transition constitutes what I regard personally as my most important contribution to physics. Well, the Kosterlitz-Thouless theory is profound for sure. Nobel Prizes were awarded. And the problem that they solved-- they were trying to solve the problem of phase transitions in two dimensions. And they had used superfluid helium films as an archetypical example. What they predicted was that the transition to dissipative flow in two-dimensional helium arises from the unbinding of plus and minus vortex pairs. That is to say, at low temperatures, the vortices—which are activated due to thermal fluctuations—are initially bound together. Let me be more explicit. Consider helium with no vortices. Now, turn on thermal fluctuations, and what happens is that, out of the “vacuum”, come bound vortex-antivortex pairs. Okay? So, there’s this sea of bound vortices that are thermally fluctuating. It is not a totally inappropriate analogy to say that that’s like quantum fluctuations, electron hole pairs excited out of the vacuum state. I mean, there is a deep connection. One’s thermal; the other quantum. But that’s a detail. And as you heat the system up, the distance between the bound vortex and anti-vortex pairs increases on average. And finally, at the KT transition temperature they begin to unbind, beginning at infinite separation and progressively shorter and shorter separations as the temperature is further increased. The unbound vortices now can produce dissipation analogous to the flux flow resistance of moving vortices in a superconductor. So, the bottom line is that dissipation comes from the unbinding of vortex-antivortex pairs. But why then does the analogous process not occur in superconductors? Well, all this magic arises in the case of helium because the circulating currents around the vortex drop off as 1/r. Okay? 
In a superconductor, they also drop off as 1/r near the core but then cross over to an exponential decrease at large r. The reason is that in a superconductor there’s a magnetic penetration depth, which reflects the fact that superconductors are a charged superfluid. So, in their paper, Kosterlitz and Thouless correctly say that their transition does not apply in superconductors. And this is true. What they said is absolutely true. And, yet, there was something about it that bothered me. Fortuitously, just after the KT theory came out, there was a Gordon conference with the goal of getting the superconducting and superfluid helium communities together to exchange ideas. A big topic at the meeting was the confirmation of a critical prediction of the KT theory. The predicted discontinuity in the superfluid density at the KT transition had been observed. I was anxious to hear about all this, as it related to my earlier work at Harvard on thermally activated phase slips, and all that kind of thing. So, I listened to the talks with great interest. Every speaker carefully—as in a litany, again and again—intoned that while this is true in superfluid helium, it doesn’t apply to superconductors. But I said to myself, “Wait a minute, that might not be so true.” I had been interested in vortices in thin film superconductors and their possible connection to the Josephson effect in superconducting microbridges. And I knew that thin films don’t shield vortices as well as bulk superconductors. The film is thin and the shielding currents are weaker. And in fact, as had been shown beautifully by Pearl, the magnetic penetration depth in a thin film – the so-called perpendicular magnetic penetration depth λperp = λ²/d – is bigger. A lot bigger when the film thickness d is much smaller than λ. Okay? And I knew that.
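The energy-entropy argument behind the unbinding in the helium case is worth sketching (a standard estimate, with ρs the areal superfluid mass density, m the atomic mass, ξ the vortex core size, and R the system size): both the energy of a free vortex and its positional entropy grow logarithmically with R, so

```latex
F = E - TS
  = \left[ \pi \rho_s \left( \frac{\hbar}{m} \right)^{2} - 2 k_B T \right] \ln\!\frac{R}{\xi},
\qquad
k_B T_{KT} = \frac{\pi}{2}\, \rho_s(T_{KT}) \left( \frac{\hbar}{m} \right)^{2}.
```

Above T_KT the entropy term dominates, free vortices proliferate, and dissipation appears. The exponential screening in a superconductor cuts off the logarithm and spoils the argument, which is exactly the loophole that the very large λperp of a thin film reopens.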
And I said to myself, “Well, I ought to think through the implications of this.” So, I was-- if you know the Gordon conference routine, you have sessions in the morning and the evening with afternoons off. So, that afternoon while I was down putting my feet in a lake, this question was still on my mind. And, later as I was walking back up the hill to go to dinner, I estimated these lengths in my head, and λperp was of order a hundred microns even without the divergence near Tc. And as you go up in temperature, the penetration depth gets even bigger and bigger. And then I knew what had been bothering me. I said to myself, “Okay, the theorists are right, but in the end, it doesn’t matter.” Okay? It’s just like having a finite size sample. I mean, okay, there will not be a phase transition in the purest sense of the word, but the phenomenon of thermally activated vortex-antivortex pairs has to happen. I admit that I was pretty pleased with myself. So, when I got back to Stanford, I mentioned this to Hans Mooij—who was visiting on sabbatical—and a student of mine, Terry Orlando. They both got all excited, and so we sat down and did a more thorough analysis that allowed us to express both λperp and TKT/Tc0 solely in terms of the mean-field Tc0 and the sheet resistance of the film. Nothing more. We also went to Seb Doniach and showed him what we had found. And then within a couple of days, he and Bernardo Huberman took the version of the KT theory that applies to melting of two-dimensional lattices and applied it to the vortex lattice of a superconductor in two dimensions. It was a big effect. And, so then we submitted both papers to PRL, and very quickly they were published back-to-back. So, it was pretty amazing. And I think now it is fair to say that the KT physics is part of the canon of superconductivity. It’s not just-- it’s something you’d better know. But there’s also a more humanistic side to the story.
It was 10 or 15 years later — and I can’t remember why we were discussing it, but Dan Prober, my student from the fluctuation diamagnetism days, who was a professor at Yale, said to me, “Did you know who the referee of your paper on the KT transition was? Remember the paper?” And I said, “Yeah, I remember the paper.” He said, “Well, it was Thouless.” And I said, “Oh, God.” [laugh] Thouless, who was at Yale at the time, got it to referee, and apparently he had come to Dan and asked, “Do you know this guy Beasley?” And Dan said, “Yes.” And Thouless continued, “Is he any good?” [laugh] I don’t know what Dan said, but anyway—so, I’ve always wished that I had kept the referee report. But I remember it. Thouless accepted our basic theoretical point, but he quibbled a little bit with some of the possible experimental evidence that we pointed out. And to give him proper credit, he was always attentive to our work in his discussions of superconductivity as an example of the Kosterlitz-Thouless transition. I mean, in his London Prize lecture, he attributed that to us, and in his book on topological defects in quantum systems, he reprinted our paper, and it’s also mentioned in the Nobel summary of his work. Because he was having some dementia issues at the time, I don’t know whether he wrote that or to what degree he may have influenced it directly. Either way it was very nice.
Mac, what were some of the research interests around thermal fluctuation limits?
You mean, the Tc?
We talked a little about this before, but there is more to the story. Indeed, there are probably many ways to tell this story. I mean, people-- there are different perspectives and different reactions. But I’ll give you mine. I’m not saying it’s unique. But for me, it started with the Kosterlitz-Thouless theory itself, because it shows that the resistive transition and the temperature at which the pairs form need not be the same. Okay? You can form pairs, but superconductivity requires the pairs to form a macroscopic quantum state. The pairs all have the same quantum phase. Okay, so in BCS, the pairs form, and they lock their phases at the same temperature. That’s because it’s a mean field theory, and the coherence length is so big. And everybody knows if you have big coherence lengths, you don’t have critical fluctuations. Okay? But again, I didn’t buy that because we had already been through that point of view with the fluctuation diamagnetism. Even classical fluctuations arise and can have major effects. A good example of this point is a 2D magnet. When you heat it up, you lose the magnetism, and everybody would tell you that’s because the ordering of the spins is destroyed by thermal fluctuations. But you don’t destroy the spins themselves. Thermal fluctuations break up the phase coherence. Okay? And in liquid helium, the temperature at which you form the bosons is some very large temperature at which the helium atom comes apart. Now in the case of superconductivity, the pairs form at the mean-field BCS transition temperature. Not as high as in helium but no matter. The principle is the same. If the pairs are not phase coherent, the phase of each individual pair fluctuates wildly because of the number-phase uncertainty principle, which comes out of quantum mechanics, thank you very much. It means that the phase of any individual pair can point in any direction you want. There’s no definite phase of a single pair, if you like.
And superfluidity only emerges when the phases of the pairs lock, as in the Josephson effect. Now to return to the question of limits on Tc, we have prima facie evidence that phase fluctuations destroy superconductivity, or can destroy superconductivity, not the pairs coming apart. So, if your whole methodology of finding new superconductors, or high-Tc, is to make the interactions bigger, that may not be enough. If the properties of the materials are not favorable for reducing thermal fluctuations, you’re not going to get a high-temperature superconductor. You’ll get some kind of correlation, which you might call a pair, at high temperatures. But they won’t be phase-coherent. Now to be fair, I am not the only one to adopt this point of view. I remember a paper by Mike Tinkham, who did a kind of-- his typical kind of physically motivated examination of the issue of the resistive transition in high-Tc cuprate superconductors, and concluded that the temperature at which the resistance goes to zero must be lower than the nominal transition temperature. And Steve Kivelson and [??] did some early work on this question. They took a heuristic approach and argued that when thermal fluctuations of the phase became of order 2π, the phase order would come apart. And based on this criterion, you can figure out what physical parameters affect those fluctuations. It turns out there are really only two important ones. The first is dimensionality. You’re always better off in a higher dimension. The second is that you want a high pair density, which makes the system more rigid against phase fluctuations. Using parameters appropriate to the cuprate superconductors, they concluded that the Tc’s for some of the higher-Tc cuprates are limited by phase fluctuations. Phil Anderson, in his very touching last paper, on the arXiv, came to the same conclusion. And empirically the most robust – high critical currents – of the cuprates is 123 YBCO.
This is no accident, as it is the most isotropic of the high-Tc cuprates. Contrast this with the bismuth compounds, which are much more two-dimensional. Now, if you want to, say, take these ideas and ask yourself: If I’m worried about these phase fluctuations, what kind of material would I need to get a high Tc? The answer is clear: You want high enough pairing interaction strength in a three-dimensional material with a high electron density. Okay? And that’s what the hydrides may represent. Now there is another issue that bears on the maximum Tc under earthly conditions. If you look at BCS theory, you can work out the size of the Cooper pair. It’s called the BCS coherence length, ξ0 ~ ħvF/kBTc0. Now, to make things simple, let’s ignore the dependence on the Fermi velocity. Then the coherence length should drop off like 1/Tc. I mean, it makes sense, right? The bigger the energy scale of the pairs the smaller they will be in real space. In any event, empirically this is what is seen in real materials. Now let’s ask where the observed coherence lengths reach the size of a unit cell—a few angstroms or whatever—to define a maximum Tc0 feasible with terrestrial materials. The resultant extrapolation yields a maximum Tc0 a little above room temperature. Okay, yes, this is a crude estimate. Maybe it’s above room temperature, but it’s not a thousand degrees.
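Filling in the standard BCS factors behind that estimate (Δ(0) is the zero-temperature gap):

```latex
\xi_0 = \frac{\hbar v_F}{\pi \Delta(0)}, \qquad
\Delta(0) \approx 1.76\, k_B T_{c0}
\quad\Longrightarrow\quad
\xi_0 \approx 0.18\, \frac{\hbar v_F}{k_B T_{c0}},
```

so at fixed Fermi velocity ξ0 falls off like 1/Tc0, and requiring ξ0 to remain at least a unit cell in size caps the attainable Tc0.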
[laugh] Mac, where are coupled oscillators in all of this?
Oh, [laugh] that’s a-- I’m going to get you for that question. Historically, it’s just my-- I had gotten to know Albert Libchaber, because he was running this conference at Perros-Guirec that we already talked about. And toward the end of that conference, he invited me to spend my next sabbatical at Ecole Normale in Paris. The invitation was flattering, as we had only just met. So, when my first sabbatical at Stanford came up, I decided to take him up on his offer. It was a propitious time to join his group for a few months. He had just recently observed the period doubling route to chaos in a thermal convection cell of liquid mercury that obeyed Feigenbaum’s famous universal scaling relation. It was a stunning result for which he ultimately won the Wolf Prize. This result along with other developments initiated a renaissance in nonlinear dynamics. So, I said to myself: well, Paris is nice, and clearly a sabbatical at Ecole Normale was a great opportunity to steep myself in these exciting new developments. So, off we went to Paris for six months. For this sabbatical we traveled by air, as no matter how good the trains are in France, they don’t come to the US. Once we settled in, the issue was what we should do in the lab. We only had six months. The one nonlinear dynamical system I knew a lot about was the Josephson junction. And as is well known, its equation of motion is isomorphic to that of a pendulum, the nonlinear dynamical aspects of which have antecedents that go back to the 17th century. Moreover, Bernardo Huberman, who was also in Paris at the time, had already made close contact with the eclectic group at the University of California, Santa Cruz, who were into chaos and nonlinear dynamics. Bernardo and Crutchfield at Santa Cruz had shown that Josephson junctions could be chaotic. But they looked at a particular case related to parametric amplification. So, Albert, Bernardo and I, along with a student of Albert’s, Dominique D’Humieres, began to brainstorm.
What would be interesting to do? And I suggested, “Well, look. I know all about Josephson junctions. Bernardo has just shown they’re chaotic. Let’s look at them more generally from a fundamental perspective, not a specific application and see what comes out.” Everybody liked that idea.
So, Mac, let’s go back to Paris. Where were we?
Well, I was saying that Albert, Bernardo, Albert’s student Dominique D’Humieres and I, all agreed that doing something motivated by Josephson junctions might be interesting. But doing experiments on real Josephson junctions, making them and the cryogenics and whatnot, I mean, that’s just too much. The ideas are relevant, but we needed a simpler system to actually do experiments on. Now, you can say: well, you’re saying it’s isomorphic to a pendulum, so build a pendulum. But that’s [laugh] not so easy, either. But in this modern day and age, both Dominique and I were aware of the fact that if you take what electrical engineers call a phase-locked loop, it is, in fact, also isomorphic to a pendulum. It’s used in locking the phases of oscillators, so it sounds like the right kind of thing. Anyway, it is mathematically identical. So, what we did was to build a phase-locked loop and instrument it so that we could display its dynamics on an oscilloscope. Now, young people today often don’t know what an oscilloscope is, but I can’t help them. [laugh] But anyway, think of a TV screen. Use of an oscilloscope turned out to be very important, because we could see in real time what this system was doing. So, we had-- I’ll use the pendulum language. We had this pendulum, and we were driving it with a periodic torque and watching it respond. And since you could see it in real time, it was easy to map out the dynamical state of the pendulum in the space defined by the drive frequency and the drive amplitude. At low drives, we found periodic motion around the vertical of the pendulum along with period doubling and full-fledged chaos. At higher drives we found spinning states and various forms of chaos along with period doubling.
But the most interesting thing we found, which greatly surprised us, was that if you drive the pendulum so it’s still pretty much in its potential well around vertical—think of a pendulum, and you start swinging it just a little bit, and then more and more, and more and more, but it is not yet spinning over the top—we noticed that as we increased the drive we saw period doubling, but it was always preceded by dynamical symmetry breaking in which the pendulum would swing more to one side than the other. So, imagine a pendulum, and you’re driving it, and it’s swinging symmetrically. And then all of a sudden, it starts to go to one side more than the other. Well, we fooled around for a long time trying to find out what was wrong with our circuit and finally had to say, no, damnit, that’s what a pendulum does. And in fact, that was right. But you know, we’re experimentalists, and we didn’t want to try to go sell something that was just a quirk of the circuit. So, we wrote all this up and sent it to Phys Rev A, which was where you sent statistical mechanics kinds of things in those days. It’s different now. But anyway — and two things: one was that within a few months after that paper came out, we got a communication from Kurt Wiesenfeld, a student at Berkeley at the time, who had found an explanation for the dynamical symmetry breaking we saw. Essentially, he proved that if you start with a symmetric potential, period doubling must be preceded by symmetry breaking under essentially all conditions. Well, the pendulum is clearly symmetric in the sense that its potential V(θ) = V0(1 - cosθ) is 2π-periodic, which implies that V(θ + 2π) = V(θ). But the equation of motion also has a lower symmetry: the restoring torque -dV/dθ = -V0 sinθ reverses sign under θ → θ + π. And as shown by Wiesenfeld and Swift, it is this lower symmetry that must be broken before period doubling can occur.
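That lower symmetry can be checked numerically. Below is a minimal sketch (illustrative parameter values, not those of the original experiment) of a driven, damped pendulum integrated with fixed-step RK4. It verifies that if θ(t) is a solution, then so is −θ(t + T/2), where T is the drive period; it is this pairing of mirror-image motions that breaks when the pendulum starts favoring one side.

```python
import math

# Driven, damped pendulum in dimensionless form:
#   theta'' = -GAMMA*theta' - sin(theta) + A*sin(W*t)
# GAMMA, A, W are illustrative values, not those from the original experiment.
GAMMA, A, W = 0.5, 0.9, 0.667
T_DRIVE = 2.0 * math.pi / W

def accel(t, th, v):
    return -GAMMA * v - math.sin(th) + A * math.sin(W * t)

def rk4_step(t, th, v, h):
    # One classical Runge-Kutta step for the state (theta, theta').
    k1t, k1v = v, accel(t, th, v)
    k2t, k2v = v + 0.5 * h * k1v, accel(t + 0.5 * h, th + 0.5 * h * k1t, v + 0.5 * h * k1v)
    k3t, k3v = v + 0.5 * h * k2v, accel(t + 0.5 * h, th + 0.5 * h * k2t, v + 0.5 * h * k2v)
    k4t, k4v = v + h * k3v, accel(t + h, th + h * k3t, v + h * k3v)
    return (th + h * (k1t + 2 * k2t + 2 * k3t + k4t) / 6.0,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def integrate(th, v, n_steps, h):
    # Sampled trajectory [(theta, theta'), ...] starting at t = 0.
    traj, t = [(th, v)], 0.0
    for _ in range(n_steps):
        th, v = rk4_step(t, th, v, h)
        t += h
        traj.append((th, v))
    return traj

N = 2000                                     # integration steps per drive period
h = T_DRIVE / N
traj1 = integrate(0.3, 0.0, 2 * N, h)        # theta(t) on [0, 2T]

# Symmetry: sin(-theta) = -sin(theta) and the drive reverses sign after T/2,
# so phi(t) = -theta(t + T/2) obeys the same equation of motion.
th_half, v_half = traj1[N // 2]              # state at t = T/2
traj2 = integrate(-th_half, -v_half, N, h)   # phi(t) on [0, T]

err = max(abs(traj2[k][0] + traj1[k + N // 2][0]) for k in range(N + 1))
print("max deviation:", err)                 # tiny: the mirrored orbit is also a solution
```

Once the symmetry-broken attractor appears, θ(t) and −θ(t + T/2) are no longer the same orbit but two distinct mirror-image attractors, and period doubling can then proceed on either one.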
Well, that’s really interesting, because it provides a constraint on what is necessary to have the universal period doubling sequence found by Feigenbaum. Our paper was highly cited for first demonstrating symmetry breaking in the forced pendulum. Now, it turns out that others had seen this dynamical symmetry breaking as well. But we published first. And that always gives you a little bit of an advantage. Also, they published in journals that were specific to their subfield of physics, not general statistical mechanics, which had the effect of limiting the impact of their work. And so, that’s why our work had so much more impact – a lesson in being first and choosing your audience with care. After I came back to Stanford, I still had the bug for nonlinear dynamics. Fortunately, I had a student, Peter Hadley, who was also interested in working in this area. And so, we thought, well, there may be another problem motivated by Josephson junctions that we could examine, and we came up with the mutual locking between coupled oscillators. Consider the AC Josephson voltage standard that we were talking about before. It uses an extremely large number of phase-locked Josephson junctions. But in this application the locking is induced by an external clock. The question is: can they do that by themselves, all on their own? Can they lock their phases? And it was known they could, under special conditions. But apparently no one had looked at the more general case. We decided to take on the challenge. So how might such a thing be done? It’s easier to describe with a picture on a piece of paper, but let me try to explain. In our motivating system, the oscillator is a Josephson junction current biased above its critical current. In the pendulum language, it’s spinning. Now consider a group of Josephson junctions. They’re all “spinning”. How can we lock them together?
Without initially thinking about it in those terms, we thought the simplest circuit to study would be a series array—that is, a line of junctions all in a row, each connected to its upper and lower neighbor, and the whole line then shunted by an RLC load. You know, passive devices in the load. Well, that’s a loop. But then you think about it. It’s really very interesting, because what it says is, each junction affects every other junction equally, because the ac current each junction generates has to go up through all the other junctions and back down around through the load to complete the circuit. So, we have created an array of identical junctions with all-to-all coupling. In fancy language, what we have done is to use the circuit topology to achieve all-to-all coupling. Conventionally, one does this by connecting all the junctions together using multiple wires from each junction to all the others. Note also that this system has a permutation symmetry under exchange of any two junctions. The end result is that the response is governed by the specifics of the load. Right? What’s left? So, we studied this system, this time with computer simulations rather than analog electronics. And we found, roughly speaking, that the system would lock to an in-phase state, in which all junctions had the same phase, when the load was inductive, and when it was capacitive it locked to an antiphase state, in which the relative phases of the individual junctions were different. In some cases, they were equally spaced between 0 and 2π. In others, they clumped up into groups. For a purely resistive load, they would not lock their phases. For a resonant load, the response followed the overall impedance as a function of frequency, with the system jumping to one mode or the other depending on whether the drive was above or below the resonance. Well, the anti-phase, or splay, states were not known previously.
And so, that behavior drew considerable theoretical interest in the applied mathematics community, because—and I’ll come back to this later if it seems appropriate—but for coupled oscillators, the theoretical world is not so much physicists as applied mathematicians. So, we were all of a sudden working in a new world. Anyway, the applied mathematicians were surprised that a system with all-to-all coupling was so rich in its response. Apparently, their instinct was that the response would be a mean field, with all the oscillators locked in phase to the mean field. In order to stay in the game, we were fortunate that Kurt Wiesenfeld was just finishing his thesis at Berkeley. In the fall he had a postdoc arranged with Per Bak at Brookhaven. And so, I invited Kurt to come down for the summer. And he came, during which time he worked with Peter to study theoretically the stability of our in-phase state for various loads using [??] stability theory. Their work gave insight into the nature of the splay states but did not provide any specific results related to their structure after they formed due to an instability of the in-phase state. Kurt did another important thing. He introduced us to Steve Strogatz. These were two young Turks in nonlinear systems and the coupled oscillator world who have both gone on to distinction. And they have kept our results in their domain of interest ever since our work together that summer. For example, together they showed that the forced pendulum under the kinds of conditions that we were studying could be mapped onto what’s called the Kuramoto model of coupled oscillators. Okay? Now, the Kuramoto model is considered the “Ising model” of coupled oscillators, in the sense that it is the simplest model that captures the essence of the phenomenon. It’s a very simple equation. It was developed about the time of our simulations but was not out in the public literature until after our work was published. 
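[Editor's note: for reference, the “very simple equation” of the Kuramoto model is dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i), with all-to-all coupling of strength K. A minimal simulation — with illustrative parameters, not tied to the actual junction arrays discussed here — shows the onset of mutual phase locking:]

```python
import numpy as np

def kuramoto_order(omega, K, dt=0.02, steps=5000, seed=0):
    """Kuramoto model with all-to-all coupling:
        dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    integrated by forward Euler via the mean field z = r * exp(i*psi).
    Returns the final order parameter r (r ~ 0 incoherent, r ~ 1 locked).
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omega))
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))        # complex mean field
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

rng = np.random.default_rng(1)
omega = rng.normal(0.0, 0.1, 50)            # spread of natural frequencies
r_uncoupled = kuramoto_order(omega, K=0.0)  # no coupling: stays incoherent
r_coupled = kuramoto_order(omega, K=1.0)    # strong coupling: phase-locks
```

With no coupling the phases drift apart and the order parameter stays near 1/√N; with coupling well above the critical value, essentially all oscillators lock and r approaches 1.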
Once Steve and Kurt had shown that our all-to-all coupled Josephson junction problem could be mapped onto the Kuramoto model, interest in our problem ramped up again. I called Steve recently and asked him whether there was anything further to the story than that, and he said, “Oh, yes. A student and I have shown that this is all equivalent to a motion on a Möbius torus, blah, blah, blah, blah”. And I said, “Well, thank you, Steve, very much.” [laugh] So, I went and looked at their paper, and it’s interesting, but it’s — I haven’t mastered it. But my point is not a confessional; rather, it is how this simple system turned out to be so rich in what it could do, and unknown to the applied mathematics world. Now this is kind of stunning, that-- so, I’m really quite proud of this work. It was a lot of fun. It really tweaked the theoretical world, and we started from zero. Putting this into perspective, look at it this way: in tenure considerations, there’s a bit of black humor that says, well, if you want to get an assistant professor position, you must have done something that really impacted the field; and if you want to get tenure, you have to have done it again. Okay? That is to say, sometimes you’re lucky once, but it’s unlikely that you have been lucky twice. And we were lucky twice. Now, I’m not saying I deserve double tenure, but it raises in my mind a very deep question as to why that happened. Is it because we were lucky? If so, I can live with that. Or is there something about the way experimental physicists think about things? For example, many of us like to ask what might be new or interesting to do that is outside the mainstream of the field, especially when there is no pre-existing theoretical understanding. And certainly, in condensed matter physics, it’s common that an experiment defines a theoretical problem that the theorists might or might not have considered previously, because that’s where you find the interesting things—what nature actually does. 
Now, whether that part of condensed matter physics culture is transferrable to non-linear dynamics, where the theoretical side is really owned by the applied mathematicians, which means a whole different set of tools are involved, I don’t know.
And Mac, you’re saying that this is unique to condensed matter?
Well, not unique, but it is—it grew up—it is fundamental to the dynamic of the field that theorists are instructed by experiment, not just the beauty of the equations or something like that. In other words, the experiment really is an important generator of interesting problems for the field. Okay, not the only way, but a very important one. And model systems are hard to come by. I mean, you don’t have the analog of hydrogen atoms as in atomic physics. Atomic physicists are proud of their very clean systems, and that’s understandable. But there’s a certain richness that happens when you have more complex systems, which I think is more the territory of condensed matter physics. I don’t know whether you like that or not. And I’m not saying I think it’s true in the case of these coupled oscillator problems, but it’s caused me to consider it, because we were successful twice, and that makes me wonder.
Mac, let’s move on to the origins of the KGB group. And my first question there is: we already have the G, and we already have the B, so when does Aharon Kapitulnik enter the picture?
Well, after I had been at Stanford for about ten years, the Ted-Mac amateur hour was well established, and we were doing well scientifically. But there were no younger people in the department, and condensed matter physics was very small. There already were Walt Harrison and Seb Doniach on the theoretical side, but effectively only Ted and I on the experimental side. In any event, the department ran an open search seeking the best young candidate we could find, irrespective of area, but it was pretty clear that the greatest need was in condensed matter physics. As the search progressed, I had my eye on Aharon among the group of strong finalists. He was a student of Guy Deutscher at Tel Aviv University, whom I knew quite well. And as a matter of course, I kept an eye on what was going on in his group. As a result, I noted some new work that was different, particularly interesting, and path-breaking. I thought to myself, there has to be an interesting story here. All of a sudden Guy was doing some new things that reflected his style but were very different from what he had been doing. And when I looked more carefully, I discovered Aharon. Even then he had written all kinds of interesting papers with good experimental work on novel materials and exhibited a theoretical side that was at a level comparable to many young theorists at the time. It really was quite impressive. So, we offered him a position on his merits. At the time of our offer, he was a postdoc with Alan Heeger at Santa Barbara, working in polymer physics, so again, forefront statistical physics. But for reasons I’ve sort of forgotten, he decided it would be better for him to do something else. We were disappointed, and after we got ourselves reorganized, the physics department had a senior candidate that they were pushing. I was more eager to have a young candidate, and to ensure the needs of Applied Physics were being met. 
So, I wasn’t against this other candidate, but he wasn’t my first choice. In any event, that offer failed too, and so we were really out of options. Luckily I said, “Well, let me talk to Aharon again.” And I did, and this time round, he was interested. Six months later, he arrived. And we now know that he is very, very talented, very original. He recently won the Buckley Prize of the APS. We judged well.
And Mac, what were you looking-- generically, before you settled on Aharon, what were you looking for, in terms of what needed to be added to the Geballe-Beasley collaboration? What was missing?
That’s not exactly what we were looking at. As I said, we were looking at what would strengthen the department as a whole. I mean, it could have been somebody not in superconductivity. We weren’t trying to specifically do more for our group, which was doing pretty well [laugh] on its own. But I think Aharon saw it as an environment in which he could work well, because he had an interest in superconductivity, and he was pleased to have a colleague who had pointed out that [laugh] Kosterlitz-Thouless theory applied to superconductors. It appealed to his theoretical tastes. And like me when I came, he understood the importance of new materials and model systems. And he appreciated the thriving Jewish community in the area. In any event, we just sensed that he was going to do something important in physics. There was no idea, no plan, that he would join our group.
Even though his research was really relevant to what you and Ted were already doing?
Well, I would say it was — yes, nicely juxtaposed to what we were doing. More theory motivated, and focused on disordered systems, percolation, localization eventually, and things like that, as opposed to superconductivity as a goal in itself. But when he came, I think he then saw that for some of the things that he was eager to do, the materials that we were making were truly excellent model systems.
Mac, what was the initial project or collaboration that went from Aharon just being good to join the faculty, to what would become KGB?
Interesting question. I think the collaboration that was most important was the use of amorphous Mo-Ge and Nb-Si alloys as model systems. Amazingly, all this work was going on while we all were also working on the high-Tc cuprates. I’d have to go back and check on the dates, but I think that even before Aharon arrived, we had done some experiments using amorphous Mo-Ge/amorphous Ge multilayers as model systems. But that work was limited in scope and aimed at confirming the theoretical work I had done at Harvard with Luther and Klemm on the parallel critical field of quasi-2D layered superconductors. This was all very nice, but interest in anisotropic and quasi-2D layered superconductors took off with the discovery of the high-Tc cuprate superconductors. And so there was high interest in their fundamentals, and here we were blessed with an ideal model system to study the fundamentals of such superconductors for their own sake and to compare with the cuprates. We were perfectly situated with some unique advantages to be in the center of the action. Also, from a more pragmatic point of view, I think Aharon recognized that the resources, both financial and human, were around that he didn’t have to build all that up. He just had to pay his dues, which he certainly did, and [laugh] get the rewards. Perhaps even more important, the traditions that ultimately became the values of KGB began at this time: Everybody has their own students. Everybody raises their own money. It’s not that there’s somebody controlling the agenda, but by putting those resources together, we can get-- do more. And it was fun. I mean, when you have a group talk where an issue comes up, and the students can hear Ted’s take on it, which is materials-oriented, and mine, which is more phenomenological and sometimes applications oriented, and then Aharon’s, in which you can really hear the snap of a theoretical whip. It was just exhilarating for them. You know? 
In physics, you don’t have to be good at everything. You just have to have good relationships and a common passion for physics. It may sound corny, but that’s the way it was.
What was some of the initial work on alloy-based model systems?
Well, there are two parts to this story. The first is how we got started with these materials systems in the first place, and the second is how they played a role in the emergence of KGB. In the 1980s, understanding homogeneously disordered materials was beginning to be recognized as an important and challenging issue in condensed matter physics. And it remains a challenge to this day. Of particular interest were amorphous materials, because they remain structurally random down to microscopic or atomistic levels, as opposed to inhomogeneously disordered materials, where one has grain boundaries or phase separation or whatever on more macroscopic length scales. In the case of superconductors, homogeneous random disorder affects both the formation of Cooper pairs and the macroscopic pair wave function, albeit at different length scales: the BCS coherence length for pairing and the GL coherence length for the macroscopic pair wave function. The questions raised by such materials stimulated Ted and Artie Bienenstock to study amorphous alloys, specifically Nb-Si and Mo-Ge alloys, using the thin film synthesis facilities of the NSF-funded Center for Materials Research here at Stanford that had been built up by Troy Barbee. Their goal was to see what their properties were in combination with quantitative information about the structure, in the form of radial distribution functions determined by X-ray scattering at SSRL. And of course, Ted was especially interested in their superconducting properties. While not directly involved, I was just watching with interest, to see what new opportunities these materials might make possible going forward. Ted and Artie were, appropriately, using traditional experiments to establish the basic materials parameters of the materials they were making. 
And when I saw how easy—after they had done all the hard work—it was to make these materials, make multilayers of them, with the amorphous Si or Ge being the insulating layer, I was hooked. It was clear that these materials offered broad potential as model systems for a range of superconducting physical phenomena and even some electronic applications. Now, back to Aharon and the birth of KGB. Aharon’s interests are more on the physics side of things: Anderson localization, correlation gaps, Kosterlitz-Thouless transitions, etc., and of course disordered superconductors in general. But rather than looking in more detail at the critical fields of layered superconductors, we focused on the effects of disorder on Tc for its own sake and on transport measurements as a means to study vortex dynamics in very homogeneously disordered 2D superconductors. It turned out to be a very productive strategy. Let me illustrate with a few of our results. About the time Aharon arrived at Stanford, a student, John Graybeal, and I found that disorder dramatically reduced the Tc of thin films of the amorphous Mo-Ge and Nb-Si alloys. Here disorder is customarily parameterized by the sheet resistance Rsq. At low Rsq, the observed Tc’s were dropping a factor of ten faster than would occur due to vortex/anti-vortex unbinding as predicted by the Kosterlitz-Thouless theory. This was a shocking result, as it violated the so-called Anderson Theorem, which says that in BCS theory Tc is independent of disorder. On the other hand, our experiments agreed with the predictions of Fukuyama and coworkers based on BCS theory extended to include the effects of the Coulomb interaction in the regime of weak localization. Simply put, the Coulomb parameter μ* increased with Rsq. While the Fukuyama prediction worked well at low Rsq, it vastly overestimated the effect at high disorder, where Tc gradually approaches zero. 
It took over a decade for theorists to account for the behavior at high Rsq, where there is ultimately a quantum phase transition. On the other hand, there is still not uniform agreement that all issues are closed. For example, there is a school of thought that the superconductivity becomes inhomogeneous as Tc decreases. The first three-way collaboration was between Aharon and me, and a graduate student, Jeff Orbach. Here we measured the heat capacity of amorphous Mo-Ge/Ge multilayers, which is technically challenging due to the small mass of thin film samples. The results showed, in agreement with Kosterlitz-Thouless theory, that the vast majority of the entropy removal comes at the mean-field transition temperature, where the amplitude of the macroscopic wave function forms, and only a very small remainder comes out at the actual superconducting transition temperature TKT, where the macroscopic quantum phases begin to order. Indeed, the resolution of our experiment was unable to detect any change in the heat capacity at TKT. In our next collaboration, this time with a student, Whitney White, we measured the zero-bias resistance as a function of temperature at several perpendicular applied fields. The data clearly demonstrated that as the temperature was reduced below Tc, the zero-bias resistance was the same as that measured in a single layer film, but crossed over to a faster decrease at lower temperatures, which we interpreted as the onset of Josephson coupling between the layers. Magnetic coupling was estimated to be too weak to provide direct coupling between the vortices, because of the very large transverse penetration depths in such thin films. All the results were found to be in accord with theory. And similar behavior in the cuprate superconductors was noted. 
In the next paper, with a student, Ali Yazdani, we looked for the Kosterlitz-Thouless 2D melting transition in the superconducting vortex lattice in an applied field. In this transition thermally activated dislocation/anti-dislocation pairs cause the lattice to melt. This transition could be seen in mutual inductance measurements of the complex conductivity of amorphous Mo-Ge thin films. But there was an interesting twist to this story. By measuring the complex conductivity as a function of frequency, it is possible to study the melting at various length scales. The results showed that the KT theory broke down at very large distances, of order the Larkin length Rc, at which the random disorder in the films leads to loss of long-range order in the vortex lattice. This result was very important, as it resolved the question as to whether the KT transition could occur even when there was some disorder in the vortex lattice. In another paper with Whitney White, we studied the dynamic resistance dV/dI as a function of applied current of thin films of amorphous Mo-Ge at various perpendicular applied magnetic fields. Somewhat surprisingly, we found that the results depended on the width of the sample. The effect can be understood as the result of edge pinning dominating in the narrower films, and bulk vortex core pinning in the wider ones. Edge pinning arises due to the attraction of image vortices necessary to satisfy the boundary conditions at an edge. Importantly, the field dependence of the activation energies of the two sources of pinning was found to be very different: H^(-2/3) for edge pinning and ln(Ho/H) for bulk core pinning. These results explained the observation of these two regimes—but with no interpretation—that had been reported previously. Our cornucopia of results on the vortex dynamics in 2D superconductors continued in a study carried out by another student, Monica Hellerqvist. 
Here we studied again the dynamic resistance as a function of current, but this time at various temperatures. The results clearly showed that the character of the motion of the vortices as a function of applied current crossed over from plastic flow to that expected for a rigid motion of an ordered vortex lattice. This crossover had been anticipated theoretically, and our work provided the first clear demonstration. Finally, let me mention the work in this series that has perhaps had the most profound implications. Here the principal student involved was David Ephron. In this work we again measured the temperature dependence of the zero-bias resistance of amorphous Mo-Ge thin films at various applied fields, but this time with very high sheet resistances, of order 1000 ohms, and to very low temperatures, of order 100 mK. An Arrhenius plot of the zero-bias resistance showed that at high temperatures it dropped with inverse temperature to reveal an activation energy proportional to ln(Ho/H), in agreement with all previous work. However, at the lowest temperatures, the drop plateaued to a temperature-independent resistance suggesting a metallic state existed as T → 0, which in turn suggests quantum dissipation in the vortex state of a highly disordered 2D superconductor. This stunning result has naturally drawn considerable interest, both experimentally and theoretically. Presently, there is a controversy over whether the result is valid or just a consequence of the temperature of the film saturating as the bath temperature is reduced, due to heating. In our original paper we gave arguments why this did not seem to be the case, but subsequent work on other materials seems to show a heating effect. As always, time will tell. But nobody denies the importance of the result if it can be taken at face value. Thanks for giving me the opportunity to tell this story in full. It is so characteristic of how KGB operates – and indeed emerged. 
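[Editor's note: the Arrhenius analysis mentioned here — plotting ln R against 1/T, so that thermally activated behavior R = R0 exp(−U/T) appears as a straight line of slope −U — can be sketched numerically. The numbers below are purely synthetic illustrations, not the actual Mo-Ge data.]

```python
import numpy as np

def activation_energy(T, R):
    """Fit the Arrhenius form R(T) = R0 * exp(-U / T) (k_B absorbed into U):
    ln R is linear in 1/T with slope -U, so a degree-1 polyfit recovers U."""
    slope, _intercept = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(R)), 1)
    return -slope

# Synthetic activated data with U = 5 and R0 = 2 (arbitrary reduced units).
T = np.linspace(1.0, 4.0, 30)
R = 2.0 * np.exp(-5.0 / T)
U_fit = activation_energy(T, R)   # recovers U = 5
```

In the experiment described above, a low-temperature plateau shows up on such a plot as the slope of ln R vs 1/T falling to zero, i.e., the resistance becoming temperature-independent.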
But before leaving the early days of the high-Tc revolution, there was another event of historic significance, at least for me. It was a chaotic time, as I have already noted. And with this came some unusual surprises. I was sitting in my office one afternoon when a person previously unknown to me came unannounced into my office and said, “Hi. I’m Donn Forbes and you and I are going to start a publication called Supercurrents profiling some of the interesting people working on high-Tc superconductivity.” He showed me a copy of his first issue, which was dedicated to Phil Anderson. We talked for about half an hour, and I must say that I was intrigued by the project but even more about the man. I went home that evening and reported the event to my wife. I said, “A man named Donn Forbes came into my office today. He may be a bit crazy, but he is really interesting.” Little did I know how prophetic that observation was. Long story short, I agreed to help him, and when Supercurrents had run its course, I helped him get some scientific writing work for the DoE and APS, as I recall. Building on this, Donn then created a highly successful consulting/writing business with industry and later universities to help organize and write research proposals for government funding. As I write, he is working with Kam Moler, a KGB graduate no less, who is currently the Vice Provost and Dean of Research at Stanford and co-leading the initiative at Stanford to create a new school focused on sustainability teaching and research. To complete the picture, I should note that Donn was a Stanford undergraduate English major who then went on to study choral directing. We are now very close personal friends. Ok, David, next question.
What were the origins of the idea to study high-resistance SNS Josephson junctions? Where did that come from?
That was me, being annoyed at tunnel junctions, because they’re hard to make, and they have hysteretic I-V curves, and so: why are you guys making your lives so difficult? But to put it more seriously, [laugh] and less glibly, I’ve always kept an eye on these Josephson technologies. And so, I know how they work and what their shortcomings are. I sometimes use the term—but I don’t particularly like it—doing research that is fundamental to a technology—that is to say, if you look at a technology and you see it has a shortcoming, there are two possibilities: it is intrinsic, and therefore just tough tidbits; or it is something for which you might find a solution, and therefore a candidate for innovation. So, since in the existing Josephson junction technology circuit designers were shunting their tunnel junctions with thin-film resistors to damp the hysteresis, everyone agreed that it would be nice to have that resistor in the junction itself. But if you use a conventional normal metal — copper, or something — the resistance is too low to be practical. So, what one needed was a very high resistivity normal metal. Okay? And so, here we were, with these amorphous materials, which already have high resistivities. And we knew that they also have a continuous metal/insulator transition at which the resistivity continuously diverges. So, as I told you earlier—I mean, we thought to ourselves: maybe if one uses such a material as a barrier layer, it’ll work. And it did. But what also happens is that you end up in a regime in which transport is quantum mechanical. When you reach—now, let me specify it in terms of mean free paths—when you have mean free paths that are large, one can treat transport in metals pretty much as classical diffusion. I mean, classical in the sense of not quantum. Don’t even worry about the quantum mechanics. The existence of a Fermi energy, okay. But the electrons are diffusing around classically. 
But, when the mean free paths get of the order of the Fermi wavelength, the classical picture breaks down, and the wave nature of the electrons comes into play. Interference between the electrons’ wave functions begins to matter. The effects of this interference are profound. For example, the electrons localize due to destructive interference. This is called Anderson localization. So, if you want to study Anderson localization, you want a highly disordered metal. And certainly, if you were looking in 2-D, you’d want a thin film. Well, if you make a thin film of a conventional metal, its resistivity will increase as you make it thinner, but that’s all from boundary scattering. By contrast, in an amorphous metal, the mean free path is uniformly very short, and remains unchanged as the thickness of the film is reduced. Importantly, this means as one makes the film thinner one can go continuously from 3-D to 2-D at constant mean free path – voila, a model system. However, the full situation is more complicated. As argued by Mott, as the electrons localize, Coulomb interactions start to matter and a Coulomb gap may form in the density of states. This is called a Mott insulator. In the end, we have here at a personal level a friendly competition between Mott and Anderson. [laugh] I mean, how’d you like to compete with either one of them? But as to whether Coulomb interactions win out, or localization wins out, is still an active question in general. But in the material of interest to us for high resistance SNS junctions, amorphous Nb-Si alloys, there is clear experimental evidence that a soft gap forms in the density of states as the metal/insulator transition is approached from the metallic side. In the fancy language of modern scaling theory, renormalization, the system flows to a correlated insulator, not to simple localized states. There is an important caveat here, however. The results I just described apply only at long distances, not locally. 
But, the barriers in our SNS junctions are thin, on the order of 1 to 20 nm. So, we are simultaneously making a new device and studying localization of interacting electrons at short length scales. That is largely uncharted territory, certainly experimentally.
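[Editor's illustrative aside: the Anderson localization just described — electron states localizing once disorder makes the mean free path comparable to the electron wavelength — is often illustrated with the textbook one-dimensional Anderson tight-binding model. This is a generic sketch with arbitrary parameters, not a model of the amorphous Nb-Si alloys themselves.]

```python
import numpy as np

def mean_ipr(W, N=200, seed=0):
    """1D Anderson tight-binding model:
        H = sum_n eps_n |n><n| - (|n><n+1| + |n+1><n|),
    with on-site energies eps_n uniform in [-W/2, W/2] (hopping t = 1).
    Returns the mean inverse participation ratio (IPR) of the eigenstates;
    a larger IPR means the states are more spatially localized.
    """
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, N)
    H = (np.diag(eps)
         - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1))
    _vals, vecs = np.linalg.eigh(H)          # columns are normalized eigenstates
    return float(np.mean(np.sum(vecs**4, axis=0)))

ipr_clean = mean_ipr(W=0.1)   # weak disorder: nearly extended states, IPR ~ 1/N
ipr_dirty = mean_ipr(W=5.0)   # strong disorder: localized states, much larger IPR
```

Increasing the disorder strength W shrinks the localization length, and the IPR rises from the extended-state value of order 1/N toward order one, which is the localization the text describes.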
Mac, you’ve written that KGB dealt with a perfect problem for Pasteur’s quadrant. What is Pasteur’s quadrant?
[laugh] I may not be the “perfect person” for that question, but let’s go ahead.
What is Pasteur’s quadrant, and what’s the perfect problem?
Alright. First Pasteur’s quadrant-- well, let me go back. Historically, in the ‘30s and certainly after World War II, people would have said there’s fundamental research and there’s engineering. And innovation occurs through the transfer of fundamental knowledge to technology. Think of them as two ends of a line. Applied physics would be considered to be in the middle. The important point is that the flow is presumed to be from left to right, from fundamental to technology. But history simply does not bear this out. Much fundamental work arises on the technology side of the line and flows back to the fundamental side. Thermodynamics is a classic case. Modern information theory is another. Not to mention the quantum Hall effect and fractional charges in semiconductors. And of course, superconductivity. The problem is that the applications-oriented stuff is not intimate with the more fundamental stuff, and that just doesn’t make sense – not in general nor to me personally, because I do both. So, you know, I’m not in the middle. I’m not on one end or the other. I live on both ends, so there’s something wrong with that linear picture. The linear picture goes back to Vannevar Bush and his famous report, “Science - The Endless Frontier,” written just after World War II. Now to be fair, Bush was making a case to continue federal support for science going forward. He was doing God’s work. The problem, at least in my mind, is that this point of view got absorbed into the institutions of physics and their value systems. Fortunately, Donald Stokes challenged this view and introduced the notion of Pasteur’s quadrant. He proposed that the straight-line concept was insufficient to capture the realities of science and technology. He argued that a better pictorial representation is a square divided into four quadrants. Consider a square. Divide it into quadrants. Okay? Now there are two axes. 
One axis, say the y-axis, reflects how fundamental an activity is, increasing as you go up. And the x-axis reflects utility. See? There is now an upper right-hand corner that reflects both a high degree of utility and a high fundamental nature at the same time. Welcome to my home. Now Stokes also had the wisdom and literary good taste not to describe his four quadrants with dry prose. He took a more poetic approach. The upper left quadrant was named after Bohr, the lower right after Edison, and the upper right after Pasteur. And the motivation was to show that there is a quadrant where—let’s put it this way—practical problems and fundamental understanding come cheek-to-jowl. And his exemplar was Pasteur, who discovered microbiology by trying to figure out why the milk in Lille, France, was spoiling-- a true story. Recently the name of this quadrant has, to my taste, become bastardized. It’s called “use-inspired research,” which I think is an unfortunate term. It clearly is less elegant. But more importantly, it suggests that the motivation comes from some existing “usage” -- a problem dictated by a known technology. It misses the motivation to do some fundamental work because it seems to have potential utility, particularly where no previous technology exists. Put most ambitiously, the potential to create whole new technologies. In addition, today there is profound intellectual cross-fertilization between the Pasteur and Bohr quadrants. Take quarks. Will we ever create a technology using them? Well, maybe not, but certainly in condensed matter physics, we are regularly discovering new states of matter and physical processes that inform high energy physics, and vice versa. Two examples are the quantum Hall effect, which exhibits “particles” with fractional charge, and superconductivity, in which topological defects, vortex-antivortex pairs, fluctuate out of a “vacuum” state. It’s just a bit too much to be accidental. 
Interacting systems have some common theoretical aspects, be it high energy physics or interacting fermions in condensed matter physics. And there is a striking irony here. One school is the archetype of reductionism, while the other is touted as a bastion of emergence. My colleague Bob Laughlin deserves much credit for taking up the mantle of Phil Anderson in making people face up to this irony. And to bring astrophysics and gravity into the picture, I recently refereed a paper demonstrating that some of the phenomena at the surface of a black hole have analogs in classical nonlinear transmission lines. The equations are the same. The physical interpretations are very different, obviously. [laugh] You can call it prosaic if you want. But I don’t buy it. It’s interesting and may end up illuminating both black hole physics and nonlinear dynamics. Now let me end this polemic with some personal observations. Frankly, I am tired of the seemingly constant squabble about what is the most important subfield in physics. I suspect it is a fool’s errand. Wait and it will change. We need to evolve the institutions of physics to be more catholic and agile. In the meantime, in my opinion, physics as a body of knowledge has been enriched by the creation of applied physics departments in addition to physics departments. At Stanford they are even in the same school, the School of Humanities and Sciences. This separation has allowed each to do its own thing, and when they need to talk, they can and now do. And when they have something to teach each other, they can and now do. There are now faculty who are in both departments. But it is probably just a bit too much to unify these departments under one roof unless you have a very big department. MIT does it that way, and it works. I mean, don’t get me wrong. 
But here at Stanford, we’re far better off having an applied physics department that can go out and do crazy things—you know, like creating a group doing biophysics, and earlier including materials physicists as well as condensed matter physicists, or whatever else makes traditional physics departments a bit nervous. “But is it physics?” And, as in my personal case, nonlinear dynamics and coupled oscillators, where the theory is largely done in applied math departments. One view that I can sign onto is that “physics is what physicists do.” Full stop.
Mac, you’ve talked about KGB’s work in high-temperature superconductivity as “creative chaos.” The creative part I get, but where is the chaos?
Well, certainly the early work was. Look at it this way: when some large percentage—[laugh] like 75%, or something—of the condensed matter and materials physicists in the world all of a sudden converge on a problem, and nobody has good, well-characterized materials or knows quite what to do, but everyone is doing what they know how to do, there is a lot of spinning of wheels. That is to say, when there are so many people doing the same thing at the same time on dubious materials, it tends to be a little chaotic. Okay? It is also true, however, that eventually understanding comes out. There start to be decent, if not perfect, materials and plausible consistency in physical measurements. Undisciplined theories and widespread hyperbole start to get reined in. Also, new physical characterization tools are developed, perhaps most notably angle-resolved photoemission. New routes of synthesis are developed, particularly for thin films. Electronic structure and many-body theorists begin to listen to one another, if still somewhat awkwardly. Computational theory comes of age. Also, the story of these early days would be incomplete without consideration of applications. Here, let me tell that part of the story from the perspective of KGB. In the early—chaotic—days, many groups were trying to establish the current-carrying capacity of these new superconductors. This was critical to establishing their utility. The first measurements, including some of our own, found very low critical current densities. Clearly this was not good news. But these disappointing data were taken on polycrystalline materials, and it rapidly became clear that the culprit was their grain boundaries. For some reason the grain boundaries were not transparent. Being steeped in thin film growth of complex materials, our group recognized that epitaxial growth of these materials should produce largely grain-boundary-free material.
Relatively quickly, we were successful in growing nearly-single-crystal films of 123 YBCO and found that they exhibited high critical current densities. Praveen Chaudhari and Jochen Mannhart at IBM were concurrently exploring epitaxial films and independently demonstrated their high current-carrying capacity. Both papers were published immediately by PRL. Everyone breathed a sigh of relief. In principle the cuprates were of technological interest. But who’s going to use epitaxial films in a magnet or a power line? Well, Bob Hammond in our group—our expert on thin-film deposition—rose to the challenge. With a materials science PhD student, he demonstrated—and I’ll just give you the short version—that it was possible to deposit buffer layers suitable for epitaxial growth of 123 YBCO on practical substrates, like stainless steel tapes. He used Ion Beam Assisted Deposition, so-called IBAD. When aligned along one of the channeling directions of the buffer layer in single-crystal form, the ion beam strongly favors single-crystal growth by etching away all misaligned nucleating grains. Kudos to Bob. His process is now used commercially to produce kilometer lengths of essentially single-crystal 123 YBCO tapes. So, that’s one of the good stories about how out of the chaos a really important result emerged. To bring this story up to date, there is a new idea in the nuclear fusion world that because of the very high magnetic field capability of 123 YBCO at low temperatures, it becomes feasible to consider very small fusion reactors that can still confine the plasma, in marked contrast to the usual systems like the tokamak being built at ITER. They are of course thinking in terms of the YBCO tapes that Hammond’s work makes feasible.
Brian Moeckly, a former employee of Conductus—the company that we, along with some colleagues from Berkeley, formed to commercialize high-Tc superconductivity—is now building up a manufacturing facility in Milpitas, California, for one of the two small start-up companies seeking to commercialize small fusion reactors. So, that’s another connection back to KGB. Brian was a PhD student of Bob Buhrman at Cornell, my undergraduate and PhD alma mater. But it was at Conductus that he got involved in IBAD, and he has been one of its champions ever since. And so, it’s an interesting set of connections. In any event, if you had told me in the early days of high-Tc that technologists would be making kilometers of a cuprate superconductor for a small tokamak reactor, [laugh] I would have said you’re nuts.
Mac, what’s been the impact of digital electronics on Josephson junctions?
I’d put it the other way around: what’s been the impact of Josephson junctions on digital electronics? In highly specialized applications where speed is of the essence, there’s some work going on. Hypres, a small company north of New York City, sells an Advanced Digital-RF Receiver for both government and commercial markets. The leader of this effort is Deep Gupta, who was a postdoc in KGB working on novel Josephson junction circuit concepts before he went to Hypres. There are almost certainly NSA-type applications about which I have no details. NSA has always supported superconducting electronics as a promising high-speed technology. The problem in the case of more general digital systems is that they need lots of memory. And Josephson digital electronics does not have a high-density memory technology. That’s presently its Achilles heel. Of course, many groups are working on possible solutions. Stay tuned. And in the quantum computing domain, Josephson circuits are leading the pack, in no small part because there is extant a highly developed large-scale integrated circuit technology.
Mac, a broad question about the search for new and better high-temperature superconductivity. What have been some of the most exciting advances, both in materials and in new physical phenomena, that have made this a real research focus for you and your collaborators? And what are some of the fundamental physical limits to high-temperature superconductivity that might put a cap on some of that optimism long-term?
Well, as we discussed before, I think it’s phase fluctuations. And I think it’s a very serious—and fundamental—problem. Okay? And if you accept that, the next obvious question is: how can I best minimize that limitation? The answer is pretty unambiguous. I discussed this point at length in a special issue of the MRS Bulletin on superconducting materials. One wants materials that have a sufficiently high, but not too high, pairing interaction—high pair density, therefore high electron density—and that are three-dimensional. Now to be fair, this result assumes a single-band superconductor, which the cuprates are in effect. There are some very innovative proposals that seek to find ways around this problem. For example, my colleague Steve Kivelson, who is one of the original articulators of the phase fluctuation problem, did a little model calculation that I think is brilliant. What he proposed is a bilayer geometry in which one has a normal metal layer on top of an insulating layer—or it could be conducting, but anyway—an insulator in which there are what are called negative-U centers. If positive U is repulsive, negative U is attractive. It’s just a way of getting superconductivity without worrying [laugh] about whether you’ve got phonons or excitons, or whatever—just something that would cause the electrons to pair up. So now, what could happen? Consider an electron moving through the metal layer. It can hop—tunnel—onto, and then off, a negative-U center, and then a second electron comes along and does the same thing. The net result is an attraction between the two electrons. The key here is that because the region of high electron density and the region with a strong pairing interaction are physically separated, you may have the best of both worlds. It’s known as proximity-effect pairing. Interestingly, this model system is basically equivalent to tunneling between bands in a two-band material.
Or if you prefer thinking in real space, imagine a unit cell in which at its center there is a negative-U site, and on the surface of the unit cell the conducting bands are located. My fingers are crossed. Finally, I follow with great interest the claims of room temperature superconductivity at high pressure in the hydrides. Okay, presently they may be impractical, but from the fundamental point of view, I note that they meet all the criteria to minimize phase fluctuations.
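The bilayer idea described above can be written schematically in a generic negative-U model form. This is a sketch for illustration, not a quotation of Kivelson's actual calculation; the hopping amplitudes t and t', the site energy ε_d, and the attraction U are illustrative parameters:

```latex
% Metal layer: itinerant c electrons with hopping t.
% Insulating layer: localized sites d with on-site attraction -|U|,
% coupled to the metal layer by tunneling t'.
H = -t \sum_{\langle ij \rangle, \sigma}
      \bigl( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \bigr)
  + \varepsilon_d \sum_{l,\sigma} d^{\dagger}_{l\sigma} d_{l\sigma}
  - |U| \sum_{l} n^{d}_{l\uparrow} n^{d}_{l\downarrow}
  + t' \sum_{l,\sigma}
      \bigl( c^{\dagger}_{l\sigma} d_{l\sigma} + \mathrm{h.c.} \bigr)
```

Virtual tunneling of two metal-layer electrons onto and off of the same negative-U site (the t' term) induces a net attraction between them, while the high electron density stays in the metal layer. That separation of roles is exactly the "best of both worlds" point made in the interview.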
Mac, long-term: quantum superconductivity. Is this going to be a game-changer?
Well, we touched on quantum computing earlier. It certainly could be a big deal. But you ask a different, very interesting question. Since the Golden Age, superconductivity research has slowly but inexorably evolved into the quantum regime. Recall that BCS is a mean-field theory of pairs. It does not include quantum fluctuations. By the number-quantum phase uncertainty relationship, where the operative number of pairs is small, fluctuations of their quantum phase will be large. It follows that small bits of superconductor will not have well-defined macroscopic quantum phases. It follows further that small Josephson junctions will be subject to large quantum fluctuations. The key parameter here is the capacitance of the small junction, which acts like the mass in a quantum formulation of the dynamics of a Josephson junction. Small mass, large quantum effects. An interesting aspect of this regime is the presence of fermions in the system—normal electrons, shunt resistors—which can decohere the system. The theory of such effects was masterfully worked out by Caldeira and Leggett. The understanding of such systems is relatively well developed and, of course, is one of the foundations of the quantum computing applications. But quantum effects also manifest themselves in superconductors with small BCS coherence lengths, which is in effect the size of a Cooper pair in the material. Small pair size means that the number of pairs inside the volume of any given pair is small, and hence quantum fluctuations are important. And, as with thermal fluctuations, dimensionality is also important. To be more specific, the focus at present is largely on the issues of quantum phase transitions in superconductors.
In 2-D thin films, one such transition is the superconductor/insulator transition, particularly in a magnetic field, where it appears that on the superconducting side the pairs are mobile and the vortices localized, whereas on the insulating side, reflecting duality, the vortices are mobile and the pairs localized. More controversial is the purported superconductor/metal transition, where there remains some uncertainty even at the experimental level. In the world of fluctuations and Tc, there is much evidence that as the pairing interaction increases, the pairs become localized and hence Tc is reduced by large quantum phase fluctuations. These and related issues are now center stage in superconductivity research and will eventually be resolved. With the understanding so generated, it is anybody’s guess how much further quantum effects will impact our understanding—or applications—of superconductivity as laid down in the Golden Age. Perhaps we will speak of some sort of “Quantum Age” of superconductivity. I certainly hope so.
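The number-phase argument above can be made concrete with the standard textbook Hamiltonian of a small Josephson junction; the form below is generic, not specific to any particular device discussed in the interview:

```latex
% Number-phase uncertainty: few pairs implies large phase fluctuations.
\Delta N \, \Delta\varphi \;\gtrsim\; 1

% Small-junction Hamiltonian: the charging term plays the role of
% kinetic energy, with the junction capacitance C acting as the "mass".
H = 4 E_C \hat{N}^2 - E_J \cos\hat{\varphi},
\qquad E_C = \frac{e^2}{2C}
```

When the charging energy E_C becomes comparable to or larger than the Josephson coupling E_J, that is, when the capacitance is small, quantum fluctuations of the phase grow large. This is the "small mass, large quantum effects" statement in the interview.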
Yeah, that’s why it’s interesting.
Mac, let’s move on to the administrative side of things. I’d like to ask sort of an overall question about your tenure, first as department chair and then dean, and then dean again. That is —
No, I wasn’t. I was only dean once.
Okay. I’ll have to double-check my notes here.
No, I know what you’re saying, and I should have said something to you. I apologize.
There is a change in your deanship along the way.
Yeah, there was a deanship, but hold on just a second. Let me—yeah. I think where the confusion arises is that I was effectively the founding director of GLAM and then later formally the director of the lab after I stepped down from my deanship.
And then other things outside of Stanford.
Okay, now, ask your question again. [laugh]
So, this service, was it meaningful to you? Did you enjoy this role?
I mean, some people become chair, and then even dean, because they’re tapped to do it, and they just say “yes.” Others see an opportunity to change things. I’m curious where you fit on that spectrum.
Ah ha, now I get it. I understand. I think the situation is a little—a little more complex, though, at least in my case. So, let me walk through the stages. I’m answering your question, and I will try not to go on [laugh] too long, I hope. Being a department chair is a duty you do for your colleagues because it’s important for the department to be run well, and after you have been the chair, you go back to your research or teaching or whatever else you want to do. So, you need good chairs. You need chairs that want to do the job, but mainly you’re paying back for those who did it when you were younger. It’s part of the natural flow of things, like generations in a family. Okay? And you identify with the family, and what it represents, and all that good stuff. Administratively you report to a dean, in my case to the dean of H&S. To start an interdisciplinary lab—to actually found that lab—that’s harder. Okay. It’s harder work, because now you’re entering university politics—and you’re talking about a lot of money. You’re talking about new space and new administrative structures, both of which cost money. Here you’re negotiating with the provost. And since GLAM was a multi-disciplinary lab, not a department lab, there is a need for diplomacy with the deans of other relevant schools and department chairs. In our case, the relevant departments were physics, applied physics, and chemistry in H&S and materials science and electrical engineering in the School of Engineering. So, now one’s really in deep water, because there is rarely complete buy-in from the leadership of all the relevant administrative units. And I think it is fair to say the dean of the School of Engineering was never enthusiastic. He was not a big supporter of materials physics, nor of interdisciplinary labs in general, which administratively report to the Vice Provost and Dean of Research, not a school dean.
And frankly, I did not have a good personal relationship with the then dean of the engineering school, although we knew each other well. However, the really fundamental issue here is a matter of culture. The School of Engineering has a top-down culture, whereas those of us from the School of Humanities and Sciences were used to, and liked, a bottom-up culture. And you know, when it’s all over and you have moved in, those issues don’t just simply go away. They’re built in, and one just has to learn to work with them. It is not that one side is right and the other side wrong, only different. Vigilance and, in our case, success as a lab helped in the long run. In the end, it is possible to feel good and enjoy it. And I do, and I have. When you become a dean, it’s at such a high level that the benefits of what you do are highly distributed and very long term, and not necessarily related to your parochial interests. But they can and must be aligned with your sense of academic values. You need to relate to and value the school with which you are entrusted. You must want to make your school the best of its kind. And if you don’t, you should not be the dean, even if you aspire to be. So, it gets harder as you go up, and it gets more removed from the things that you love to do. But at the core it’s something you do so that those who follow you will inherit a better school. While I believe what I have just said, it is all a bit abstract, and certainly very high level. More interesting, perhaps, is what actually happened at each of these levels. The Applied Physics Department was created by some very exceptional, far-sighted people who were naturals in the sense of Pasteur’s Quadrant, in my parlance. It has been very successful. Just consider the number of members of the department who have founded independent labs, served as school deans or associate deans, or vice provosts of research, not to mention presidents of the APS.
To be chair of the department was not an exercise in crisis management or anything like that. The department had, and still has, no laboratory space of its own, and all faculty do their research in one or another of the university’s Independent Labs—indeed, founding many of them. So, the job as chair was enjoyable. Your job was to maintain quality in the faculty and attend to diplomacy with each of the departments and Independent Labs with which Applied Physics had strong connections and intellectual interests, and sometimes future prospects. Many of the faculty had joint appointments with various departments. However, as time went on, it became clear that the trifurcation of the physics enterprise at Stanford among Physics, Applied Physics, and SLAC was causing some overall “corporate” problems. Historically, the most serious tensions were between Physics and SLAC. But Applied Physics had a major presence at SLAC through SSRL, so we were not indifferent to SLAC. Our relations with Physics were by and large good, except for their tendency to forget to involve us in planning that impacted us, such as teaching and faculty appointments. I would describe it as something akin to “benign neglect.” There were no joint appointments between our two departments. More serious was the growing overlap of interests between Physics and Applied Physics in the condensed matter physics area, in atomic and optical physics, and even in theoretical physics. At this time, Sandy Fetter was chair of Physics and I was chair of Applied Physics. We got together and agreed that it was in everyone’s interest to address these problems in the larger physics enterprise at Stanford or suffer the consequences. Burt Richter, who was then the director of SLAC, had independently come to the same conclusion, and we formed a group including the three of us to assess the situation and propose solutions, many of which amounted to good ongoing communications, particularly over faculty appointments.
So far, all this is working very well, and there is good community esprit de corps. There is much more here to offer other institutions trying to reform their physics enterprise than is appropriate here. I will only mention that a two-part video of the history, Growing Pains of Physics at Stanford, is available on the website of the Stanford Historical Society. A new colleague in the department who had watched the video said to me, “Wow, you guys did not mince your words.” I guess he found it interesting. And now, the formation of GLAM. It was really an imperative in my mind, and Ted’s and Aharon’s as well, that if we were to really have a world-class condensed matter physics effort at Stanford—call it whichever quadrant you want—we were going to have to create a new laboratory. The Ginzton Lab, where we were, was supportive. They wanted to grow, too. I mean, the quantum electronics people were strong and needed growth, and we were strong and needed growth, and it was just going to require new real estate to do it, and interfaces with a wider set of departments. So, they were supportive, as was the Applied Physics Department as a whole. The problems weren’t there. The most serious problem was that we were pushing materials research, and the dean of engineering wanted to build an integrated circuits fabrication lab. He had a different priority and was openly not supportive. The university could not afford both. In the end, the president and provost yielded to the dean of engineering, so we were stuck with a fait accompli. But shortly after this, Ted, Aharon, and I decided that if we were to get the successful condensed matter physics group that we wanted, we would have to move and do the best we could. So, when the new integrated circuits lab was built, that cleared out a lot of space in the McCullough Building, where the NSF-funded Center for Materials Research already existed. We agreed to move there, and the provost agreed to renovate the building.
We won a $1 million NSF construction grant as well. A healthy cohort of faculty in physics, materials science, and chemistry expressed interest in joining this new lab. It was a start, at least. And then our fortunes really changed. The Packard Foundation offered Stanford a gift to enable the construction of a wholly new Science and Engineering Quad, the so-called SEQ. The McCullough Building would be at a corner of this new quad. And then it got even better. The Moore family—Betty and Gordon Moore—donated money to create a new building for synthesis of advanced materials that would be adjacent to McCullough and part of the new materials laboratory. Trust me, there was still much hard work to do, but that there would be a new lab was clearly established. Eventually, it was named the Geballe Laboratory for Advanced Materials, commonly referred to as GLAM. While the renovation of the McCullough Building and the construction of the new Moore Building were going on, the university began a search for a new dean of the School of Humanities and Sciences. I was too busy with construction to pay much heed, but I did get a heads-up that I was pretty high on the list of candidates. And when I was asked to meet with representatives of the search committee, I had to pay attention. Apparently, my credentials were attractive to them. I was by then a member of the National Academy and had a good track record in administration. And I was visible by virtue of our success in creating a new lab. There were some concerns that I was an applied physicist and that I would not be sympathetic to the humanities. That last concern led to a good chuckle at dinner with my wife, who was a successful regional artist. At the same time, I had served the school in various academic capacities, such as on the Appointments and Promotions Committee. On the other hand, some of my friends in the engineering school advised me not to take on such a dysfunctional school.
And then I remembered that once, at a meeting with the provost, she seriously described the school as a ward of the state. I do not want to be gossipy here, but there were clearly some problematic perceptions of the school. I said to myself, “This is not good—maybe I can help.” I accepted the deanship. What I didn’t know, because no one knew—not even the provost—was that the school was on a path to very deep financial problems. This became clear, however, during my first year as dean, after a review of the budget for that year, which predicted a structural $5 million deficit that would grow to $30 million a year after five years. At the first sign of a problem, the provost insisted, quite correctly, that some of her budget people come in and be part of our review. So, there was no question about the validity of the results. There was a big structural deficit emerging. The underlying problem was that a significant part of the school’s income had traditionally come from a large number of small sources within the university that were drying up, just as the costs to maintain—never mind enhance—the quality of the school were going up, due significantly to the cost of hiring the best new faculty—salaries, housing assistance, renovations, start-up funds, etc.—in an increasingly competitive academic market. It simply costs more the better you get. And, not surprisingly, as the dean, I was in a tough spot. I was urged by the provost to cut the size of the faculty. I said that was not fair right after the university had provided a significant number of faculty billets to the school, with no associated budget increase, to encourage the faculty to contribute teaching effort to the new initiative in undergraduate education. I pledged not to grow the school but to hold the faculty size at its present level. There never was a formal agreement on this matter, but it is what the school did.
We also managed not to run deficits in the operating budget, which means it was a tough budgetary time. Well, the faculty were not happy, as you can imagine. Many said, why can’t we fill the billets that we were promised? To which my mantra became, “Billets are only a piece of paper; what matters is whether we have the financial resources or not.” Then these faculty argued that surely the university had the resources to fix the problem—demonstrating a naïve view of how fungible the resources of the university are when you’re talking about continuing budget-base increases, as opposed to one-time expenses. As part of our financial review, we also did an analysis of our finances compared to those of the Faculty of Arts and Sciences (FAS) at Harvard. It turned out that the total faculty count in the two schools was about the same. Moreover, they were similarly distributed among the humanities, the social sciences, and the natural sciences and mathematics. But we also found that the annual operating budget of FAS at Harvard was roughly $30 million a year more than that of H&S at Stanford—hence roughly equal to our projected budget shortfall. Throughout the period we were carrying out these analyses, I reported the evolving situation to the H&S Council, the principal high-level advisory group for the school. Its membership included influential friends of the school and several present or past members of the Stanford Board of Trustees. Walter Hewlett was also a member of this group. I must say they were incredibly supportive. One former Stanford trustee, who had been on the H&S Council a long time, confided to me that he had suspected that there was a problem with the finances of the school but could never put his finger on the matter. I really appreciated that gesture. And then, suddenly, it got a whole lot better.
After the meeting of the Council at which I gave my final report on our financial analysis of the school, Walter Hewlett asked if he could come by and chat. Walter never said much at the Council meetings. He’s just quiet by nature. But he always came to me afterwards and shared his thoughts and asked me questions. Walter came in and said—and now I turn to direct quotes, as I will remember for the rest of my days exactly what was said. Walter started: “Mac, I want to commend you on the work you’ve done to understand the school’s finances.” And I said, “Thank you, Walter.” And he said, “Well, we’re just going to have to do something.” And I thought, “Okay, Walter. We’re going to have to close down four departments soon?” [laugh] “What do you have in mind?” Now bear in mind that the school’s development person was also there—just the three of us. And Walter said, “Well, I think there needs to be a gift from the Hewlett Foundation to the school.” And I said, “Walter, how much are you talking about?” And he said, “Well, we’ll have to look sharply at the numbers, but several hundred million dollars.” Well, first of all, I looked over [laugh] at the development guy, whose face was absolutely white, and then I said, “Walter, I think you need to talk to the president.” In the end, they gave a gift of $400 million, $100 million of which was to go to the undergraduate education innovations that President Casper had introduced, and $300 million to H&S—unrestricted, to be matched. Well, 3 times 2 is 6. Take the rule of thumb for endowment payout, conservatively 5 percent. It doesn’t take a genius. Five percent of $600 million is $30 million. How’s that for a happy ending? I was exhausted. But it’s a nice legacy. And there is another member of the H&S Council, Lorry Lokey, who came to our aid. In addition to our operating budget deficit, the school was also facing major problems in its capital budget needs.
There was a pressing need to create modern facilities that met code for the Chemistry Department and also to provide relief from the space pressures in the Biology Department. In one of the Council meetings during this time, I also mentioned these needs on the capital investment side. Lorry is among the most generous donors to higher education, both at Stanford and at other universities. After this meeting, Lorry pulled me aside and said that he wanted to help with the need for a new building. It was ready-made for his style of philanthropy. His generosity led to the construction of the Lokey Laboratory, conveniently located adjacent to the chemistry and biology departments in the Biology and Chemistry Quad—the so-called BCQ. Now, to be honest, I’m not sure that H&S would have been the provost’s and the president’s first choice for such large gifts. On the other hand, I don’t doubt that they were committed to solving the school’s problems. Also, as I look back, there had been some indication that the Hewlett Foundation was thinking about a gift. They were very discreetly asking questions. But throughout my deanship I was getting it from both sides. I had used up my silver bullets, and eventually the president and provost asked me to step down. And to be clear, I was not treated disrespectfully or anything like that. And then later, when it was announced publicly that there was a major gift coming to the school, I think some of my strongest critics may have felt a little chastened. And I confess I had some trouble holding it together when the provost acknowledged me before announcing my successor to the Faculty Senate, and that body stood and applauded. But I was ready to go. The ward of the state was stirring.
[laugh] And soon enough, you become involved with APS in corporate reform.
Yeah, but that was not driven by a financial crisis.
Mac, what were the origins of corporate reform, and were you involved from the beginning of those discussions, given that the presidential line of succession gives you a sort of long gestation period until you become president?
That’s right. Yeah. Just so you will understand what I’m saying, do you have the clippings from the APS News that I sent you?
Are they handy?
I think—let me—it might be easier if you get them out. It’s much easier to tell this story with graphics.
I can, yeah.
Alright. Thanks. Can you go to the page with thoughts from past APS presidents?
Okay. From these quotes, you can see that there had been appreciation of the need for reform going back quite some time. But for various reasons, it just wasn’t the right time to tackle it head-on. The quote from Myriam is important. She captures what I think was the most common, and valid, caution about corporate reform. In any event, the issue began to come to a head when Bob Byer was president. During that time frame, Barry was the past president, Bob Byer was the president, Michael Turner was the president-elect, and I was the vice president, at the bottom of the line. [laugh] Okay. Within this group there was a clear sense that APS needed change. So, Bob Byer began some initial planning. He initiated some studies, got people together to openly talk about it, and hired some consultants to advise us. But there was still some resistance on the part of the APS Executive at the time—I’ll come back to that. Also, in my mind, the consultants turned out to lack the sophistication that we needed. So, in the end, it was a start, but we needed to regroup. In the next year, when Michael Turner was president, he picked up on Bob’s efforts and put out another RFP for consultants. This time around, the winning group had a very good proposal, and they were terrific people who somehow had a good feel for the physics culture and played at a high intellectual level. And so, we were more emboldened. Finally, Michael said, “Okay, I either push the button, or I don’t.” He was right. The time had come to decide.
It was a binary choice, you’re saying.
Yeah, it was a binary choice. You either do it or you don’t, or you try to deal with it slowly. But nobody felt that slowly was a good option. The consultants were strongly urging that if you’re going to do it, the quicker the better. In any event, the consensus in the presidential line was that now is the time, or it’ll be a very long time before it happens. And so, to his credit, Michael made the decision to have corporate reform—with clear support from the rest of us. But he’s the one who made the decision. And then, [laugh] shortly thereafter, he moved on to being the past president, and the job of execution fell to me. In the end, it was the two of us who carried the biggest burden of making it happen. And we turned out to be quite effective together with great mutual respect, despite—or perhaps because of—very different styles and personalities.
Okay. Now look at the figure of the proposed governance structure. For reasons that I will come back to, before corporate reform, the APS was organized as shown in the left box. So, let’s talk about that first. We are a membership organization. Okay? So, the members are the “shareholders” to whom the APS leadership is ultimately responsible. And then there was the old APS Council, and below it in the hierarchy was an Executive Board, which was necessary because the Council was so big. The Council represents all the APS Divisions, Units, etc. I mean, there had to be some smaller group that looked after things on behalf of the Council at a more detailed level—budgets and strategic planning for example—but the Executive Board did not bear fiduciary responsibility for APS. That legally lay with the Council. Moreover, all three in the executive line were ex officio, voting members of the Board and the Council—an awkward arrangement for oversight of the three executives. Now, my understanding is that the structure shown was adopted in earlier times—when there was a CEO—and many people worried that the position had become too powerful. I think it is fair to say that Myriam’s concerns reflected in part that history. And I also think that the desire to keep APS a member-driven organization was universally supported. At the same time, in my opinion—and I was not alone—the governance structure as shown in the figure was incapable of providing good governance. It was not structured to ensure strong fiduciary responsibility, nor effective oversight of the executive. How can a group of 80 people bear fiduciary responsibility and oversight in any meaningful way? Okay? All modern corporations, including nonprofits such as APS, have boards of directors with explicit fiduciary responsibility. The new governance structure, approved by vote of the APS members, is shown on the right.
There are two top-level pieces, a Board of Directors and a Council of Representatives, which has the top responsibility for issues of science and science policy. There is by design significant overlap in leadership between the two groups to ensure good coordination. All voting members of both groups must have been elected by the APS membership. Finally, there is an elected Treasurer on the Board of Directors with the special responsibility to oversee all financial operations of the APS and to ensure that the Board always has good understanding of the financial situation of the APS. So that's corporate reform flying at 50,000 feet. For the sake of history, let me now fly lower and describe the most serious issues that arose in the reform, and how they were resolved. First, as corporate reform was proceeding, I was frequently asked to give some examples of where and how the present governance structure was inadequate. My response was to point out that it is not so much a matter of what was wrong, but whether we were in a position to deal effectively with what was coming. It may sound a little quaint now, but I stressed that we were entering a period of rapid change and that our present structure did not have clear lines of authority to react agilely and coherently, nor to think strategically in a unified manner—not to mention more conscious execution of fiduciary responsibility. Similarly, the rapid pace of internationalism in science required an APS-wide vision that included all its principal functions, like membership, publications and meetings. Also, of continuing concern even today, is how to manage the, in my view, inevitable transition to open access publication, which has large implications for the financial model of APS, which in turn depends on income from publications to pay for other member and public services provided by APS.
I also drummed home the point that our current governance and even our sense of ourselves date back to the post-World War II era when the times were very different, and therefore it was not surprising that some modernization was in order. I think most people got this last point but were anxious not to lose our souls in the process. Fair enough. The transition back to a CEO was not without its critics. Also, the three members of the then triumvirate executive line were understandably sensitive. My personal view was that we needed a CEO, or the Board of Directors would inevitably have to get their noses more into the executive’s business, which would be neither good nor fair for either. Better that the board deal with a single CEO who would be held accountable. Also, with good oversight by the board, any tendency of the CEO to become too powerful could be dealt with as a matter of good governance. I hope this is at least clear.
I’m with you.
Now there is no doubt that the most contentious issue to resolve was the relationship between the Editor in Chief and the CEO, and both of them with the Board of Directors. Strict logic would dictate that the EiC would report to the CEO, who would be the sole executive nonvoting, ex-officio member on the Board of Directors. But as argued by the then EiC, the publishing operation has always held a special place in APS due to its size, financial importance and its role in one of the declared missions of the APS: “To advance and diffuse the knowledge of physics for the benefit of humanity.” He felt that the EiC should not be administratively under the CEO and should be a nonvoting member of the Board. He felt this so strongly that he sought members outside those officially part of the reform process to support his arguments. In response, I formed an ad hoc committee to consider the matter and make a recommendation. As is often the case in human affairs, words get interpreted differently by different people. Differences of opinion continued between the EiC and the first CEO about this matter even after I rolled off the presidential line. This led ultimately to the departure of the then EiC. For the record, my understanding was that it was the intent of the corporate reform approved by the membership that the EiC was administratively responsible to the CEO in all matters, but that in addition, because of the importance of publications to APS, the EiC would have a non-voting ex-officio seat on the Board. If, on occasion, this anomalous situation led to problems, it would be up to the Board in good faith to resolve the issue. Let me also note that at the time, in researching the governance of some of our sister organizations, I found that it is often the case that the EiC has a unique position in the governance structure of that organization—for example, in the AAAS with regard to the editor in chief of Science Magazine.
Where is APS today as a result of corporate reform?
Well, AIP shares a home with APS. You may know better than I. Seriously, there was a little Zoom party for Kate [Kirby] just as she was actually going out the door. In truth, she was at home in Cambridge, sitting in her kitchen. Even familiar figures of speech need to be interpreted imaginatively in this era of Covid. Now seriously, as far as I can tell, being on the outside looking in, corporate reform has been a success. There seems to be a strategy regarding open access, and a healthy introduction of new journals in response to changes in what our members are doing. The CEO showed great courage to convert that March APS meeting to a Zoom meeting at the last minute due to Covid. The staff showed the ability to pull it off. This is impressive. Also, the Council of Representatives appreciates their new and important role in the governance, whereas meetings of the old Council were seen as having been boring. It has also taken the lead in establishing a code of ethics for the APS—an issue near and dear to my heart after my experience in the Schön scientific misconduct investigation. Also, the new position of an elected Treasurer on the Board has led to more attention to some of the long-standing financial issues in APS, such as the underfunding of the endowments of the various APS prizes and awards, which were suffering from lack of attention in the past. Communication between the Board and the Council also seems good, no doubt due to an artful overlap of membership. There’s also a kind of coordinating group, consisting of the Presidential Line, the Treasurer, the CEO and Chair of the Council of Representatives that I initially set up to deal with short term issues. All in all, corporate reform was a good thing to have done. But before leaving corporate reform, there is another feature in the organization of the physics enterprise in the US that warrants some discussion. There are two organizations providing leadership in our community.
Two publishing operations as well. I refer here to the APS and the AIP. Again, as I understand the situation, the reasons for these two organizations are historical and go back to the 1930s and the post-World War II era. Physicists were the equivalent of the Covid vaccine heroes of our day. We all know the stories of how physics and physicists helped win the war. As I read the history, the nuclear and emerging high energy physics people wanted a more academic APS. The AIP was there to represent industry and the physicists who were developing communications technology. I don’t know if there was any formal or informal agreement between the two organizations, but clearly that’s how it played out. And whereas APS is a membership society, AIP is more a federation of physics membership societies including the APS. There was an implicit division of labor between the two societies that seems to have worked well for many decades. However, like the need for change in the present era within APS, AIP also is undergoing change. Both organizations underwent corporate reform, more or less at the same time. As a part of these reforms, both organizations were seeking to define themselves appropriately for the times in which we now live and/or are moving into. However, these internal reflections also uncovered areas of tension between the two organizations. David, I hope that I have been accurate and fair in my characterization of your employer.
But certainly, there was something like that going on. Allow me to describe the situation from the APS perspective. The APS leadership could see that, going forward, a growing fraction of our young members would be entering non-academic positions, mainly in industry. At the same time the, to me, overly simplified understanding of the relationship between fundamental and applied physics was breaking down. APS naturally wanted to represent all physicists more broadly than perhaps it had done in the past, including industrial physicists. Similarly, APS wanted to expand the range of its journals to reflect the full range of interests of its members. Some of these were in areas that AIP had traditionally focused on. Good will and coordination can deal with these issues to some degree, and it has. But more problematic was the desire of both societies to see themselves as speaking for all of physics. AIP viewed all the members of its constituent societies as members of AIP. APS rejected that position saying only member societies can speak for their members and that AIP was a highly valuable federation of member societies. In any event, the notion that AIP is a federation is now prominent in how AIP describes itself. I hope that such good will and coordination persists, but I am too far out of the loop to know. It also must be said that two gems that were created and reside in AIP are its Statistical Research group and its History Programs—where my illustrious interviewer resides. But for me, the good news in all of this is the comfort of knowing that both organizations are moving with the times.
Mac, you’ve done so much in the world of service and advisory. Before we get to the last part of our talk where—I know you want to speak retrospectively from the heart—I want to ask specifically. Your involvement in the Schön Commission-- what did you learn about that in terms of integrity in science that you might never have thought about before?
A number of things. Let me begin with investigations of scientific misconduct and then return to the larger question of integrity in science. Investigations of scientific misconduct are initiated when there has been a formal accusation of scientific misconduct. It is the responsibility of the home institution to take appropriate action. Okay? And in the case of universities that take federal monies, there are federal guidelines that must be honored. That document lays down what constitutes scientific misconduct, the principles for how any investigation should be carried out and some common-sense strictures to ensure fairness. It is a very thoughtful and useful document. It was developed in the Office of Science and Technology Policy under the leadership of Artie Bienenstock, a former APS president and a colleague of mine at Stanford, [laugh] with whom I shared an office for a while—so a little Stanford story there. The Federal agency that funded the research in question receives a report on the investigation, and that constitutes a form of public oversight. In the case of the Schön investigation, we assumed that Bell would choose to make the report public and wrote it accordingly. Of course, any investigatory committee is bound by their charge, which comes from the home organization. In any event, Bell did make the report public and the bottom line was that Schön, and only Schön, was guilty of scientific misconduct. It was not a close call. We had the full electronic files used to plot the figures in many of the papers in question. These files revealed unambiguously that data had been fabricated or deceivingly combined. Under questioning by the committee, Schön admitted that he did those things, while maintaining that fundamentally all the results were true, and were changed only to make the results more “beautiful.” For those who would like to see how we carried out the investigation, I recommend reading the report itself, if only for its methodology and human interest.
It is available online. But there were some important ancillary issues that warrant discussion. However, before doing that, let me note that the National Institutes of Health (NIH) has its own guidelines. I don’t claim to know what is best for NIH, but I would have found their procedures very problematic. They are highly prescriptive, whereas the OSTP guidelines provide the principles that must be honored, along with some good common-sense guidance to ensure fairness. It is definitely not prescriptive, “You must do this,” – so they give an investigative committee flexibility to deal with cases that are out of the ordinary, as was the Schön case for sure. We had 20 or so co-authors to consider. Now, how are you going to be fair to 20 people, where everybody has to be treated exactly the same? Doing so would have made an already huge task likely unmanageable. Besides, we knew from the first assessment of the facts that everybody wasn’t the same. Of course, once you have made that initial judgement, you have to focus on the remainder. And we did just that. We said, “Okay, these people are not guilty of scientific misconduct. They may have gotten into troubled waters via a collaboration, but they’re not guilty of scientific misconduct.” And then there was a smaller set that you had to really examine. I was scolded when the report came out by somebody at NIH—I don’t remember who—who said: “You didn’t do it right.” What he meant is: “You didn’t do it their way.” I listened politely and reminded him that we were not required to do so and that I thought their guidelines were unworkable in cases like ours. Then I went home and figuratively kicked our dog and literally had a drink. But let me be clear, a more dispassionate view is that there are serious differences between the medical/biological community and the physical science community in how they manage their affairs.
I have experienced such differences in other venues as well and worry that in our eagerness to create bridges between the medical/biological and physical science communities, there is a tendency to underestimate the challenges. But I digress; now, on to integrity in science.
The reaction to our report in the relevant physics and materials communities was nearly universally positive. On the other hand, we were criticized by a few for overstepping our bounds in singling out one co-author, Bertram Batlogg, for further discussion, after we had declared him free of scientific misconduct. Right?
Batlogg was Schön’s supervisor at Bell. He was also the senior author on many of the papers in question. But as I just said, the committee had cleared Batlogg of scientific misconduct, but it did not stop there. We felt—and here, admittedly we were in the gray area with regard to the Federal Guidelines—the committee felt that it would be irresponsible of us not to express our concerns about a failure of good oversight by the senior author in this case -- oversight that likely would have uncovered the problems before publication. In effect we crossed the line from misconduct to the larger issue of scientific integrity. We were in troubled waters where there were no lighthouses to guide us. Our goal was to foment consideration of scientific integrity in the larger scientific community. Lacking guidance, we established some broad criteria by which to proceed. For example, while it is certainly true that trust is an important ingredient in any collaboration, on the other hand spectacular results demand special examination-- confirmation. And such considerations, if only as a practical matter, lie with the senior author or authors. Furthermore, we were uncomfortable with the position taken by some that all authors bear full responsibility. Clearly, in practice not all coauthors are equivalent and surely the primary responsibility lies with the senior author(s) to ensure appropriate examination of the results was made. The important point here is that the Schön case highlights an important reality: in practice, investigations of scientific misconduct are restricted to the strict issue of scientific misconduct and cannot reach the larger issues of scientific integrity. I am not saying we should change how we investigate claims of scientific misconduct, only pointing out that there are important issues that are not addressed via such investigations.
There were other criticisms-- well, perhaps not criticism as much as passionate expressions of frustration, with which I was sympathetic. One was: but why didn’t you look at the higher Bell Labs management? Don’t they bear responsibility for some of this? And the answer was: no, we did not because it was not in our charge. Moreover, I would not feel qualified to lead such an investigation. I mean, you have to examine what the management/governance structure of the organization is and what are the appropriate levels to examine. Also, certainly the ultimate responsibility lies with the board of directors of any organization to get to the bottom of why something of this magnitude happened. Such internal considerations are not typically made public, even in universities. This aspect of scientific misconduct investigations has been recognized – and sometimes criticized – over the years, certainly going back to the early 20th century. Another frustration, which came mostly from earnest young people, was how they felt inadequate to assess deeply what everyone had done in a large collaboration. I touched on this point already, but I mention it again here to emphasize the dilemma young people feel over what may be expected of them: How do I protect myself? How do I do the right thing? The situation is further complicated when the co-authors come from different countries, which may have differing policies and traditions. And I must confess that when it was over and I thought back about my own life, I realize now in retrospect that I saw scientific misconduct two times earlier in my career-- neither in any of my home institutions. In both instances, I warned management but did not make anything like a formal complaint. In both cases the individual was fired but nothing beyond that, for which I might perhaps be fairly criticized.
I mean, there’s an element of human nature here. If one cares about an organization as a manager, one doesn’t want to see it get slandered, even maybe when it should get some criticism. We just don’t. [laugh]
We don’t want to do that. Okay. It seems the easy way out to just get rid of the person, but then does that do the right thing?
Well, it potentially ignores the culture and the system that allowed for this individual —
Yeah, it is kinder to the person, but it just may move the problem somewhere else. Okay, that's the story. I tell it here simply to illustrate how complex these issues can be.
But let me end on a more upbeat note. I was blessed with an excellent investigation committee. They were in fact selected by Bell, not me. But I don’t-- I salute the job they did. And-- let’s see. Oh, yes. I did a couple of things that proved helpful to all of us on the committee. I found a book that was written about scientific misconduct by two science writers at The New York Times back 30 years ago or so, and I recommended that everybody read it. It discusses many of the things we learned. As often is the case, there is nothing new under the sun. What was different in our case was the size of the questioned misconduct and the strong public interest. I also contacted some faculty members at Stanford that I knew from my days as dean, and a researcher in lying at UCSF that I had met at a dinner once. The occasion was a social dinner with my wife and one of her artist friends whose husband was the expert in lying. Small world, eh. There are two lessons I learned from these contacts that have stayed with me. First, we all lie. Whether that is bad or not depends on the context and common expectations. Sometimes it is just a reality of the transaction, and we just compensate. Second, don’t forget that sociopaths exist. They do not have the normal sense of right or wrong that most of us have. This does not mean that they should not be held accountable for misconduct, but it does mean one does need to feel some compassion.
Mac, for the last part of our talk, I want to go to things that I know are very important to you, retrospective ideas. And I first want to say: it’s a testament to your generosity that it’s important for you to talk about some of the forgotten heroes in superconductivity. I mean, for you, between your accomplishments, between the people that you’ve worked with, your name is synonymous with superconductivity. And yet, I know it’s important for you to talk about some of those names that are very important in the field that others might not know so much about. So, who are some of those forgotten heroes for you?
Well, one of my heroes, Fritz London, is not forgotten, as his name is on one of the important theories of superconductivity, but I don’t think his full contributions are appreciated by younger people. I mean, he was the father of the classical theory of superconductivity. Most people know this. But what they don’t know is how he thought so deeply. He anticipated that superconductivity was a macroscopic quantum phenomenon. He knew it had to be something like that. And so, when you read what he did, in his own words, it’s humbling. Of course, the Ginzburg-Landau theory captures the macroscopic quantum nature of superconductivity at a phenomenological level so well and in such a useful form that its authors will always be remembered. And BCS will be remembered. It is the microscopic theory. Actually, it is also somewhat phenomenological when you think about it, but nonetheless, it captures all the essence. Also, Josephson will be remembered because his name is on a device. But I don’t think the profundity of what he did is reflected in the way most people remember him.
And then there are the more humble folks whose blood, sweat, and tears went into studying all the elemental superconductors that informed BCS—for example, the isotope effect—because they are not now where the action is. The two major exceptions are Onnes and Meissner for obvious reasons. And then there are those courageous people who sought new superconducting materials. We celebrate Bednorz and Mueller, but how many of our younger colleagues remember Bernd Matthias, a charismatic champion of the search for new superconductors? I mean, it’s just the way it is. Don’t get me wrong. But it still bothers me. And coming close to home, I suspect that in a decade the young people working in GLAM will not know how much we owe scientifically to the namesake of the lab. And now coming up to the more modern era, let me take an example where I was a participant, just to illustrate my point: the issue of resistance in superconductors, where I have already argued some unification and distillation of the fundamental ideas is needed pedagogically. But once that need is met, the roles of the various people who illuminated the family of effects that provide the input for a unified theory will drift from our consciousness. As is so often the case, unification of ideas in physics comes at the price of forgetting the adventure that was required to get us there. As a historian of physics, I suspect you know what I mean. Does that help?
Absolutely. Absolutely. Mac, over the course of your career, your passion for physics has been so obvious, both in terms of your love of being in the lab, your concern over your colleagues, your willingness to do all kinds of service and advisory work. In reflecting over your life, where does this all come from? Where are the seeds for your love of physics? Where does it come from for you?
That’s a good question. There is no—it’s not simply genetic. [laugh] I mean, it never is, but let me go with that anyway. In my ancestry, there is no scientist on either side. Both of my parents were very successful professionals in the social sciences. My dad was a high-level civil servant in the Social Security Administration, and my mom—at least, early in her career—was the first professor of sociology at the University of Wyoming. At that time, my dad was regional director of the Social Security Administration in Denver. That’s where they met. So, they are people people. And when I was an adolescent, maybe even longer, I mean, I really liked science, but I also swore that I would never do anything that meant dealing constantly with people. You know, I wanted to figure out how the physical world worked, and that would be enough. So, I’m sort of singular in that regard, in my family. But I suspect that some of my social [laugh] conscience, and perhaps even some of my talents, such as I have, to run organizations and do it well, I owe to my parents, [laugh] frankly. This, and my innate frustration with organizations that don’t work well, completes the picture. But first and foremost, I think it was-- I was interested in “things” and how they worked. I know it sounds like a cliché, but this one happens to be true in my case. And this interest was manifest early in my life. There is a charming story in the family lore about me building a milking machine out of blocks one day in kindergarten, because we’d gone to a dairy farm [laugh] over the weekend. So, I always was mechanically, or perhaps better put, function oriented, even at an early age. And I think that’s what drew me to engineering initially. But you asked about physics. Remember that in high school my passion was basketball. Even so, as I got into those parts of mathematics that are more advanced—algebra, geometry—and then in college, first-year calculus,
I was drawn more and more to abstract and fundamental ideas for their own sake. I had that same feeling again somewhat later when I first appreciated what linear vector spaces were and how the special functions are all, you know, the same thing as orthogonal multidimensional vectors at some abstract level. I really liked that. And, it gave me a pictorial way of understanding. In any event, by the end of my freshman year, my interest began to turn to physics for its own sake, with perhaps a practical bent. It was the one place where everything came together for me. Also, I was beginning to understand how I think, how I understand things. And this goes back to the William Sears story. I now appreciate that he was thinking in terms of equations, whereas I think in terms of pictures, images, and the like. And that’s how I understand the world, and how I solve problems. Then I do the formal math when necessary. Let me relate another example. When I was in graduate school, everybody was learning about Green’s functions. And it was this mystical thing we had to know, but to me it was so ephemeral, I couldn’t get ahold of it. But at some point, I realized that it is just a correlation function, fundamentally like the ones I had learned about in an undergraduate engineering course in signal processing and noise. It wasn’t the concept of correlation that made Green’s functions complicated, it was using this concept where the underlying dynamics was quantum mechanical. Similarly, I was also familiar with the existence of theorems, like the Wiener-Khinchin theorem that relates correlations in temporal space with spectral functions in Fourier space. With this understanding under my belt, Green’s functions, along with linear response theory, were comfortable for me, and more useful. I understood what they did and what they were good for, but I didn’t need to master all the manipulations.
Okay, once you have the Green’s function, you can figure out the dressed mass of a particle due to interactions, making precise the notion of a cloud of correlations moving coherently with the particle – voila, a quasiparticle. Now that is interesting, and profound, even if one can’t turn the theoretical crank. The physics was clear. Besides, I have enough trouble figuring out — solving — acquiring the arcane knowledge you need to do a new experiment, even more so when invention or development of a new tool is involved. And even with phenomenological theories like Ginzburg-Landau theory, which are more physical, I still think in terms of pictures. That’s it. And I’m just good at that. Having said all this, I must acknowledge that my daughter, who is a successful architect, puts me to shame in her spatial abilities.
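[Editor's note: for readers unfamiliar with the theorem Beasley mentions, the standard textbook statement of the Wiener-Khinchin theorem is given below; the notation is generic and not taken from the interview. For a stationary random process, the power spectral density is the Fourier transform of the autocorrelation function:

```latex
% Wiener--Khinchin theorem: for a stationary process x(t),
% the spectral density S(\omega) in Fourier space is the
% Fourier transform of the temporal correlation function C(\tau).
C(\tau) = \langle x(t)\, x(t+\tau) \rangle, \qquad
S(\omega) = \int_{-\infty}^{\infty} C(\tau)\, e^{-i\omega\tau}\, d\tau
```

This is the sense in which, as he says, correlations in temporal space are related to spectral functions in Fourier space.]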
Mac, for my last question: last year, Ted Geballe turned 100. If there was ever anyone who exemplified the dictum that physicists never retire, it would be Ted. Between his amazing longevity and just the entire story of KGB, the magic behind it, had you realized at Harvard, even before you knew you weren’t getting tenure, that there was more about materials that you wanted to know—the amazing good-naturedness of wanting to bring Aharon in before you even realized how great he’d be for your collaboration with Ted. There’s really something quite special about KGB, and I know you’ve been in a reflective mode on the occasion of Ted’s 100th birthday. So for posterity, for the broader audience out there that isn’t so keyed-in to the accomplishments of KGB and what they’ve contributed to superconductivity and everything else: what do you want people, broadly conceived, to understand about the nature of this collaboration and what you’ve been able to do with these two remarkable colleagues?
I have an answer. I’ve thought about it. There’s good news, and there’s bad news. Oh, no. That’s not what to say. There are some important ingredients, and there is some poignancy. Let me put it that way. KGB will someday no longer exist. It probably effectively doesn’t exist now. I mean, the camaraderie lives on—there’s still a lot of love, [laugh] okay? But with Ted and me now so removed from the day-to-day, it leads one to ask whether it could be recreated. I tried in my comments at Ted’s 100th birthday party to describe the ingredients of KGB, what is special, its values and how it works. I can’t say it any better here, and I urge those interested to read those comments. They are appended to this interview. And I suppose that document might be viewed to some degree as a checklist for what is needed, but that would miss an essential point. KGB was not created. It emerged. The difference is critical. Even GLAM, which started with a KGB-like spirit, has evolved toward being a more typical lab. The imperatives of our time for the universities to foster interdisciplinary research ironically militate in some ways against KGB-type groupings. This comment is in no way intended to criticize the desire and need for more interdisciplinarity, nor the obligation of the universities to meet the needs of society. I am on board with all of that. However, I am also deeply committed to quality and creativity. The question is whether these desired ends are best achieved in bottom-up or top-down organizations. Some would say the train has already left the station, and that may be true. And, I do not claim to know the answer to this dilemma. But I urge those leading these new organizations to not forget the universal lessons of history, even in this era of needed change, and seek to achieve balance—equal opportunity. Only if you do, will there be the needed fertile ground out of which the KGBs of the future can emerge and blossom.
Now let me close this interview with a final anecdote. Very recently, Ted announced that he was giving his last KGB group talk. After his talk, Aharon captured the moment beautifully. It was a kind of benediction. He said, “Well, it’s been a great run.” Indeed, it has.
Mac, this has been amazing. This has been an epic journey across your career. It’s a testament to—coming in as prepared as we did, which I think really made all the difference—I want to thank you for spending this time with me, for sharing all of your insights over the course of your career, and this really and truly is going to be a historical treasure. So, I’m so glad we were able to do this. Thank you so much.
Okay. Well, you’re quite welcome.

APPENDIX

The KGB Group
On the occasion of Ted Geballe’s 100th birthday
M.R. Beasley
January 20, 2020

For those who don’t know what KGB refers to here, it is the Kapitulnik-Geballe-Beasley research group at Stanford, which, before Aharon joined, was known as TMAH – the Ted-Mac Amateur Hour. In late 1991, the Soviet Union collapsed, and I received an interesting email from a Russian colleague. It said, and I quote, as best I can remember, “There used to be two KGBs. Now there is only one, the good one”. And only a few years ago, I gave an invited talk at the annual meeting of the Physical Society of Taiwan, after which in the question period, out of the blue, I was asked, “What is it that makes the KGB group so successful?” Well, while it is nice to be admired, I am not sure I know, even after all these years, exactly what the magic is. But in honor of Ted on his 100th birthday, I will give it a try. One thing is certain: if Ted had not come to Stanford and been all that he is, KGB would never have happened. Of course, both Ted and Aharon no doubt each have their own view on this subject. Given the nature of KGB, it could hardly be otherwise. So, why would three independently successful scientists (now all members of the US National Academy of Sciences) voluntarily form a group? It is not common. One thing is clear, the cast of characters is diverse and therefore collectively broad: Ted is a native Californian, a chemist turned physicist, who is kind and generous, who has countless friends, including the 103 elements in the periodic table, and who maintains to his core that the search for new materials based on chemical and structural intuition will lead to new physics – and hopefully, of course, higher temperature superconductivity. One has to admire his track record.
Aharon is an immigrant from Israel who is brilliant if sometimes argumentative, who is as good at theory as many theorists and better at experiment than most experimentalists, and who accepts no programmatic constraints. From the discovery of the 248 YBCO cuprate superconductor to quantum phase transitions to tests of gravity at short distances, all are fair game. Oh, and he has almost as many friends as Ted. I’m a migrant from the East Coast, who others claim is thoughtful and far-thinking (hmm, as opposed to intuitive or brilliant?) and who lives in Pasteur’s Quadrant in Science/Technology space. If you don’t know what Pasteur’s Quadrant is, shame on you. It is that quadrant in Science/Technology space where the pursuit of fundamental understanding and technology exist symbiotically, mutually motivating each other. Think of Pasteur studying the spoiling of milk in Lille, France, and discovering microbiology. And apparently, unlike Ted and Aharon, I need solitude on occasion. And even outside of science, we are diverse. In music: one of us likes symphonic music; one of us likes jazz; and one of us likes rock & roll. In sports: one of us plays soccer; one of us plays tennis; and one of us is a fly fisherman. And on the culinary front (think beach parties): one of us is a Jew who roasts whole pigs; one of us is a WASP who bakes challah; and one of us is a connoisseur of the local products of Pescadero, CA. Make no mistake, this diversity is not a problem. On the contrary, it is a virtue. It may even be the point of KGB. As Aristotle first said, “The whole is greater than the sum of its parts”. Today, with less eloquence, we talk about synergy. And there is little doubt that KGB reflects this truism. The range of things that come out of the group is impressive. Personally, I like the analogy to a successful jazz combo. A jazz combo is small with no conductor.
Each player independently has great chops but listens to what the others are doing and incorporates what he hears into his own voice. In physics speak, the individual elements in the system interact through all-to-all coupling. What emerges is music that a priori could not be predicted nor done alone. And when this toing and froing is at its best, there is joy in the air. At the same time, each player is free to perform individual solo concerts. The rub lies in creating an environment in which it happens and that persists over decades. There is no overall plan, unless you accept Ted’s characterization, “We plan for the unexpected”. There certainly is no programmatic management structure. Each of us picks our own projects and finds funding for them. The magic arises by virtue of the intimate communication between us (and all the students, postdocs and visitors) that occurs in the Friday KGB group meetings, and by having a lot of shared space and equipment, which fosters interaction. And we must not forget the annual KGB Beach Party. The rules are simple. If you are a member of the group, you must speak in turn about your work, some new idea or a possible new experiment. At the same time, any member of the group can approach any other member, seeking expertise, relevant knowledge or possibly a collaboration, and in the case of the students and postdocs, career advice. In addition, in the case of KGB there seem to be some common characteristics that I believe are important:
• A shared passion for physics of all kinds, with a special affection for superconductivity.
• A devotion to seeking excellence, individually and collectively.
• A determination to be leaders, to be creative. Belief in the adage, “The best waves to ride are the ones you make yourself.”
• Deep mutual respect and good personal chemistry. We really do like each other and have each other’s back. And it is likely good that we are well separated in age.
• Have smart, strong wives and healthy loving families. (Seriously. Think about it.)
Ok, I hope this helps, but in the end there still is magic. It’s like great art, hard to define but one knows it when one sees it.
Happy Birthday, Ted

[Photo caption:] KGB on a mountain near Eilat. (Fittingly for a picture from Israel, the ordering is BGK)