Photo courtesy of Stephen A. Fulling
This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Stephen A. Fulling by David Zierler on July 8, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/46816
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Stephen Fulling, Professor of Mathematics and of Physics and Astronomy at Texas A&M University. Fulling explains the history of why his primary academic department is math and how the field of general relativity became more directly relevant to observational cosmology in the 1960s and 1970s. He recounts his middle-class upbringing in Indiana and his dual interests in math and physics, which he developed during his undergraduate years at Harvard. Fulling discusses his graduate work at Princeton, where Arthur Wightman supervised his research. He explains the contemporary controversy over the Casimir effect and his interest in the Minkowski vacuum, and he discusses his postdoctoral appointment at UW-Milwaukee. Fulling describes his work on Riemannian spacetime and Robertson-Walker spacetime, and he explains the opportunity that led him to the University of London, where black holes were a focus of research. He describes meeting Paul Davies and Chris Isham and how the field started to take black holes seriously as observable entities in the 1980s. Fulling explains his longstanding interest in asymptotic expansions, and he surveys more recent advances in the Casimir effect. He reflects on the Unruh effect as it approaches its 50th anniversary, and he addresses the disagreement on whether or not it has been observed and whether the Unruh effect implies Unruh radiation. At the end of the interview, Fulling discusses his current interests in the soft wall problem and acceleration radiation, and he explains his ongoing interest in seeing advances in research on Casimir energy.
Okay. This is David Zierler, Oral Historian for the American Institute of Physics. It is July 8th, 2021. I am so happy to be here with Professor Stephen A. Fulling. Steve, it's great to see you. Thank you for joining me today.
I'm happy to be here.
Steve, to start, would you tell me, please, your title and institutional affiliation?
I am Professor of Mathematics at Texas A&M University, and also Professor in the Department of Physics and Astronomy at Texas A&M. That is a courtesy, or honorary, appointment, since my degrees are in physics, and most of my publications are in physics, but all of my teaching is in mathematics, except for serving on graduate PhD committees, and things like that.
That does beg the question right off the bat, Steve, why your primary appointment is in the math department and not the physics department.
Oh, well, in my undergraduate days, I took as many math courses as physics courses. I was always interested in the mathematical aspects of physics. My PhD was at Princeton under the supervision of Arthur Wightman, and many of his students were much more mathematical than I am. We can talk about that in more detail later if you want to come to it. And then, I moved in the direction of general relativity as opposed to quantum field theory, and by the time I finished my degree, the people who were most interested in what I was doing were the general relativity people, not the rigorous quantum field theory people. Then I finished my PhD in 1972, which was a period of recession in jobs in physics in the United States. They were hard to come by. I got a postdoc at the University of Wisconsin-Milwaukee in the physics department for two years, and then in 1974, I had to find another job.
So, I applied to postdocs in various places, and ended up in the Maths department of King's College London, where they had a physics group that was doing general relativity and quantum field theory in the style that I was interested in doing and competent to do. But it was located in the Maths department as opposed to the Physics department, which was in the same building two or three floors down. The physicists there -- the theoretical physics there was mostly condensed matter as opposed to what I was doing. They had a strong tradition in that mathematics department of doing relativity. So, in 1976, the job shortage in physics in the United States was still in progress, and so I started applying to various physics and math departments. I think the fact that I was located in something that was technically called a math department might have made me more attractive to math departments than might have been the case otherwise.
So, I applied to, among many other places, the math department at Texas A&M, where I already had a contact. One of my Princeton classmates, Francis Narcowich, had been at A&M ever since he got his degree from Princeton in 1972. So I knew him, I corresponded with him, and he recommended me to the department head who looked at what I was doing and said, "This is good," and was apparently quite happy to hire me. So, I went into the Math department, and I had to learn how to teach calculus, which from my point of view, from my background, was probably easier than learning how to teach freshman physics with labs. So, I saw no reason to try to leave there as long as they were happy to have me. And although there are very few universities where the communication between the math department and the physics department is as good as it was in Princeton, I think in Texas A&M, it's good enough to have a productive -- not always collaboration, but contact, going back and forth from one building to the other for seminars and colloquia and graduate oral exams and so forth.
Steve, just to get a deeper appreciation of your sensibilities, administrative distinctions aside, at the end of the day, are you fundamentally a physicist or a mathematician? Or perhaps a better way of asking that question, are your primary questions in physics, and you use mathematics to answer those questions, or alternatively, are your primary questions in math and you use your physics expertise to answer those mathematics questions?
Well, I think both are true, but to work on the second horn of the dilemma at the moment, I wouldn't say so much that I use physics questions to solve math problems, as that I use physics questions to motivate math problems. There are lots of problems that I'm interested in from a purely mathematical point of view, but most of them arose or attracted my attention because of some physics problem I was working on. So, I would say that both of those things are true. So, I'm going to be both a mathematician and a physicist, and cynics would say I'm neither. I always joke, when somebody asks me, “What is a mathematical physicist? You call yourself a mathematical physicist”, I say what it means is that when I submit a paper to the proceedings of a math conference, it gets banished to a section in the back of the book called either applications or heuristic considerations. And when I submit a paper to the proceedings of a physics conference, it gets banished to something in the back called either mathematical methods, or formal developments. Nevertheless, I have a good reputation, I think, in both fields.
Right. Steve, a nomenclature question. I've heard the terms physical mathematics, and mathematical physics. Are these meaningful distinctions to you?
Physical mathematics, to the best of my knowledge, is not used as a description of a research field or a professional field. It appears mostly in the titles or the advertising for textbooks at the graduate or upper-level undergraduate level. Mathematical physics, in this country at least, refers primarily to the sort of work that Wightman and his students were doing in Princeton in the '60s and '70s when I was there. The more specific tagline for most of what they were doing is axiomatic field theory. So, studying quantum field theory from a very rigorous point of view in reaction to the unpleasant reality that, in the previous generation, to get any answers whatever out of quantum field theory, one had to play awfully fast and loose with the logic of mathematics. It was felt that when you have very few direct experiments for the basic structure that you're assuming, then you had better be very careful about the logic, otherwise you might be producing just nonsense.
And I think the same is true, to some extent, of general relativity, which at the time, when I started, was still regarded with considerable suspicion in American physics departments. There was a tradition of doing it in mathematics departments, because that was the only place that those people could get jobs at the time. But there was an explosion of prospective applications of general relativity in astrophysical problems around 1970, and the modern era of relativistic astrophysics and cosmology began and became much more respectable in physics departments. So, both the field theory and the general relativity have always had a very strong mathematical component. So, you have people who are called mathematical physicists because they're in mathematics departments, but they're doing physics or something that's very applicable to physics. And you have people in physics departments who are studying physics by highly mathematical methods. And they can sometimes be the same people.
Steve, you touched on something that I wanted to ask, and that is your perspective on the relative interest in general relativity over the course of your career. Have you seen it, particularly in the light of all of the excitement with regard to LIGO and what's going on today in cosmology, do you see the interest in general relativity as waxing and waning over the decades, or has it been steadily increasing?
Well, from decade to decade, or half decade to half decade, I agree that it waxes and wanes. But as I said, there was a huge uptick in the 1960s and '70s, when it became more directly relevant to observations. And also, became relevant to fundamental theory. It used to be that the people we then called elementary particle theorists -- now they're more often called high energy theorists -- would start their public elementary talks by saying there are four fundamental forces in nature: gravity, electromagnetism, the strong force, and the weak force. And then they'd cross gravity off the list and say they don't have anything more to say about that, and then they'd go on to talk about elementary particles. But then, with the string theory revolution of the mid '90s -- or mid '80s, I guess it was -- things turned around and you had a sizable percentage of the people working in that area say they believe gravity is the most fundamental of all, and everything can be built out of gravity in some sense, if by gravity you mean some higher dimensional theory like string theory. So, that, perhaps as much as the astronomical connection, was responsible for an upsurge of interest in general relativity.
There came a time, as you probably know, and I wasn't quite -- since I had gotten myself into a math department and was at a junior level where I wasn't responsible for anybody's grant or salary except my own, I wasn't aware of what was going on at NSF, but I heard from other people that the field became so popular that NSF decided to call a halt, and told the researchers to practice birth control and not crank out so many PhDs in general relativity because they weren't going to be able to find jobs. Then, five or ten years later, they reversed themselves, and said, "Now we desperately need people," to work on things like LIGO and all the relativistic astrophysics that was finally bearing fruit. But don't ask me for any historical footnotes on that. That's just my impression.
Steve, a very broad question about your overall research agenda. Where do you see overlap between your work on general relativity and quantum field theory, and where do you see these areas of research as discrete?
Well, certainly each one of them can and really does exist as something discrete. But I've always been -- always is not the way to put it. From the beginning, I have been very interested in combining them. In my graduate school days, I was very interested in what would happen with standard quantum field theory if one moved into a curved spacetime. So, I was mainly interested in taking a standard quantum field theory, usually a free quantum field theory, against the spacetime background. The spacetime background could depend on time, but it would not be itself a quantum field. So, quantum field theory in curved spacetime is what I call that. So, most of my early work was in that, and from that, I developed interest in a number of other things that are less related to gravity.
For example, some of the same issues that arise in that curved space quantum field theory also arise in flat space quantum field theory with boundaries. So, that's why I've done so much work in more recent decades on the Casimir effect, and similar things, which has nothing to do really with gravity directly. Although, I think that also can be combined. I've also collaborated on papers with the title “How Does Casimir Energy Fall?” for example. Trying to see what happens with Casimir calculations when you do have an external gravitational field. So, that's one thing that's not really gravitational, but it's something that has grown out of my gravitational work. Likewise, I use lots of asymptotics, the branch of mathematics where you, crudely speaking, expand everything in power series. It's much more interesting than that sounds.
So, I've tried to do from time to time some pure or generic mathematics in that field without any particular physical application in general, not to mention the gravitational one. Likewise, spectral theory: if you have a non-trivial gravitational background, or even any non-trivial potential or a space-dependent or spacetime-dependent coefficient in your differential equation for the field, then you're going to be led to replacing Fourier transforms and Fourier series by more general eigenfunction expansions. So, I had to learn about the general theory of expanding things in eigenfunctions -- of course, advanced undergraduates learn that under the name of Sturm-Liouville theory, but that's only a special case for one dimension, and usually for problems on finite intervals where you only have [a] discrete spectrum. But there's also a very interesting theory that develops when you are on infinite intervals, and you again have a continuous spectrum. So, you have some generalization of the Fourier transform, but you have special functions or whatever in place of the sines and cosines. And the problem of getting the right normalization in the formulas is non-trivial. So, that's been one of my favorite subjects over the years. I'm perhaps rambling. I'm not sure that I answered your original question.
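[Editor's note: schematically, the normalization problem described here is that of determining the spectral measure in a generalized eigenfunction expansion. For a Sturm-Liouville operator on a half-line with continuous spectrum one has, under suitable assumptions,

$$ f(x) = \int_0^\infty \varphi(x,\lambda)\,\tilde f(\lambda)\, d\rho(\lambda), \qquad \tilde f(\lambda) = \int_0^\infty \varphi(x,\lambda)\, f(x)\, dx, $$

where $\varphi(x,\lambda)$ is the eigenfunction with spectral parameter $\lambda$ and $d\rho(\lambda)$ is the spectral (Titchmarsh-Weyl) measure that carries the normalization; in the free case this reduces to the Fourier sine transform, and finding $\rho$ for a more general equation is the non-trivial step.]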
Steve, I'll ask you to wear your allegiances on your sleeve. Ultimately, do you believe that string theory offers the clearest path to developing a theory of quantum gravity?
I will plead neutrality. I serve on dozens of graduate committees of students who are working in string theory, so I have of course learned a lot about it, but I don't work on it, and I withhold judgment. And I'm not trying to sit on the fence. I don't have any strong opinions, really, either way.
Are there alternatives to developing theories of quantum gravity? In other words, if not string theory, then what?
Well, I'm not the right person to ask because at Texas A&M, the people who are concerned about those things are the string theorists. But if you go to Penn State, for example, you'll find people who do something else, just to pick one at random. Or University of Cambridge, and other places in Europe that do things called causal sets, and such, which I really don't know about. So, I'm not going to say anymore.
Steve, when did black holes become real for you?
Again, because of my mathematical bent, there's a sense in which they were real from the beginning because they were natural solutions to the equations that presented interesting problems. So, I sort of internalized it as something one talked about, especially given that I had that postdoc in England in the mid '70s, where black holes were really a hotbed of activity at University of London and Cambridge and Oxford. I traveled frequently to Cambridge and Oxford for weekly seminars because the train trips were reasonably short, and I didn't have a teaching duty, so if anything was interesting, I could always spend the day going to the neighboring place. But somewhere along the line, when I wasn't necessarily keeping tabs on the developments in observational astronomy, the tide turned, and the astronomers became convinced that black holes actually were real.
So, if you come up for air from the mathematical depths occasionally, you find that the world has changed a little bit. As was the case also, similarly, for the dark energy and the exponential expansion of the universe in late times, which was predicted during this postdoc period in the 1970s in the paper by Gunn and Tinsley, which made a splash at the time but then was pretty much forgotten. And then, sometime in the '90s, the observations were made that won the Nobel Prize for Adam Riess and Perlmutter and those people. After that, everybody in physics knew about it, but I wasn't quite aware of the significance of that even at the time. It wasn't until a couple of years later I realized that people were talking about this in a different way. And I would ask, was this what Gunn and Tinsley did? And they'd say, who were Gunn and Tinsley? They were pretty much forgotten.
Well, Steve, let's take it all the way back to the beginning. Let's go back to Indiana, and let’s start with your parents. Tell me about them.
I would say my upbringing was very midwestern, middle-middle class. We were definitely not rich, but we weren't by any means impoverished. I was just brought up to be careful with a dollar. My mother was a college graduate from Evansville College, which became later the University of Evansville, best known for its basketball team. She majored in, I think, French or maybe foreign languages in general. She didn't learn much science and became a librarian. My father did not graduate from high school, but once he got a job he became quite a responsible citizen, and he worked for Chrysler Corporation for many years in various office positions. Eventually, it was a two-income household when my brother and I were old enough for my mom to go back to work in the library.
I had no regrets whatsoever about my parentage and my early family life. I think I was quite fortunate. They supported me and my intellectual and academic ambitions all along the way. I went to high school in the southern suburbs of St. Louis -- Lindbergh High School, named after Charles Lindbergh. Among public schools, it's rather highly regarded in the state of Missouri. That's not the elite corner of St. Louis County, but it is recognized as a good school district, and I did quite well there and ended up at Harvard with a National Merit Scholarship. That's not quite the correct description, because I scored well on the National Merit test, but I didn't get one of the scholarships from the National Merit Corporation. It was a sponsored scholarship from Chrysler Corporation, which was intended for children of employees. So, I was, I think, quite fortunate there, and it helped quite a bit in getting me through college.
Steve, when did you start to get interested in science? Was it as a young boy?
I would say so, but I would say I start everything as a generalist and then become more specialized as I go along. Science was of interest, but so were lots of other things. In high school, I was pretty much convinced that I was going to be some kind of a scientist, but I was also interested in other things. I was on the debate team, and at one point when I was -- I was interested in politics, is what I'm saying. Social and political issues as well as in science.
I also, at the high school stage, became fairly interested in foreign languages. I remember at some point when I was interviewing for colleges, the interviewer asked me what I wanted to major in, and I said something about physics or astronomy, and I said I'm also interested in foreign languages. And he said, "Well, it's sort of hard to do both. You don't have enough time in your curriculum to take a lot of those languages." That was probably correct, but it could have been a self-fulfilling prophecy. I don't know. It might have discouraged me somewhat from doing something that I might have done. In high school, I took two years of Latin, and I started studying Russian in summer school during those years. That was enough for me to quiz out of the freshman Russian class at Harvard. So, I took the second year of Russian at Harvard, and ended up with a decent reading knowledge for scientific purposes. The Princeton math-physics library had lots of journals and books in Russian, so I actually did read things in the original which is much harder to do here in Texas where very rarely does a library bother to buy something that's not in English.
Steve, was Sputnik a big deal in your life?
Well, it was certainly widely publicized at the time. I don't know how much it influenced me personally. I don't remember coming to having some kind of epiphany on the basis of that, but it certainly was in the background of what was going on.
When you got to Harvard, were you more interested in pursuing math or physics, or was it a dual interest right at the beginning?
That was sort of what I was going to talk about, the narrowing of interest. I said at that time I wanted to major in astronomy. So, I'm listed in the freshman book as an astronomy major. I took the yearlong astronomy course as a freshman for which I didn't quite have the background. So, I think that was my hardest course freshman year, and I postponed taking a physics course. And by waiting until my sophomore year to take the physics course, I was able to take a more advanced -- not a more advanced course, but shall we say the elite version of the beginning course. They had two different versions. There was a standard one, Physics 12, and there was a more high-powered one, Physics 13. By waiting a year, I think I managed to survive Physics 13. I'm not sure I would have otherwise.
Meanwhile, in my day, you didn't take calculus, per se, in high school. The more recent structure of AP classes, I don't remember existing. Instead, what we had was, after two years of algebra and one year of geometry, in senior year, if you, like me, were determined to take a math course every year, then in the fall, you had trigonometry, and in the spring, there was a semester left over which was devoted to a course called Mathematical Analysis, which used a textbook titled Foundations of Advanced Mathematics, or FOAM, we abbreviated it. It covered analytic geometry, and the very beginnings of calculus. So, I learned some basic calculus, but it did not -- nowhere in the trig or the calculus class was there a decent treatment of exponential and logarithmic functions. So, when I got into the astronomy class and some other places, that was a weak point. It caused me a little bit of panic at the time. But I knew that I needed to start by taking a freshman calculus class. There was no question of advanced placement. But again, Harvard had two tiers. There was the math course that was pretty much open to everybody, and then there was a higher-powered version which covered, in two semesters, what the other sequence did in three.
So, that was what I thought I should take. But then I had to be interviewed by somebody. In fact, this course, the year I took it, the fall semester was to be taught by a man named John Tate, who was a famous number theorist, but I didn't know that at the time. I just knew he was a good calculus teacher. And then the spring semester was slated to be taught by Raoul Bott, another famous mathematician of differential geometry and group theory. But it turned out, Tate decided he'd like to teach the entire course. So, he took over and did the spring also. But in order to get into the course in the first place, I had to be interviewed by one of them, and I drew an appointment with Bott. Again, I had no idea who he was, but he interviewed me, and said, "You know, to succeed in this course you have to be quick." But how did he know whether I was quick or not? I thought the purpose of the exercise was that they would try to scare me away so that, so to speak, they would have no liability in case I flunked out. And I decided I'd go ahead and risk it. I think neither I nor the professors and the academic bureaucrats had much of any idea, any way of judging what my success level would be, except maybe for SAT scores and things like that, and high school grades. But I think taking that more advanced course was one of the best decisions I ever made. I really loved it.
Who, at the end of your undergraduate experience, were the most important professors for your intellectual development?
In undergraduate work?
Yes.
I can think, well, Tate for this math course. The physics course, the second year, was taught in part by Edward Purcell -- also, another part of it was taught by Daniel Kleppner, who was junior faculty at that time. Later became a well-known atomic experimentalist at MIT. By that time, I had changed my major from astronomy to physics.
What other people were there that were influential? Well, in my senior year, I took -- at Harvard, there was no sharp division between undergraduate and graduate courses. If you had the prerequisites for a graduate course, you could take it as an undergraduate without any fuss. Whereas at Princeton, that was not true. They came in undergraduate and graduate versions. I think the rationale there was that Ivy League undergraduates -- there were some Ivy League undergraduates who were quite capable of taking graduate courses, but there were some graduate students who had come from weaker backgrounds who might have been demoralized if they had had to compete with these undergraduates. So, they were kept segregated. That was the story that I heard later on, but I ramble.
I took what would normally be a first-year graduate course in electromagnetism, that is, the Jackson course. That was taught by Shelly Glashow. I also took the year-long first-year graduate course in quantum mechanics, but because of some course conflicts, my two roommates and I petitioned to take that as an independent study. We picked out Sidney Coleman as the target, and he agreed to do this, although he was not teaching the regular course. But he agreed to supervise us out of the textbook by Messiah. I think we also switched at one point to Gottfried in the second semester. I think those two were quite influential. Sticking to physics and math, those were the people who had the greatest influence on me, I would say.
Steve, what advice did you get about graduate school? Specifically, did you consider staying at Harvard?
A lot of what I learned about graduate schools did not come from any official source, but from student scuttlebutt. Where did my fellow students learn about it? I don't know. I think I was somewhat reclusive and introverted compared to some of the others. In particular, one of those roommates I mentioned whose name is Harris Hartz -- who's now a federal judge in New Mexico -- but at the time, he and I intended to go to graduate school in physics, and he was the sort who was able to find things out. He was one of the people who clued me in about what to do at what time, and so on, for applying to various places. But I'm not sure he was the one who told me about Princeton. Somebody else -- somebody on the faculty remarked -- in fact, I think it was actually somebody who was in the administration of my dorm group, what was at that time called Assistant House Master, now replaced by some other term regarded as more politically correct. He was not in the sciences, but he knew from advising people that if you were good at theoretical physics, you probably wanted to go to Princeton. I think he was the one who planted Princeton in my mind, although it was also things that we talked about with the physics majors of my year and the year that preceded us, those who were seniors when I was a junior. So, that was where I had my primary goal there, but I applied to a number of other places.
I don't think, in the end, I applied to Harvard. But I applied to Princeton, Caltech, something else that I've forgotten, and State University of New York at Stony Brook, which was just getting started then and had a good reputation. They were famous for making C.N. Yang, at the time, the highest paid college professor in the country. He also was director of a center for theoretical physics there, and something that was called a splinter from Yang's chair was a fellowship, very well-endowed. I applied for that and was offered it, but in the meantime, I'd been accepted to Princeton. So, it turned out I didn't go to Stony Brook at all. I did later on, for a semester leave of absence in the '80s. But I certainly have a good impression of the place.
Steve, to go back to the question about general relativity, obviously your perspective is much different as an undergraduate compared to a graduate, but would you say there was more interest in general relativity at Princeton than there was at Harvard?
Yes, definitely.
Who was driving that?
John Wheeler was the father-figure. Also, on the more astronomical or observational side, there was Jim Peebles and Bob Dicke, and Wheeler had a number of younger people -- Remo Ruffini, Karel Kuchar, and Jim York, who were all assistant professors at about the same time when I was there as a grad student. And on the field theory side, there was Valya Bargmann, who taught several courses that I took. He didn't teach general relativity, but he had a background because he had been Einstein's assistant at some point, way back in the day.
How did you develop the relationship with Wightman?
Well, somewhere along the line I think I learned that he was the resident mathematical physicist. He taught the quantum mechanics course that I took in my first year, which was technically a second-year course, because as I said I had already had the standard first-year course from Sidney Coleman. He taught that course, so I got to know him right away. I think it was already in the spring of that first year that I talked with him and asked him to be my dissertation supervisor, and said I was interested in combining quantum field theory with general relativity. I don't remember exactly how much emphasis I placed on that at the time. The sequence of events, I'm not quite sure, but I think already that year I was framing the project of constructing quantum fields in a curved background, but specifically the de Sitter space background, which has just as many symmetries as flat space, Minkowski space. So, the symmetry group has the same number of parameters. And the idea was that by doing this you could construct an interacting quantum field theory in this universe in which the spatial sections are compact, and finite volume, and so on, and then take the limit as the radius went to infinity, and as a limit [to] get a theory that lived in Minkowski space.
If you had done it right, this theory should automatically be invariant under the Lorentz group. Whereas if you try a more obvious kind of spatial cutoff, the cutoff would destroy the Lorentz invariance. So, the theory that was constructed along the way would not be Lorentz invariant, and at that time, the people who were doing that -- basically, James Glimm and Arthur Jaffe -- had made considerable progress on the Phi^4 theory with cutoffs, and then taking limits. But they had trouble proving to their satisfaction that it was actually Lorentz invariant. Now, I had almost no understanding of the technical issues involved there. So, everything I said was somewhat naive, but I had the idea that if I could do what they were doing in de Sitter space and then take the limit, then I would get their answer, and it would obviously be Lorentz invariant because it had been de Sitter invariant all the way along. So, that was what I proposed, and it had nothing to do with general relativity from the point of view of cosmology and real astrophysics. It was more that purely technical motivation. But I never got that far because it turned out that the -- now, I'm getting ahead of the story.
Let me go back and answer your question about Wightman, because the next point is that during my second year, he went on leave to France. So, he was away for my entire second year, when I was working intensively at the library learning everything I could about the de Sitter space and the de Sitter group, and how to do quantum field theory. So, I was very much on my own at that point. So, of course, I didn't have much contact. There were some letters, but I didn't have direct contact with him until the third year when he came back. But what I found for de Sitter space is that what made de Sitter space really interesting was that although it had this very large symmetry group, there was no element of the symmetry group that globally corresponds to time translations. I'll try to show with my fingers. If you look at some point in spacetime at what you would call the future time direction, and you know what it means to go from one time to another time leaving the spatial coordinate fixed, if you look around for a group element that does that for you, you'll end up with one that, instead of advancing from one time to another along this cylindrical spacetime, tilts. It goes up on one side and down on the other. That means that in any field theory that would make any sense, that group element -- the Hamiltonian, the generator of the time translations, would not be positive because on the back side of the universe it's moving things in the negative time direction instead of the positive time direction. So, that means that one of the principal axioms of quantum field theory did not hold. Normally, you start by assuming that you have a Hamiltonian that's positive definite. And that wasn't going to hold in this case.
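[Editor's note: the geometric point can be stated compactly. In global coordinates the de Sitter metric is

$$ ds^2 = -dt^2 + \alpha^2 \cosh^2(t/\alpha)\, d\Omega_3^2 , $$

where $\alpha$ is the de Sitter radius and $d\Omega_3^2$ is the metric of the unit three-sphere. The candidate "time translation" is a boost of the embedding space; the corresponding Killing vector is future-directed and timelike only within a single static patch, becomes null on that patch's horizon, and points toward the past on the far side of the spatial sphere, so its generator cannot serve as a globally positive Hamiltonian.]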
In fact, the very interpretation of the quantum field theory, free quantum field theory, in this situation was by no means obvious. So, here, you can say that the physics is again taking over from me. I became very interested in this problem of how to make sense of a quantum field theory in the curved spacetime, in particular the de Sitter case where what you would expect to be the natural thing to do was not quite working. I was more interested in that than in the technicalities of proving theorems. I quickly learned that I was not a good “hard analyst”. I may have been very interested in mathematics, but I was not at all that skilled in proving that things were less than epsilon when I needed to do it. I was much more adept at fitting the mathematics and the physics together and with luck resolving problems of interpretation, and at least discovering them and showing that things one might have naively thought were true were not going to be true. So, I never got around to constructing interacting quantum field theories from an axiomatic point of view in de Sitter space. I entirely worked on free theory, first in de Sitter space and then in more general spacetimes. The thrust of the research therefore moved in a different direction that was, as we briefly talked about before, more along the lines of what people needed in doing astrophysics and cosmology in situations where quantum theory was going to become important. This was before the big explosion associated with Stephen Hawking and the black hole evaporation. That happened while I was a postdoc in Milwaukee after I got my degree. But I was very well-positioned to work in that area.
What was Wightman's style like as a mentor? Was he hands-on? Did you contact him frequently?
Not as often as I wanted to. I would say that he let me pretty much go in my own directions. He had just come out of a period in which he had been very heavily loaded with students. He had, I think, over ten PhD students at the same time. But that backlog was being worked off at the time that I knew him. So, the number of students that he dealt with was reducing at the time. As I said, he was away for one crucial year, and that got me used to doing things on my own. And he was not a relativist, so sometimes he wasn't in a position, really, to provide detailed guidance on that aspect of what I was doing. But he dutifully read my manuscripts as I produced them, and gave valuable, critical comments. There was a period toward the end where I wouldn't say our relationship was strained, but, as I put it, the things that I think are important in my thesis are not the ones that he thinks are important. Lots and lots of graduate students say that when they get to the stage of the last two years, but --
Steve, was there anything going on in the world of observation or experiment that was relevant for your thesis research?
Not really. I would say that even -- as I said before, the experimental verification of black holes, or even the plausibility of them, was not well-established at the time. Not as much as it became later. When I started working on the Casimir effect, what was involved there was -- one of the reasons for doing it from the point of view of those of us who were doing quantum field theory in curved space was, “Gee, this is something that they can actually measure. This is actually observable!” That was a rare privilege for us. But for the people who were doing experiments, the Casimir effect was still controversial at that time. The experiments were, as I said, consistent with the theory, but they didn't confirm the theory.
It wasn't until the 1990s sometime that the experiments became good enough to confirm the basic outlines of the theory. And there are still problems in that field with the -- well, I shouldn't get into the details, but there are still disagreements between theory and experiment that are not gross, but they're large enough to be worrisome. We had the situation where -- this is the “Drude versus plasma” problem, if you've heard about that. If you try to model vacuum energy in situations with realistic materials, not just assuming that you have perfect conductors or ideal dielectrics, but a better model of the conductivity, then the simplest model is called the plasma model, and it works fairly well. But we know from other aspects of conductors that the plasma model is not adequate, and a better model is called the Drude model, which has an extra parameter in it. So, you use the Drude model with a reasonable value for this extra parameter, depending on which experimentalist you talk to, that ruins the -- it makes the results worse as far as agreement with experiment. But if you talk to other experimenters, they don't agree. So, it started as a squabble between theorists, and it ended up as a squabble between experimentalists that has bothered the field for the past ten or twenty years. I don't participate in that directly.
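[Editor's note: in standard notation, the idealized result underlying this work is the vacuum energy per unit area between two parallel, perfectly conducting plates a distance $d$ apart,

$$ \frac{E}{A} = -\frac{\pi^2 \hbar c}{720\, d^3}, $$

the attraction whose experimental confirmation in the 1990s is mentioned above. The "Drude versus plasma" issue arises when the calculation is generalized, via Lifshitz theory, to realistic conductors; schematically the two models assume the permittivities

$$ \varepsilon_{\text{plasma}}(\omega) = 1 - \frac{\omega_p^2}{\omega^2}, \qquad \varepsilon_{\text{Drude}}(\omega) = 1 - \frac{\omega_p^2}{\omega(\omega + i\gamma)}, $$

where $\omega_p$ is the plasma frequency and $\gamma$ is the extra relaxation parameter referred to. The physically more realistic Drude form is the one whose predictions some precision experiments appear to disfavor, which is the puzzle described in this answer.]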
What would you say were your principal conclusions in your thesis research, and how did your research contribute more broadly to the field at that point?
I wrote a very lengthy thesis which was sort of a data dump of everything I had learned, ranging from group representation theory to the details of defining field operators and so on, both in de Sitter space and in more general situations. And I typed it on punched cards, because we had a program -- a primitive version of what later became Troff, which was in turn superseded by TeX. But it was something that produced text on a standard computer printer, but to do equations with subscripts and superscripts, you had to put the subscripts and superscripts on the line above and below. So, the equations were spaced out hugely. Nowadays, one would call it an aesthetic abomination, but at the time, it was wonderful that you could do that at all instead of having to write in all of the superscripts and subscripts and integral signs and so on by hand.
A friend and I wrote a program to automate this placement of the superscripts and subscripts. It was an auxiliary program that you ran beforehand to produce new punched cards that went into the file with the text punch cards. But the point I was going to make is that because this method of printing was so expansive, it made the thesis come out at more than 500 pages. It wasn't quite as long as it sounds, but it's quite a hefty object, and there were still enough little doodads that couldn't be perfectly replicated by the character set that was available so that I had to go through and add a few little squiggles on every one of the 500 pages. It took about ten hours to create a copy of this thesis. So, you asked me, what did it accomplish? Well, I think it certainly set the problem, and I think probably the only part that was ever published as a journal article was the one that showed that the naive quantization of a scalar field in the coordinates natural for a uniformly accelerating observer, the Rindler coordinates, was not the same as the standard quantization in the entirety of Minkowski space using the usual Cartesian coordinates. So, that was the germ of the Unruh effect, which is closely related to the Hawking effect. Although, what I did, I think, had no influence whatever on Hawking. But Hawking and I both had influence on Unruh, and he came up with his famous result in which he -- I say, the main importance of what he did was to show that the Rindler vacuum that I carefully constructed was not the really interesting thing about the Rindler space. The interesting thing is the Minkowski vacuum.
He looked at the ordinary Minkowski vacuum from a Rindler point of view, which reflects how an accelerating detector would react, and showed that from the point of view of such an experiment, such a detector, the standard vacuum is a thermal state with a non-trivial gas of particles in it. It changed the focus from what I'd done, which was the obvious naive way to get started, to this thermal approach to life, which was also what Hawking was discovering from the black hole metric. So, all those things came together around 1975, '76, when I was in England, and Unruh was getting established in Canada. We were trading letters at that time. In fact, Unruh was in London himself for about a month during the summer, I think. We worked together, so there's a paper by Paul Davies, and Unruh, and me on quantum field theory in the vicinity of a two-dimensional black hole, which was the impetus for longer papers that Paul Davies and I wrote in the Proceedings of the Royal Society in the years to come -- 1976, '77 -- on conformal transformations of the vacuum, and defining the stress tensor, and so on.
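[Editor's note: in modern notation, a detector moving with constant proper acceleration $a$ through the ordinary Minkowski vacuum responds as if immersed in a thermal bath at the Unruh temperature

$$ T_U = \frac{\hbar a}{2\pi c k_B}, $$

the flat-space counterpart of Hawking's black-hole temperature $T_H = \hbar c^3/(8\pi G M k_B)$, with the acceleration playing the role of the horizon's surface gravity.]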
Steve, besides Wightman, who else was on your thesis committee?
Let's see. I think Peebles was there, and Leonard Parker, who became my supervisor, or senior colleague, when I was a postdoc in Milwaukee. There's another stroke of luck. I somehow got acquainted with him, and unbeknownst to me, he had done quantum field theory in expanding universes as a graduate student at Harvard. His supervisor was Sidney Coleman, but I had no knowledge of Leonard at the time when I was an undergraduate student of Coleman. But he was doing something that was very closely related to what I was doing, and published it in the Physical Review around 1971, I think, when I was well on my way of writing the thesis. So, there was a moment of panic that I might've been scooped. But they were sufficiently different that much of what I did in the thesis was not a duplication of what he had done. In fact, he came to -- I've forgotten the exact sequence of events, but I visited Milwaukee at some point, and I went along with Remo Ruffini, whom I mentioned before, as a junior faculty member at Princeton in general relativity, who was invited to give a seminar, or a colloquium, at Milwaukee. I guess, it was his idea that I should go along with him, and he would sell me to Milwaukee as a postdoc. And that's the way it worked out.
So, I headed off with Parker, and I think I already had the job offer before Parker came to Princeton. But he spent that entire year at Princeton, which was my final year as a graduate student. In fact, that was my fifth year. I could not finish in time to get the degree at the end of the fourth year as I originally planned. But I spent my fall semester finishing the thesis, and then I began being paid as a postdoc by University of Wisconsin-Milwaukee during the spring when Parker was there, and we were working together. So, it was a seamless transition. I went from a graduate student to a postdoc without really having any great difference in the nature of my life, because I was still at Princeton doing sort of the same things as before. But then, of course, at the end, I went to Milwaukee for two more years as a true postdoc. So, the point is that Wightman suggested that Parker could serve on the committee since he had the title of a visiting professor at the time. So, he was also on my examining committee. And the fourth member was named Ed Groth, who was an experimental gravity person. Formerly a graduate student, and at that point, a postdoc working with Dicke and Peebles, I think. So, not from the area, but not so far as to be completely at sea. So, an experimental gravity person was a natural person to play that role on the committee. He was actually almost a contemporary of mine, so I knew him first as a graduate student.
Steve, tell me about your postdoctoral research. What did you want to work on next and in what ways was it connected to your thesis research?
Well, I was ready to follow Parker's lead, and he wanted to see whether [the cosmological singularity could be avoided by] the change in the energy of the vacuum that was produced by choosing the state of the field to be not the standard vacuum, but some other state, where the state of each oscillator in the field was not the normal ground state of the harmonic oscillator, but something else, and then do that for many different modes of the field. Then you can produce an expectation value for the energy density and the pressure that will cause the radius of the Robertson-Walker universe, or Friedmann universe if you prefer, to go down and instead of crashing to zero as it normally would, it will turn around and go back up again. Is that possible? Well, yes. If you fine-tune things very well, you can find a state that will behave that way.
So, I programmed the equations for the evolution of the function, the radius of the universe as a function of time, and set it off with this minimum, and let it run in both directions. Well, I set things up so that they had to be symmetric, and ran it in one direction, and had some difficulty because of my naivete about how to handle numbers of very widely differing orders of magnitude. But I was able to produce a solution that ran for many minutes on the computer, and I ended up with a radius that was immensely greater than I started with. And it looks like it was probably going to smoothly join onto a standard Friedmann solution. So, it could be a model of the --
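[Editor's note: the following is a minimal modern sketch, in Python, of the kind of bounce calculation described; it is not the original program. It assumes a spatially flat Friedmann model in units with $8\pi G/3 = c = 1$ and a hypothetical energy density $\rho(a) = C_1/a^4 - C_2/a^6$ whose negative term halts the contraction, so the scale factor turns around at a finite minimum instead of reaching zero; the constants and the form of $\rho$ are purely illustrative.]

```python
# Illustrative sketch only: evolve the scale factor a(t) of a flat Friedmann
# universe whose energy density rho(a) = C1/a**4 - C2/a**6 is tuned so that
# adot**2 = rho(a)*a**2 vanishes at a finite radius, where the universe
# "bounces" instead of collapsing to a = 0.
import numpy as np
from scipy.integrate import solve_ivp

C1, C2 = 1.0, 0.25   # hypothetical constants: radiation-like term, bounce-producing term

def acceleration(a):
    # Differentiating adot**2 = C1/a**2 - C2/a**4 with respect to time
    # gives addot = -C1/a**3 + 2*C2/a**5.
    return -C1 / a**3 + 2.0 * C2 / a**5

def rhs(t, y):
    a, adot = y
    return [adot, acceleration(a)]

a_min = np.sqrt(C2 / C1)        # radius at which adot = 0 (the bounce)
sol = solve_ivp(rhs, (0.0, 10.0), [a_min, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[0, -1] / a_min)     # final radius, many times the minimum
```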
Who were some of your collaborators at this point? Who were you working with?
I think just Leonard Parker at that point. I knew lots of other people at Milwaukee, with whom I had common interests, but I don't recall that I ever collaborated with them.
And in terms of the intellectual partnership, what did you bring to the table and what did Parker bring to the table?
Oh, well, I was about to say that the paper I just described was our first paper together, but then we wrote, actually, a trilogy the next year that was more -- we're going back quite far -- let me consult my publications and see what the names of these things are. Okay, the second journal article in my vita is Parker and Fulling, Quantized matter fields and the avoidance of singularities in general relativity. That's the one I just mentioned. The first one was something I did at Oak Ridge as an undergraduate, and it doesn't matter here [for our purposes]. And then, the third one is Non-Uniqueness of Canonical Field Quantization in Riemannian Spacetime. That was the journal publication of the main result of my dissertation. Then there's a paper that we wrote in collaboration with Bei-Lok Hu, who has been at the University of Maryland for many years, but at that time he was in the postdoc market and floating around and came to visit us once.
Now, this is what I wanted to look at: Numbers six, seven, and eight. Parker and Fulling, “Adiabatic regularization of the energy-momentum tensor of a quantized field in homogeneous spaces.” And then, number seven is Fulling and Parker, “Renormalization in the theory of a quantized scalar field interacting with the Robertson-Walker spacetime.” And the next one is Fulling, Parker and Hu, “Conformal energy-momentum tensor in curved spacetime: Adiabatic regularization and renormalization.” So, all of those were part of, sort of, the successive stages in the same project. The second one was one where I came up with a new idea for how to rationalize what we would now call the vacuum subtraction, or the way to renormalize the energy-momentum tensor. But it was a really very primitive idea at the time, which was replaced by much, much more sophisticated and convincing arguments in the years to follow, in the remainder of the '70s. This paper was published in 1974, and the ones I regard as more respectable came out in 1976, '77, and so on, from my King's College days with Davies and Unruh and Christensen. So, I would say the second paper out of the three was the one in which I had the most original contribution, but I'm not sure that contribution has really passed the test of time. I think it rapidly became obsolete with better understandings of what it means to renormalize energy in curved space. I can't remember any details beyond that of who did what, but I certainly felt that I was advancing Parker's program. He was the boss, but I also thought for myself. Does that answer your question?
Absolutely. Steve, tell me about the opportunity at the University of London. How did that come together for you?
Well, as I said, I was looking for postdocs, and I followed up every lead I got. I don't remember how I knew to apply to the University of London. I don't think I answered an ad or an announcement. There was some more personal communication, grapevine, of senior people, as far as I can recall. But there was a group in the Maths department at King's College, spearheaded by Chris Isham. Technically, the senior member was John G. Taylor, but Isham was really the driving force. Paul Davies was also involved, and a bit further removed, David Robinson also. But with the research grants there, to the best of my knowledge and memory, whether you as a permanent faculty member were affiliated with the grant was not such a sharp distinction. You could work with people even if you weren't firmly part of the group. But the main thing that it provided was salaries for postdocs. So, the postdoc funds were associated with that grant, and I was one of them from 1974 through '76, as was Mike Duff.
And then, during the second year, starting in 1975, Steve Christensen, who had been a student of Bryce DeWitt in Texas. Duff had come from Imperial College along with Isham. In fact, they were almost -- no, that's not true. I was about to say they were the same age, but that's not quite true, because Duff is actually noticeably younger than I am, although I didn't realize that at the time. He just had his 70th birthday recently, and I was going through [my] 75th.
Did you enjoy your time in London?
Yes, yes.
In what ways were the emphases in London different than what you had experienced in the States?
Well, there was quite a bit of attention at that time, as I said, to black holes, and to the beginnings of quantum gravity. Certainly, there was a big difference between the London situation and the Milwaukee situation, because in Milwaukee, although, as I said, there were lots of people around who were doing theoretical physics that I found interesting, the main thrust of such people was various kinds of flat space quantum field theory, either what we now call high energy theory, or applications to condensed matter. Parker and I were essentially working alone. We weren't part of a larger web of things, as was true both in Princeton and in London. So, that was the greatest contrast.
How did the opportunity at Texas become available for you?
I saw an announcement of the position in Science magazine. I was writing to every job posting that I could find that seemed reasonable, and since I knew Francis Narcowich, I also, of course, wrote to him and said I was interested. If I hadn't known him, then the application might have gone nowhere, for all I know.
And you joined the mathematics department initially. That was from the start.
Yes, right.
Did you have any misgivings about that?
To some extent, to change the direction of my research to make it more mathematical. But in the end, I decided just to follow my nose and do what I thought was the right thing to do without worrying about how it was classified. So, there was a period when I was publishing papers in things like the Journal of Mathematical Physics and the Journal of Mathematical Analysis and Applications, but then in recent years I went back more to the physics side of things and published in Physical Review.
Now, did you have that courtesy appointment with physics and astronomy from the beginning?
No, no. That came, I believe, in the year 2000. It says so on the vita, Professor (by Courtesy) of Physics Department, 2000-Present. So, I'd been here 24 years when that happened, and I think I proposed it partly because I was spending so much time serving on graduate committees. I thought it should be somehow formalized, and they agreed. So, I got that status, which is that I get all of their announcements. So, I have twice as much email to read. Fortunately, not twice as many committee assignments. Apart from the graduate committees, they don't put me on departmental committees in physics.
Steve, when did you first meet Paul Davies?
At the summer school in Sicily, in Erice. I think it was the summer after I finished my dissertation. I had just got the degree from Princeton, and the postdoc in Milwaukee didn't really start until the fall. So, I had a summer where I was unattached, and I applied to this summer school, basically a one-week-long conference, or pedagogical conference on relativistic cosmology, or something like that. In fact, the principal topic of discussion at the time was the final act of the contest between the Big Bang and the steady state theory. The organizers of the conference had a lot of investment or background in the steady state theory. So, Fred Hoyle was there, and Geoffrey Burbidge, and an observationalist, Halton C. Arp. The big news was that Arp had discovered -- I haven't thought about this for years, so it'll take me a while to try to get it straight. By making an atlas of peculiar galaxies, as I recall, he had found something that could be interpreted as evidence that the steady state cosmology was more accurate than the standard Big Bang cosmology. So, that was the big bone of contention at the time. Of course, it turned out not to last, but that was sort of the focus of that meeting.
But Paul was there, and a number of us gave talks on our research. So, I got to know him, he got to know me, and if he had any input into the personnel decision, that may have helped me get the job at King's College, too, I don't know. But I ended up working more with him than with Chris Isham or John Taylor, although Chris was very much -- I had a lot of contact with Chris Isham, but I didn't actually collaborate on papers with him. But there were very many really helpful discussions.
Steve, tell me more broadly about your work with Hawking effects and Hawking radiation.
I first learned of it while I was at Milwaukee. As I said, we were somewhat isolated in the area, but I guess it's the preprint version of what eventually became Hawking's short paper in -- what was it -- Nature on black hole explosions. It was circulating, and that surfaced, I think, during my second year there. I particularly remember because before I had any knowledge of that, there was some occasion during the first year when someone came into the department and asked to talk about -- I don't recall whether it was a journalist, or an amateur physicist -- he wanted to talk to somebody about black holes and their connections to black bodies, and he was referred to me. And not knowing anything at all about Hawking's work, I assured him that he was confused, that black holes and black bodies had nothing to do with each other [except the name], and tried to disabuse him of that conception. Later, of course, I realized that in some sense I had sent him on entirely the wrong trail, because that was the whole point of Hawking's work, that black holes did behave like black bodies.
So, anyway, we learned about that work then, and I got to King's College, and my first year in England was the year that Hawking was at Caltech working with Jim Hartle on what became what in the trade is known as the Hartle-Hawking vacuum. It later became clear, really, through the work of Unruh, which we were aware of in preprint form certainly through 1975 -- his famous paper was published in 1976, but the preprint had been floating around for at least a year before that. And the full understanding of the connection certainly developed over a period of many years at the hands of many people. In other words, not merely me, or certainly not in a lead role.
Steve, I wonder if you can explain two terms: the Robertson-Walker universe and Robertson-Walker spacetime.
Okay. Well, they're the same thing. In my terminology, Robertson-Walker is an ansatz for the mathematical form of the metric in a spatially homogeneous, but possibly expanding -- time-dependent -- universe. So, it depends on one function, which is a function of the time coordinate. And that's called the radius of the universe. You can think of it as literally the radius of the universe if the universe is closed, so it's like a three-dimensional sphere. But it also makes sense in the cases where the spatial sections are either flat or hyperboloidal. So, you don't actually have a finite radius to the universe, but in the hyperboloidal case, there's an effective radius of the space. And in the flat case, that's actually more subtle because it doesn't literally have an interpretation as a radius. It's more properly called a scale factor, and you can define it to be unity, or your favorite value, at any one particular time, but then it will be different at a later time. So, you can still talk about the universe expanding. You take two nearby points in space, and as the universe evolves, they will tend to move away from each other. The [derivative of the] scale factor determines the rate at which they're moving away from each other.
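[Editor's note: for reference, a standard form of the Robertson-Walker line element being described here, in notation not the speaker's own, is
\[ ds^2 = -dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\big(d\theta^2 + \sin^2\theta\, d\phi^2\big)\right], \]
where a(t) is the scale factor ("radius of the universe") and k = +1, 0, -1 labels closed, flat, and hyperboloidal spatial sections; the rate at which nearby points separate is governed by \(\dot{a}/a\).]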
And then, in a serious attempt to model the actual universe, you'll put that into the Einstein equation, and calculate, or make an ansatz for, the matter content of the universe and how it behaves as the universe expands, and then solve the Einstein equations with that matter source, and that gives you what's called a Friedmann universe. Friedmann was the physicist who first worked out this sort of spatially homogeneous, but time-dependent, cosmology, and the standard Friedmann universes are those that correspond to particular assumptions about the nature of the matter, usually idealized to some perfect fluid. So, there's a radiation-filled Friedmann universe, and there's a so-called matter-filled, or dust-filled, universe where you neglect the pressure entirely. They're sort of opposite extremes, and there are other things you can fill in between them. But then, when you consider more exotic physics than what the classical cosmologists considered, you can have other equations of state that can give other solutions, still under the same Friedmann classification. For example, if you consider dark [energy], where the pressure is equal in magnitude [but opposite in sign] to the energy density, that is what gives the exponential expansion. Whereas the radiation has pressure equal to only a third of the energy density, and the [dust] matter has no pressure at all. And when you consider effects from vacuum energy, that can change things also, but not appreciably, except in the very early stages of the universe when the radius is very small. That's the only case in which Casimir-like effects could become significant, for example.
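[Editor's note: for reference, the Friedmann equation and the perfect-fluid equations of state mentioned here can be summarized (in units with c = 1; notation not the speaker's own) as
\[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}, \qquad p = w\rho, \]
with w = 1/3 for radiation, w = 0 for dust, and w = -1 for dark energy, the last giving exponential expansion \(a(t) \propto e^{Ht}\) when it dominates.]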
Steve, I'll ask a question similar to one I asked regarding your graduate school days. As we move into the 1980s in your career, was there anything in observation or experimentation that was relevant for your research at that point?
I suppose it was in that period that the black holes began to be taken seriously by the observational people. I would say that was it. If memory serves me, the solid observation of Casimir effects came in the '90s, as also did the exponentially expanding universe, supposedly explained by dark [energy] or the cosmological constant, depending on your taste. That was also the '90s. The '80s may be more of a fallow period. I think that if you look, for example, at the various editions of the general relativity textbook by Bernard Schutz, in the first edition, the cosmologies -- Robertson-Walker solutions and so on -- are postponed to the last chapter so that he can spend more time talking about black holes. I think that black holes were the center of attention in the '80s, which is when he wrote the first -- let me make sure. I can at least look that up. Give me a second.
Sure.
Yes, it's 1985. And the second edition is more recent, in this century sometime. Maybe 2005 or 2010, somewhere in that vicinity. In the second edition, there's much more material added about cosmological solutions because dark energy and dark matter produced a renaissance in cosmology in the late '90s and later on, after the black holes had pretty much played themselves out for a while -- until LIGO. They go back and forth, right? And when I teach, I do the cosmology first, rather than rushing to do the black holes, because I don't see that anymore as being quite as much of a priority as it was in the 1980s. The cosmological solutions in some ways are simpler mathematically, so you might as well do it before getting into the Schwarzschild solutions, and things like that.
Steve, you asked an interesting question in a 1990 Phys. Rev. article, “When is Stability in the Eye of the Beholder?” What did you mean by that?
I think it had something to do with the -- you know, I haven't looked at that paper probably since I wrote it, so I'm having trouble remembering exactly what that wisecrack was about. I remember that it has to do with a cosmological solution. I think there was some kind of a dispute in the literature about whether the thing was actually stable, but I think that the editor of Physical Review insisted that I change the title, or at least add a second line to it. Although, I certainly didn't put that in the CV. Let me grab a copy of that paper, too. You've piqued my interest now. I like to understand my own papers. Again, I think I can quickly find that. Okay, could we take a brief break?
Sure.
And among other things, I will take a quick look at this.
Very good.
[BREAK]
Okay, I'm ready to resume. I was correct. The published version does have a longer subtitle, so I guess I didn't put it into the vita because it's only a three-page paper, and I guess I didn't want to take up too much room with the title. The subtitle is: “Comments on a singular initial value problem for a nonlinear differential equation arising in semiclassical cosmology.” And I have almost no memory of this. 1990, yeah. But what I say in the first three paragraphs is a bit more comprehensible than the abstract. So, there is a person whom I got to know very remotely named Wai-Mo Suen. I think he was a postdoc in Milwaukee at about this time, and he observed that “there exist perturbations with arbitrarily small initial values of the Hubble expansion parameter and the curvature scalar which evolve to arbitrarily large volumes within an arbitrarily short time.” [Continuing to read from the paper:] “This brief report presents a simplified model of Suen's phenomenon, and shows that by a change of variables, the instability can be made to disappear from that model. So, the physical significance of instability hinges on which variables have the deepest experimental meaning. The answer to this question in a cosmological context is not clear.” And then I go on to write down this second order nonlinear differential equation and study its solutions and claim that -- I have an argument on the third page which I'm not going to try to read or summarize. It says, depending on the physical interpretation of the equation, what theory it originated in, you can say that the instability is real, or it's an artifact of a perverse choice of variables.
Steve, more broadly, you've had a long-term interest in asymptotic expansion. I wonder if you can talk a little bit about that.
Well, let me take this opportunity to talk about something that I'm working on now. More precisely, I have a number of undergraduate students working on it, or former undergraduate students. I'm dealing with quantum theory, and again, this sort of thing originates in a cosmological context, but it's strictly quantum field theory in flat space, subjected to some kind of a potential. You can think of it as a space-dependent mass, a Klein-Gordon equation in which the mass depends on location in space. And if the mass squared starts out zero in one direction -- this is working in one space dimension -- and steeply rises in the other direction, then you're in some sense modeling a wall. It would be a potential wall in a standard quantum mechanics problem, but in the relativistic case, the situation is a little bit more complicated.
But basically, think of a steeply rising potential as a confining wall. And we want to study the renormalization of the energy and pressure in that situation. The pressure ought to have something to do with how the energy changes as you move the partition out. That's the standard thermodynamics sort of thing. But when you solve an equation with a crude momentum cutoff on the integration to make everything finite, that turns out not to be necessarily true. So, introducing the cutoff ad hoc is somehow giving you something that doesn't correspond to a genuine, consistent physical system. So, we came up with this soft wall as another way of dealing with such a problem. So, you'd like to be able to find the solutions for the mode functions of that theory with that potential, and then calculate their contribution to the energy density and the pressure, and then integrate over the frequency of the normal modes to get the total energy and pressure of the theory. There are various ways to do that. You can treat it simply as an applied math problem, an engineering problem: This is a differential equation that any competent applied mathematician should be able to solve numerically, so let's do it with a computer. But the equation is sufficiently simple that you would hope that you would be able to look at it in a more analytical way.
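[Editor's note: the thermodynamic consistency condition referred to is, schematically,
\[ F = -\frac{\partial E}{\partial L} \qquad \big(\text{pressure } p = -\partial E/\partial V\big), \]
that is, the force on the wall should equal minus the rate of change of the renormalized energy as the wall's position L is varied; an ad hoc momentum cutoff can violate this, which is the sense in which it fails to describe a consistent physical system.]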
You want to look at the differential equation and construct an approximate solution. There are actually two parameters in the problem. There is a frequency parameter, or more properly, a wave number. It's the reciprocal of the wavelength of the solutions in the field-free region. And then there is the coordinate itself, which we'll call Z; since I'm assuming there's a monotonically increasing wall, talking about the coordinate is the same as talking about the height of the wall at that point. The two parameters appear in the equation as a sum, omega squared plus Z to the alpha, where alpha is the parameter that I use to tell how fast the wall increases. They occur as a sum, so if either one of them is large, the sum is large, and then the WKB approximation for the solution is good. But if they're both small, then the WKB approximation is not very good. However, in that case, you can expand as a power series. In fact, you can expand as a power series in omega, the frequency, in a way that will be good for either small or large Z.
So, you have two different expansions. One is a power series in omega. The other is a WKB expansion, which could be thought of as an expansion in one over omega, for large omega. But it's better thought of as an expansion in one over omega squared plus Z to the alpha, this whole quantity. So, one approximation is good at the small end -- for given Z, one approximation is good for small omega, one approximation is good for large omega. You have to do the calculation and integrate over omega, which really does have to be done numerically. And there's a region in between where neither of them is very good.
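[Editor's note: a minimal sketch of the mode problem being described, assuming a power-law wall z^\alpha and writing the frequency variable so that it enters in the combination the speaker mentions, is
\[ \phi''(z) = \big(\omega^2 + z^{\alpha}\big)\,\phi(z), \]
with the WKB-type (Liouville-Green) approximation
\[ \phi(z) \approx \big(\omega^2 + z^{\alpha}\big)^{-1/4} \exp\!\Big(\pm\!\int^{z}\!\sqrt{\omega^2 + t^{\alpha}}\;dt\Big), \]
good when \(\omega^2 + z^{\alpha}\) is large, complemented by a power series in \(\omega\) in the regime where both parameters are small.]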
So, I've been working on that with a succession of students trying to do it largely analytically with just a few terms in the series. But we realized that was not really giving satisfactory results. So, I have a new student this year who has a good computer science background who decided to cut the Gordian knot and do much more sophisticated computer calculations. He went out and got his own supercomputer account and programmed the series to much higher order than before, and is [now] getting much better results. So, I'm not sure whether that's exactly answering your question, but that was the thing that it triggered for me. But I would say that WKB-type approximations are interesting and valuable for two things. One is that experience shows that they tend to be good approximations much farther in than they have a right to be. This is an expansion around infinity in some sense, but it gives good approximations when the parameter is fairly small. Not really small, but fairly small, of order of unity at least. And the other thing is that that kind of approximation is applicable in the limit which is opposite to the one that's valid for most other approximations you could think of. One thing you might do is a power series in omega; another is a power series in Z, which is the standard power series solution of a differential equation that you find in differential equations textbooks. At least, you used to, before numerical methods took over and engineers became less interested in power series solutions. Also, numerical methods themselves tend to work better if the functions in the differential equation are not oscillating terribly fast in comparison to the length-scale or time-scale of the solution. So, those three methods that I rapidly fired off are valid in one limit, and the WKB-type solution is valid in the opposite limit, which makes it stand nearly alone, with a particularly important role to play.
Steve, what were some of the key advances in the 1990s on the Casimir effect?
First of all, they were experimental. As I said, the old experiments from the '50s through the '70s were not terribly precise. There was a lot of trouble in getting reliable observations with small enough error bars to be really convincing and interesting. But better experiments were done in the '90s. That forced a change in the way theorists looked at the problem, because when you don't have really accurate experiments, you can do calculations for fairly idealized models without getting into obvious trouble. So, you can do perfect conductors or perfect dielectrics. After shoving a few infinities under the rug, you get precise answers that look reasonable. But once the experiments became better, then physicists began demanding that the theory be more convincing and accurate. So, it became necessary to model the materials more realistically. That, from the point of view of beautiful mathematics, was a bad thing because it meant that all of the work we were doing with looking at spectral theory for boundaries of various shapes was no longer quite adequate, and you had to do something that involved looking at the details of the condensed matter more carefully. That's not really what I do, but I think the people who are interested in that field primarily from the point of view of actual experiments are more interested in that than in the more formal things that I do.
Steve, moving into the late 1990s and early 2000s, what was your reaction to the discovery of the acceleration of the expanding universe?
As I said before, since I was in the math department and only -- in fact, I think at that time I was heavily involved in undergraduate educational reform, and was a little bit out of the loop, or wasn't paying all that much attention to research at that time. So, it didn't really register with me immediately that what had happened in 1995 was such a great advance, especially since I already knew vaguely about this earlier work that I'd mentioned by Gunn and Tinsley. I thought it was known, but I didn't realize that it hadn't actually been established observationally until the '90s. So, the importance of it only dawned on me slowly. I don't know that it influenced my research in any way, except I think it influenced the field -- as I said, it attracted more attention back to cosmology, away from black holes. You see the traces of that in the revisions of Schutz's book and other textbooks as they came out in that period.
And when did you start following LIGO?
Well, let's see. Again, it's something that I was aware of in the background, but it became really important when they actually reported results. I don't remember the date, but maybe ten years ago when they started reporting these things. So, there's a lot more attention to those things, and so one hears about them in colloquia and so on.
What was the significance, in your view, of the detection of gravitational waves, not just the theoretical proposition that they existed?
I did take it upon myself to publicize to the other members of the math department what was going on. I forwarded some emails that I received, and stressed that this LIGO discovery did two things at once. First of all, it clearly demonstrated the existence of gravitational waves, which had been controversial for at least a generation before that, going back to the time when Joe Weber did controversial experiments which never really definitively proved that gravitational waves existed. But LIGO did. But at the same time, it also proved -- maybe not that black holes exist -- that would probably be an overstatement. But it did show that these astrophysical objects were giving out the kind of radiation that had been predicted from the collision and merging of black holes. So, it confirmed two theories at once: black holes and gravitational waves, which I think makes it quite remarkable.
More recently, I wonder if you could talk about your research in vacuum energies.
Well, the thing that I described with the soft wall is an example of that. If you just do the standard problem of two parallel plates, for the electromagnetic field at least, you get a constant energy density in the space between the plates. That's for the electromagnetic field. If you do the scalar field, and technically, a scalar field that's not conformally coupled, then you get also a kind of energy that clings to the plates. In fact, the integral of the energy density right up to the plate is actually divergent. So, you have to somehow renormalize the mass of the plate in order to make sense of this thing as a realistic physical system. So, one of the -- the soft wall is one way of modifying the notion of a reflecting plate to be something softer, where you get fewer infinities. But it becomes more complicated calculationally, so that's been one of the subjects of my research of recent years.
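[Editor's note: the idealized parallel-plate result alluded to, for the electromagnetic field between perfectly conducting plates a distance d apart, is the renormalized energy per unit area
\[ \frac{E}{A} = -\frac{\pi^2 \hbar c}{720\, d^3}, \]
corresponding to a constant energy density \(-\pi^2\hbar c/(720\, d^4)\) in the gap; the additional boundary-hugging energy whose integral diverges arises for the non-conformally coupled scalar field described here.]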
Another thing related to that, which is sort of hanging because I haven't had time to work on it recently, is what's called the torque anomaly. As I said, if you put in an ultraviolet cutoff, you get what's called a pressure anomaly. The force calculated from the pressure doesn't agree with the force identified as the rate of change of the energy as the position of the partition changes. But introducing a better cutoff, going to this soft wall, seems to remove that difficulty. But we ran into a similar problem calculating the total energy in a sector between two mirrors. So, instead of moving apart perpendicularly, they pivot at a vertex. Again, we had this problem that the torque didn't match the change in the total energy with respect to the angle. I thought, oh, this is just another situation like the one we had before, and it will disappear if we do the regularization in a different way. I don't want to get into the details, but there's an obvious thing to try that I told my student. I said, "This will cure your problem, if you do the calculation this different way." And she came back and said, "No, it's different numbers but the same problem. It still doesn't match up." And then I went back and looked at an old paper from 1979 by Deutsch and Candelas where they had done the conformally coupled field, which is one in which this pile-up of energy on the boundary doesn't occur. So, there was no infinity to throw away.
I'm sweeping a little bit under the rug there, but I don't want to get into details. In some sense, this case is much simpler physically, not just mathematically. And I looked at the formulas in their paper which purported to be final renormalized quantities and found that this phenomenon still existed. So, there is a mismatch between the torque and the rate of change of the energy with respect to the angle. That is still not resolved to my satisfaction. It has something to do with the fact that the [vertex of the] angle is a very singular point. You would get rid of the problem if instead of considering the angle, you -- let me see if I can draw something with my finger here. Here's our wedge, like this. So, instead of a piece of pie, I take a partially eaten piece of pie. So, cut off a little piece at the bottom. There's a short, highly curved arc down here, and a longer, more gently curved arc at the top, and straight radial lines connecting them. So, now we can say, what happens if you change the angle in that case? Then, there is no sharp angle that's changing. There are angles, but they're entirely right angles at the four corners. So, with Kim Milton at the University of Oklahoma, and his entourage, we studied that problem.
To the extent that we could do the calculations, it seemed that there was no difficulty in that case. But as so often happens, a new difficulty emerges taking its place. There's a contribution from energy along this curved thing at the bottom, which depends on the logarithm of a cutoff parameter. Those things are bad news because they mean that there's no -- if something diverges as a power of the cutoff parameter, you can argue it away as renormalization of something and just throw it away. But when there's a logarithm, then that process becomes ambiguous and introduces a new length-scale into the problem which is not in the original formulation of the problem. So, this annular sector calculation was somewhat unsatisfactory. First, because there is that logarithm contribution which replaced the infinite behavior at the vertex. And secondly, because we couldn't calculate the energy at every point, we could only argue that we had a handle on the total energy. I'm not satisfied with that. So, although we claimed victory on that paper, in some sense, I still think there's a lot more to be learned there. But I ran into a point where the special functions involved became intractable. So, I put it aside for something else and have never really gotten back to it.
Steve, in 2014, you wrote a retrospective essay on the Unruh effect. I'm curious, 40 plus years after you first started thinking about this, what needed to be updated, and what has stood the test of time?
Are you talking about the article in the Scholarpedia?
That's right.
I don't think there's anything from the established understanding of the problem that needs to be thrown out. I think it's just that understanding has improved and become more precise. You might ask Unruh and see if he agrees. It's hard to draw a distinction between improved formulations of conclusions and conclusions that are really new, that one didn't understand before. Since, as you say, it's been going on for 40 years, I sometimes find it hard to remember when I reached the level of understanding of something that I now have.
Steve, on the question of whether or not the Unruh effect has been observed, where are you on that debate?
There are two different schools of thought there, at least. There is -- my coauthor on that encyclopedia article, George Matsas from São Paulo, is among those who can't understand why there is really a need to give an experimental justification. He says, if you believe quantum field theory and you just transcribe things into different coordinates, then this is the result you get. It has to be this way, just like you know from looking at the Newtonian equations, that there has to be Coriolis force and centrifugal force if you describe things in terms of curvilinear and time dependent coordinates in classical physics. It's just describing the phenomena that we know from a different point of view. So, what's the big deal? Do you actually have to mount a camera on a rapidly rotating disc in order to convince yourself that the Coriolis force happens? So, in that point of view, there's nothing to be shown. But there's another school of thought -- you did ask about the Unruh effect, not about the dynamical Casimir effect, right? What I started to say had more to do with the so-called dynamical Casimir effect.
So, I'll backtrack on that. But there is the argument that something related to the Unruh effect has been discovered by looking at the polarization of electrons in storage rings. John Bell wrote an important paper on this. The thing is, it's related to the Unruh effect, but it's not exactly the same thing as the Unruh effect. You can't talk about a temperature unless you allow the temperature to depend on the frequency of the mode you're looking at, which somewhat dilutes the usual notion of the temperature. You have to change the weighting of the Boltzmann factor depending on which mode you're looking at. It's not quite satisfactory. But the advantage of that is there's something that can be observed at realistic speeds, whereas trying to verify the Unruh effect in its simplest form requires very high accelerations that are hard to produce.
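[Editor's note: the Unruh temperature associated with proper acceleration a is
\[ k_B T_U = \frac{\hbar a}{2\pi c}, \qquad T_U \approx 4\times10^{-21}\ \mathrm{K}\ \text{per}\ 1\ \mathrm{m/s^2}, \]
so that reaching even 1 K requires an acceleration of roughly \(2.5 \times 10^{20}\ \mathrm{m/s^2}\), which is why direct verification of the effect in its simplest form is so difficult.]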
Relatedly, Steve, what is your feeling about the implication that the Unruh effect suggests Unruh radiation?
Oh, yeah, that's another controversy. Again, I'm of the opinion that it clearly does. There was Unruh's original paper which said an accelerating detector will detect particles as if there's a thermal bath. Then there was the paper by Unruh and Wald eight years later which fleshed out a remark that was already in Unruh's paper, but it was just one paragraph and wasn't developed. It said that from the point of view of inertial frame physics, the detector is actually emitting particles rather than detecting them.
The second part is the part that has been controversial in some quarters. I tried to understand the controversy, and the paper that I found most readable and comprehensible among the people who say there is no radiation is one by Ford and O'Connell. This Ford is not Larry Ford, but G. W. Ford of Michigan. My understanding of that paper was that they were talking about what happened if and when the detector came into equilibrium with its surroundings. In which case, almost by definition of equilibrium, there's as much energy flowing in as flowing out, so there's no net radiation. And I agree with that, but I don't think that says Unruh and Wald were wrong, because they weren't studying that problem. They were assuming that you had no radiation present, and you accelerated and looked to see what happened. It's just a different situation. But there are other papers that may in some ways be more sophisticated, but the fact that they're more sophisticated makes it harder for me to understand exactly what they're claiming. So, I don't want to pick a fight with anybody over that at the moment. I'm satisfied that I understand what's going on, but I think the different opinions may be some question of definition.
Steve, just to bring our conversation right up to the present, what have been some of the most important areas of research you've been working on recently?
Well, first of all, there is the soft wall problem that I mentioned earlier. I've also -- and this follows right along from what we were previously talking about -- in trying to understand some of the subtleties about the Unruh effect, I became interested, once again, in the classical problem of radiation by an accelerated charge, which in some ways is very similar to the Unruh problem, but instead of having a neutral object with internal degrees of freedom that can emit and absorb energy, you have a charged particle without any internal structure. One would think that was a simpler, more classical problem, but it's never been completely settled to everybody's satisfaction. There are some people who claim that if the particle is uniformly accelerated for all time, then it doesn't radiate at all. That's nowadays a minority viewpoint, but it's very hard to convince some people otherwise. Again, it's some kind of definitional problem, which I have always had the feeling I should be able to sort out if I took the time to read all of the literature. But the literature is vast, and I have some students working in that area right now.
I have a graduate student who's working on the acceleration radiation problem for a Proca field in place of an electromagnetic field. That's a vector field with non-zero mass. The advantage there is that you can turn the charge on and off without violating gauge invariance. If you try to do that with the electromagnetic field, you get a violation of Maxwell's equations, because Maxwell's equations imply charge conservation. You can't turn charge on and off. But if you say we have a massive field, then you don't have a gauge group, for one thing, and when you turn the charge on and off, you do get a violation of Maxwell's equations, but it's just the violation you started with. You have a mass, so the equation is not Maxwell's. It's something more complicated, and it doesn't become tremendously more complicated when the charge varies. So, we're working on that, and I have two undergraduates who are interested in this sort of thing. We're reading through some of the old papers by Fritz Rohrlich on this issue from the 1960s, and [we are] trying to understand what issues still remain. The field due to an accelerated charge is known. So, any issue about whether it's radiating or not must be a subtlety of the definition of radiation. You have to look closely at the situation and see, what does radiation mean in this situation? Does it really matter whether the charge is there for all time, which means it's traveling, ultimately, arbitrarily close to the speed of light at the beginning and the end? What does that complication do? You find yourself rapidly getting into some complicated questions, which are still going on after many years.
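[Editor's note: the point about the Proca field can be sketched as follows. Its field equation with an external current \(j^\nu\) is
\[ \partial_\mu F^{\mu\nu} + m^2 A^\nu = j^\nu, \]
and taking the divergence, using \(\partial_\nu\partial_\mu F^{\mu\nu} = 0\), gives \(m^2\,\partial_\nu A^\nu = \partial_\nu j^\nu\). So a current whose charge is turned on and off (\(\partial_\nu j^\nu \neq 0\)) simply forces \(\partial_\nu A^\nu \neq 0\) rather than producing an inconsistency, whereas in the Maxwell case m = 0 the same step forces exact charge conservation.]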
So, I really have three projects with students right now. The one with the acceleration radiation, the one with the soft wall, and another one on WKB approximations for the ordinary non-relativistic Schrödinger equation in the presence of a soft wall, where the WKB approximation for the propagator as opposed to the mode functions involves studying the classical paths of the corresponding classical mechanical system, and constructing an approximation to the wave function of the propagator from the action, and so on, associated with those paths. Which is in some sense the most elementary of the three problems, but it's also proving in some ways to be the most complicated, because you have to find all the classical paths, and evaluate integrals over functions that are parametrized by those paths. And it's turning out to be more complicated than I thought it would. It's dragging on.
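[Editor's note: the propagator approximation being described is of the semiclassical (Van Vleck) type; schematically, in one dimension,
\[ K(x_b, t; x_a, 0) \approx \sum_{\text{classical paths}} \left(\frac{1}{2\pi i\hbar}\left|\frac{\partial^2 S}{\partial x_a\,\partial x_b}\right|\right)^{1/2} e^{\,i S(x_b, x_a, t)/\hbar \;-\; i\pi\mu/2}, \]
where S is the classical action along each path and \(\mu\) counts its focal points; constructing it requires finding all classical paths from \(x_a\) to \(x_b\) in time t, which is the complication mentioned.]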
I also stay in contact with the group at the University of Oklahoma, led by Kim Milton, working on vacuum energy. But at the moment, I'm not closely involved in what they're doing. I'm just sitting in, because I happen to find my time taken up quite thoroughly by these other three projects with students. So, I sit in on what that group is doing, which nowadays includes people from all over the world, not just Oklahoma, because Kim has former students in various places, and he is also working with Gerry Kennedy, who's somebody who worked with me in past years. He was a student of Stuart Dowker in Manchester in the decade right after I was in London. He came to Texas A&M as a postdoc in the math department in, I guess, the '80s. Then, he went off into an entirely new career in actuarial science, because that was where he was able to get a job, teaching financial mathematics. He's in the process of retiring from that, and he wants to get back into physics research. So, he visited me one summer, and then he got in contact with these Oklahoma people and has really gone into that with both feet. He's much more in tune with what they're doing now than I am. I'm more of an observer, but he's a full participant. So, that's four research projects, and that's more than enough to keep me busy.
Well, Steve, now that we've worked right up to the present, for the last part of our talk, I'd like to ask a broadly retrospective question about your career, and then we'll end looking to the future. So, of all the things that you've worked on, all of your collaborations, all of the research questions, what stands out in your memory as being the most intellectually or scientifically satisfying?
Well, I would say that the one that is most intellectually satisfying is not necessarily the one that's most important.
That reminds me of your comment about the divergence between you and Wightman on what was important in your thesis research.
Yeah, I don't want to over-stress those differences. But when you do the kind of work that I do, which is rather distant from experiment, then you always have to worry, or defend yourself against charges that what you're doing has no relevance to real science. I try to keep things sane, let's say. Not necessarily directly related to experiment, but somehow grounded in reality. So, what are the things -- you know, I'm not sure I want to answer that without first putting some thought into it. There's 50 years there.
That's right.
Yeah, so I think I'd like to pass on that for now.
Okay. It suggests, though, that there's a lot. There's a lot to consider.
Right. There's a lot of different things.
Well, Steve, we can end by looking to the future. Of all the things that you've worked on, what are some of the most open questions that remain for which you're optimistic that you'll be around to contribute to resolving them?
Well, I hope that this question of acceleration radiation will yield. And then the issue of what it means to renormalize a quantum field in some kind of a background, either time dependent or space dependent, is something where the understanding, I think, has drastically improved over the years. Maybe I'm not quite ready to claim complete victory, but I think we understand it a lot better than before. Then, there are the remaining issues in Casimir energy, which I mentioned before -- not something that I'm directly involved in, but I'm a close observer of it. There, we need -- that involves a better understanding of materials in condensed matter physics than I claim to have. But I think we should see some progress there in future years. I guess I'll stop there.
Okay. Steve, it's been a great pleasure spending this time with you. I'm so glad we were able to do this, so thank you so much.
Thank you.