Julia Menzel. Courtesy of Menzel.
Earlier this year, Julia Menzel completed her PhD at MIT’s Program in History, Anthropology, and Science, Technology, and Society. Previously, she received a BS in physics from Yale University and an MPhil in history and philosophy of science from the University of Cambridge as a Gates Cambridge Scholar. She is currently the Kenneth O. May Postdoctoral Fellow at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. Menzel’s dissertation covers a wide range of subjects, from the epistemology of theoretical physics to science policy. Here we present her replies to a few questions we posed to her about it via email.
Will Thomas: First, congratulations on finishing your dissertation. I’d like to start in the middle of it with Ken Wilson and the renormalization group. Can you discuss what the renormalization group was and why it was significant for physics?
Julia Menzel: Thanks, Will. The renormalization group is the historical name for a procedure that allows physicists to solve problems involving many coupled degrees of freedom at many different scales. It’s used, for example, to characterize how the strength of particle interactions scales with energy, as well as to characterize phase transitions in statistical physics. Kenneth Wilson’s work on the theory of the renormalization group grew out of his research on strong interactions in the early 1960s, which he began as a doctoral student at Caltech under Murray Gell-Mann. In the early 1970s, Wilson published a series of influential papers that showed how renormalization group techniques could be used to solve for the critical behavior of a ferromagnet, building directly on prior research by Leo Kadanoff, Michael Fisher, and many others. Not long after, particle theorists used his techniques to prove the “asymptotic freedom” of quantum chromodynamics, a key event in the history of the Standard Model. Wilson received the 1982 Nobel Prize in Physics for his work on the renormalization group.
Interestingly, Wilson’s work on the renormalization group was strongly informed by his lifelong efforts to use computers as an aid to scientific research. He would often remark that he devised the procedure while trying to imagine how to program a computer to solve physics problems. In the 1980s, Wilson began to refer to the renormalization group explicitly as an “algorithm,” and he devoted significant effort to the development of numerical renormalization group methods.
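To give a concrete flavor of what a numerical renormalization group calculation can look like, here is a minimal sketch in Python of a standard textbook example (the exact real-space decimation of the one-dimensional Ising chain); it illustrates the general technique and is not code drawn from Wilson’s own work:

```python
# Illustrative sketch of a real-space renormalization group step (a textbook
# example, not Wilson's own code). "Decimating" every other spin in the
# one-dimensional Ising chain gives an exact recursion for the dimensionless
# coupling K = J / (k_B * T):
#
#     K_new = 0.5 * ln(cosh(2 * K))
#
# Iterating the map drives K toward the K = 0 fixed point, the RG statement
# that the 1D chain has no finite-temperature phase transition.

import math

def decimate(K):
    """Perform one decimation step: sum out every other spin exactly."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0  # start from a fairly strong coupling (low temperature)
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)
```

Each pass sums out half of the spins and rescales the coupling between those that remain, which is the basic “coarse-graining” move that renormalization group methods generalize to far harder problems.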
Thomas: Wilson goes on to be a major advocate for building supercomputing infrastructure in the US, and, I believe, coined the influential policy idea of a scientific “grand challenge” in this context. How did this grow out of the renormalization group, and what did he hope to accomplish?
Menzel: Between the late 1960s and the mid-1970s, particularly under the Nixon administration, the US federal government made significant cuts to funding for basic physics (as opposed to applied military projects) that dramatically transformed the landscape for academic researchers at American universities. Depending on how you calculate it, government funding for physics in the United States was slashed by something like 50% in the span of less than a decade. These funding cuts had a particularly damaging effect on experimental programs at national laboratories and universities, notably leading to the closure of multiple accelerator facilities and other experimental centers. In 1981, Kenneth Wilson described this rather darkly as “the liquidation of the American basic research establishment,” which in his view threatened to severely damage not only American science but also the future strength of the US economy.
In the early 1980s, Wilson organized a response to this situation that turned upon recent advances in computing technologies, particularly new “supercomputers” that made use of “massively parallel” architectures. Wilson’s own research increasingly involved work with simulations (e.g., Monte Carlo renormalization group methods and lattice gauge theory), and, like many other physicists at the time, he became deeply frustrated by the limited memory and processing power of existing computers. Searching for the roots of this problem, Wilson began to tour research facilities at Fortune 500 companies with large scientific computing programs (specifically in the oil, pharmaceutical, chemical, aerospace, and automobile industries) to interview physicists working in industry. At Exxon, Schlumberger, General Electric, and other corporate R&D headquarters, Wilson learned that industrial scientists were struggling with many of the same computing challenges as academic physicists, and, moreover, that these challenges were intimately related to the declining strength of American industry relative to Japanese, German, and other international competitors. This wasn’t just a problem with hardware. To remain competitive in an increasingly global market, Wilson learned, industry needed not only bigger and better computers for scientific simulations, but also new scientific methods for solving problems involving many coupled degrees of freedom: precisely the class of problem that renormalization group methods were devised to solve.
Wilson was disturbed by this state of affairs, but he also saw an opportunity. If the federal government had proven reluctant to pay for basic research, then maybe industry could be persuaded to foot the bill, particularly if that research contributed to the development of new computer technologies and simulation techniques. After winning the Nobel Prize in 1982, which gave him a prominent public platform, Wilson began campaigning to persuade Fortune 500 companies and the pro-business Reagan administration to invest heavily in new scientific “supercomputing” facilities. Teaming up with like-minded scientists such as Larry Smarr and Steven Orszag, Wilson helped to broker a deal between industry executives, policymakers, computer manufacturers, and scientists that led to the creation of national supercomputing centers at Cornell, the University of Illinois Urbana-Champaign, Princeton, and UC San Diego. At the time, these centers were quite unusual in the way they pooled together funding from the National Science Foundation, corporate donations, and university monies, as well as in the ways they actively facilitated interactions between academic scientists and “industrial partners.” They mark a new chapter in the history of the relationship between scientists, the state, and corporations in the United States.
At its most ambitious, Wilson’s campaign for supercomputing can be seen as an attempt to redefine what counts as a scientific experiment in a “post-industrial” society. He wrote and spoke boldly about the future of scientific research, imagining a world in which all experiments might be done on a supercomputer rather than in a lab. As you might guess, not all physicists agreed with Wilson’s perspective or program. Philip Anderson, for example, quipped that there were “lies, damned lies, and supercomputing,” contesting the idea that simulations could ever serve as an adequate substitute for traditional experimental work. Other physicists objected to Wilson’s attempts to join the interests of research scientists and big business, whose influence they saw as undesirable and potentially compromising.
Thomas: At the same time, we see the emergence of new theory-oriented research centers, a development you refer to as part of an “Aspen consensus.” What is this consensus and what is going on here?
Menzel: During the 1970s, an influential group of physicists associated with the Aspen Center for Physics began to argue that theoretical research in their field had outgrown the scientific institutions built to support it. Pointing to developments in renormalization group theory, string theory and supergravity, black hole physics, and numerous other examples, these scientists argued that the most exciting and forward-looking developments in physics increasingly took an “interdisciplinary” or “hybrid” form and thus fit awkwardly into conventional academic specializations. Traditional scientific institutions like university departments and government agencies, they argued, had failed to adapt to the changing character of theoretical knowledge. According to this perspective, these organizations had begun to actively hinder the growth of cutting-edge theory by continuing to uphold outdated divisions between various subfields of physics as well as between physics and other scientific disciplines (e.g., in hiring decisions, funding schemes, and graduate training).
Between the late 1970s and the early 1990s, these physicists led a loose movement to create a new kind of scientific institution better fitted (as they saw it) to the needs of the “new” theory. The Aspen Center for Physics was the explicit prototype for many of these organizations, which were envisioned as “interdisciplinary” enclaves with minimal programming or structure, placed in beautiful natural settings at a remove from university “bureaucracy” and “red tape,” and funded through a combination of government support and philanthropic and corporate donations. Many of their founders, directors, and staff were Aspen Center trustees, and there was quite a lot of overlap among the institutions’ leadership.
Fruits of this movement include the Institute for Theoretical Physics at UC Santa Barbara (now Kavli), the Mathematical Sciences Research Institute in Berkeley (now Simons Laufer), the Santa Fe Institute for the Sciences of Complexity, Caltech’s program in Computation and Neural Systems, the Isaac Newton Institute for Mathematical Sciences in Cambridge (UK), and the arXiv pre-print server, among others. There was considerable traffic among these organizations, as well as between them and older theoretical centers like the Institute for Advanced Study and institutes in the UK and Europe. This movement helped to create an institutional ecosystem and intellectual infrastructure for a particular set of topics and a particular style of theoretical research, which continues to be influential today. The work of the Simons Foundation can be seen as the direct continuation of this project.
Thomas: You make a broader argument that these developments were part of a turn away from “matter” as a key organizing concept in physics. Can you explain this argument?
Menzel: For much of the twentieth century, and certainly during the immediate postwar period (1945-1970s), physics research in the United States was predominantly focused on the study of “matter.” As a scientific problem, “matter” pointed not only to a set of research objects—quarks, solids, plasmas, and so forth—but also to a historically specific relationship between theory and experiment, a particular way of organizing scientific institutions, and a specific set of scientific goals and philosophical debates (e.g., disputes about which kinds of matter should be considered “fundamental”).
In the late twentieth century, “matter” dissolved as the conceptual center of the field, and this is closely related to the political and institutional developments gestured to above. Many physicists still study matter, of course, but in many corners this concern has given way to less tangible or phenomenological forms of research: “formal” theory, work with simulations, studies of models and techniques independent of their specific realizations, etc. One theorist at MIT has memorably described this in terms from computer science: physics used to be about “hardware,” but now physicists care more about “software.” This change in temper is of course something that’s widely discussed and debated, which is itself very interesting.
Thomas: Thanks very much, and best wishes for continuing your work in Toronto!
William Thomas
American Institute of Physics
wthomas@aip.org
You can sign up to receive the Weekly Edition and other AIP newsletters by email here.