Stephen Wolfram

Credit: Wolfram Research

Interviewed by: David Zierler
Interview dates: March 18 and April 17, 2021
Location: Video conference
Usage Information and Disclaimer
Disclaimer text

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.

Preferred citation

In footnotes or endnotes please cite AIP interviews like this:

Interview of Stephen Wolfram by David Zierler on March 18 and April 17, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/46902

For multiple citations, "AIP" is the preferred abbreviation for the location.

Abstract

Interview with Stephen Wolfram, Founder and CEO of Wolfram Research. He describes his recent efforts to launch an “assault” on the final theory of physics, and he muses on the possibility that the human mind is a quantum mechanical system. Wolfram recounts his family’s German-Jewish heritage and his upbringing in Oxford, where his mother was an academic. He describes his schooling, which put him on a trajectory to skip grades, begin college at age fifteen, and complete his PhD at age twenty. Wolfram discusses his early interests in particle theory and computer systems, and he describes his summer research visit to Argonne Lab and his visit with David Gross at Princeton. He explains the circumstances that led to his admission to Caltech to work on QCD and his decision to accept a faculty appointment at Caltech thereafter. Wolfram narrates the origins of the SMP program and the intellectual property issues he experienced as a Caltech professor. He explains his intellectual migration away from physics toward the work that would become Mathematica and Wolfram Language, and he describes his time at the Institute for Advanced Study. Wolfram discusses the business model he adopted for Mathematica and the educational motivations that were incorporated into the program from its inception. He discusses his interests in complex systems research and his fascination with cellular automata, and he narrates the intellectual process that led to his book A New Kind of Science. Wolfram surveys the reviews, positive and negative, that he has received for this work, and he offers a retrospective look at how NKS has held up as it approaches its twentieth anniversary. He describes the launch of Wolfram Alpha and the promises and limits of quantum computing, and he explains why he has returned to physics in recent years. At the end of the interview, Wolfram asserts that he has never taken risks in any of his decisions, and he considers how his approach and the intellectual and business ventures he has pursued will continue to yield solutions for many of the ongoing and seemingly intractable problems in physics.

Transcript

Zierler:

OK, this is David Zierler, Oral Historian for the American Institute of Physics. It is March 18, 2021. I'm delighted to be here with Dr. Stephen Wolfram. Stephen, it's great to see you. Thank you for joining me.

Wolfram:

Pleased to be here.

Zierler:

To start, would you please tell me your title and institutional affiliation?

Wolfram:

That's an interesting question. Let's see, my title. Probably Founder and CEO. That's a good start. Wolfram Research is probably my institution. I think I'm still Adjunct Professor of various things. And a few other kinds of affiliations. But mostly, when people ask for that on a form, it's CEO of Wolfram Research.

Zierler:

As adjunct professor, in what ways are you affiliated in academia? Do you have students? Do you teach classes currently?

Wolfram:

Again, that's a slightly complicated question. So for the last 18 years, we've had a summer school that we do, needless to say, every summer, that usually brings about 60 or 70 terrific students from around the world. So I have, every year, about three weeks of extreme professor-ing. And then, some sort of tentacles of that last at other times. In the last year, we've had this big project to explore fundamental physics, a Wolfram Physics project. And for that, we have a collection of what we call research affiliates and junior research affiliates, who are various graduate students, undergraduates, other kinds of things. And so, have students? Complicated issue. Students who work with people who work with me? Definitely. I'm sure if you look me up on academic genealogy sites, you won't find students who I've had because I haven't done it through academic institutions.

Zierler:

It's a bit of a parlor game what science you've been interested in at any point in your career. Just as a snapshot of where we are now in March 2021, what science is most compelling to you? What are you interested in these days?

Wolfram:

Well, for about 30 years, I had been thinking about a possible assault on kind of the final theory of physics. And about two years ago, I actually launched into a serious effort to do that. And to my great surprise, it's worked fantastically well. And what's been most amazing to me is that the formalism we've developed for exploring fundamental physics turns out not just to apply to fundamental physics but also to give one a bunch of insights into distributed computing, metamathematics, perhaps even economics, some other areas. So the thing that's really pretty interesting to me at this point is that this sort of approach to fundamental physics that comes from computation gives us insights not only about fundamental physics, where it seems to be really telling us about kind of what the machine code of the universe is like, but also we're able to apply these same ideas to a lot of other kinds of areas and solve a bunch of problems that have been around, mostly, for about 100 years.

But the thing that's really remarkable about what's happened, particularly in physics, with this project, is that as I say, what we've got is sort of a formalism that represents the machine code underneath what we know about physics today. And what's happening is that a lot of areas of mathematical physics that people have been developing for a long time have not really had very good foundations. They've been fascinating mathematical theories, but they haven't been well-grounded in something that one can see might relate to actual physical reality. This model and formalism that we have seems to provide that kind of grounding, and it both seems to be informing a bunch of these different areas of mathematical physics, and these areas of mathematical physics are, in turn, informing the kinds of things in our models. And so, for example, just this morning, actually, we were doing a session with a bunch of our research affiliates and junior research affiliates, talking about progress that people had been making. And it's pretty spectacular.

So both in the direction of general relativity and the direction of quantum mechanics, the thing that's happened is, our model is becoming kind of the best way to work out things that you might want to know about general relativity by going sort of underneath Einstein's equations and coming up, so to speak, rather than what people normally have to do in, for example, numerical computing studies of relativity for working out black hole mergers and things like this, which is to go down from these sort of formal mathematical Einstein equations, and then try and break them up to the point where they can be fed to a computer. What we're doing instead is coming from underneath, so to speak, from a fundamentally computational model and then finding out that yes, we can get the same or better results than people who are coming sort of down from the Einstein equations.

So that's a pretty neat thing. If you want to know if your model is right, first, you can make various mathematical proofs of things, but by the time you can actually use the model to make sort of practical predictions that align with things that people have measured otherwise, that's a pretty neat thing. And that's happened, now, both for general relativity and for quantum mechanics. Not yet quantum field theory. Only quantum mechanics so far. So in quantum mechanics, a thing from this morning is that we now have a faster way to do simplification of quantum circuits than people have had using existing quantum information theory, by essentially making a translation from existing quantum information theory to our models, computing things in our models, and then going back again.

So in terms of the history of science, it's a rather interesting way to validate new models to say, "Look, we can just compile things into our models, actually run them in our models, and see that we get the same kinds of results that people had gotten by doing things in much more elaborate ways, sort of from mathematically derived kinds of models." But the thing that you're asking, what am I excited about? The fact that we seem to have a formalism that really seems to tell us, fundamentally, how physics works, and that physics is fundamentally computational, and that that formalism also applies to metamathematics, for example: just as we can understand the physical universe of atoms of space and their connectivity, so we can understand the mathematical universe of the possible theorems of mathematics and their connections. And that's a thing that didn't, in the beginning, look like it was anything to do with physics.

And yet, it turns out that it aligns in our formalism very much with physics, and seeing this alignment allows one to kind of use ideas that have been developed in 20th century physics, for example, about gravitation theory and so on, and apply them to areas like metamathematics, distributed computing, and so on. And I'd been interested in kind of foundational questions in science for a long time, and one of the core principles that has come out of a lot of investigation that I've done of the computational universe, the possible models, and things is this thing called the Principle of Computational Equivalence. And I had been interested in things like the generalization of the idea of intelligence in the context of computation, and I was realizing that that was just sort of a story of computational sophistication. And one of the things that people have asked me about for years, and years, and years, and I've always said, "I have nothing to say about this," is, "So what does all this say about consciousness?" And I've always said, "I have nothing to say about that. It's an ill-defined concept. There's nothing useful to discuss."

But I realized a couple of months ago that, actually, in our models, there is something useful to discuss. And it's actually a really interesting story, in which what one realizes is that the concept of a coherent kind of thread of consciousness that we think we have is a thing that essentially drives the way that we perceive the universe and ultimately drives the laws that we attribute to being the way that the universe works. So, in some sense, if our consciousness worked differently, we wouldn't have general relativity, we wouldn't have Einstein's equations. We would have something bizarrely different. And so, the fact that there's this kind of connection between these ideas of very formal physics and ideas that, turns out, philosophers have talked about for centuries about how consciousness works is something that–actually just the last few days, I've been writing a piece about what we can learn about consciousness from our models of physics.

And one of the things that I find interesting is, I've written all this stuff down, I've started showing it to people, and people say, "Well, that's interesting. That's similar to what Immanuel Kant had to say, and Hegel had to say, and so on." But they didn't know any of this physics. So they guessed a bunch of things, particularly about the nature of space and time, that we can now actually nail down. And so, it's both impressive, as far as I understand it, how far they got, and also really interesting that the things that they thought about haven't really had a place in the existing development of physics, but now do have a place. And it kind of brings me back to my early life because when I was a young kid, one of the things that I said, because my mother was a philosophy professor, was, "If there's one thing I'll never do when I'm grown up, it's philosophy. Because who wants a field in which people have been arguing about the same questions for 2,000 years?" But I think this piece I've just written is probably the first time I've written something which could be viewed as–well, it's a strange kind of philosophical piece because its subject matter is philosophy. Its methodology is ultimately computation and physics. So in terms of today, March 18, that's what I'm excited about.

Zierler:

Have you looked specifically into the question of whether the mind is a quantum mechanical system?

Wolfram:

If you're asking the question, does it matter that there's quantum mechanics to the operation of brains, I think that the answer is probably not much. The thing that comes about in this kind of model of physics is, quantum mechanics is a story of what we call multiway systems. That is, in classical physics, we have the idea that, in a sense, definite things always happen. You throw a ball, it goes in a definite trajectory. In quantum mechanics, in our models, what happens is, all possible things happen. There's this multiway graph in which the possible edges in this graph correspond to essentially possible threads of history. And these threads of history can branch, they can merge again, and there's a lot of structure that's associated with that kind of branching and merging of threads of history. But one of the things that one has to start thinking about when one tries to understand quantum mechanics in our models is, as an observer of this kind of branching, multiway system that represents our universe, we are also part of that universe. We are embedded in that universe.

So just as the universe is branching, and merging, and so on, so, too, our brains are branching and merging. So one has the slightly brain-twisting question of, "How does a branching brain perceive a branching universe?" And so, a big part of the story of what happens with kind of the things I've figured out about consciousness has to do with the fact that a branching brain can have the idea that a branching universe is all consistent. It turns out to be consistent for the branching brain to, in a sense, believe that the branching universe is actually doing one consistent thing. And it's that sort of interplay between the actual underlying branching and the successful belief that it's doing a definite thing that kind of drives what ultimately ends up being measurement in quantum mechanics and so on.
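
A minimal sketch of the kind of branching-and-merging multiway system described above, assuming a toy string-rewriting rule rather than the hypergraph-rewriting rules of the actual Wolfram models; the rules and the starting string here are purely illustrative:

    # A toy multiway system: apply a substitution rule at every possible
    # position, so "threads of history" branch, and merge whenever two
    # different paths reach the same state.
    from collections import defaultdict

    RULES = [("A", "AB"), ("B", "A")]  # hypothetical rewrite rules

    def successors(state):
        """All states reachable by one application of one rule at one position."""
        out = set()
        for lhs, rhs in RULES:
            pos = state.find(lhs)
            while pos != -1:
                out.add(state[:pos] + rhs + state[pos + len(lhs):])
                pos = state.find(lhs, pos + 1)
        return out

    def multiway_graph(initial, steps):
        """Evolve breadth-first, recording branch edges and merging repeated states."""
        edges = defaultdict(set)
        frontier, seen = {initial}, {initial}
        for _ in range(steps):
            nxt = set()
            for state in frontier:
                for new_state in successors(state):
                    edges[state].add(new_state)      # a branch of history
                    if new_state not in seen:        # merging: reuse already-seen states
                        seen.add(new_state)
                        nxt.add(new_state)
            frontier = nxt
        return edges

    for state, targets in sorted(multiway_graph("A", 4).items()):
        print(state, "->", sorted(targets))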

One of the things that comes out of our models that I think is, right now, probably the single most elegant conclusion is, there's been sort of a mystery for a long time about the relationship between general relativity, theory of gravity, and quantum mechanics. And what comes out in our models is, it turns out that they're actually the same theory. One of them operates in physical space, the other operates in what we call branchial space. In our models, there's actually sort of a bigger space that combines those two things, which becomes sort of evident when you look at black holes and things like that.

But the thing that's really quite remarkable is that the Einstein equations in physical space, which describe the deflection of geodesics, the deflection of shortest paths, as a result of the presence of mass, those exact equations turn out to be the same as the equations that produce the Feynman path integral in quantum mechanics, except that, now, the deflection of geodesics is not in physical space, but in this thing we call branchial space, which is, essentially, a space that maps out possible quantum entanglements. So to me, that's kind of a remarkable conclusion of this question of, "How do general relativity and quantum mechanics fit together?" The answer is, they're actually the same theory. And I didn't see that one coming at all.

Zierler:

Does the sameness of the theory suggest that, at least linguistically, the idea that we're attempting to integrate gravity into the Standard Model is not the way to think about that?

Wolfram:

Yeah, it's not a particularly useful way to think about it. The fact that what we now seem to have is a sort of core, in a sense, machine code from which there are various high level descriptions that can be given. And the standard model is one potential description. We're not there yet. We don't know how to map what we've done exactly into the standard model. We have a seriously bright glimmer of how local gauge theories work in our models. But we haven't got an electron, we haven't got a photon. We know what those things must be in our models. They must be kind of topological obstructions, little pieces of, in a sense, knotted-ness in this whole giant hypergraph that we think represents the structure of space. But actually figuring out how those topological obstructions work is something we have not yet done. It's kind of on the roadmap hopefully for the next year or so. I have been interested in these kinds of questions for a long time.

And so, some of the precursors of this work, I've been asking mathematicians about how things that they know might contribute to this for a long time. And they're particularly interested in some results about graph theory. And I think a fairly typical response that I got from a distinguished graph theorist a number of years ago to my questions was, "That's a very good question. Come back in 100 years, and we may know more about the answer." But one of the things that is kind of a hallmark of the physics way is, let's just bash forward and see what we can do, even though the mathematicians say it might take 100 years to really know what's going on. And that's part of the story of what we're doing.

Zierler:

To go back to an earlier term you used, fundamental physics, I wonder if you can address perhaps the misapprehension or not that that phrase connotes a hierarchy of importance within physics. As in, "This is fundamental physics. It's the really important physics."

Wolfram:

Well, that's not my implication when I say fundamental. One could ask, "What should be the right term for the thing that lies beneath whatever one knows in physics?" To me, it seems like fundamental is a pretty good term for that. I think a more evocative term, given what we now know about kind of how this seems to work, is the machine code of the universe. But that's a term very much of our times. That's not something that anybody would have understood 100 years ago, when people were actually very much on the track of the kinds of things that we're doing today. A sort of key idea is the idea that space is discrete, which was an idea that many people talked about a little bit over 100 years ago. And when quantum mechanics really got established, it was kind of assumed that space would be discrete. But it then just technically couldn't be made to work. Einstein–people have sent me this quote so many times–basically said, "In the end, it will turn out that space is discrete. But we do not yet have the tools to understand how to make that work." That's what he said in 1916. So I think, finally, we do have these tools, and he was right.

Zierler:

To what extent is your current work possibly of relevance to some of the major cosmological mysteries of our time, such as dark matter and dark energy?

Wolfram:

Presumably. We don't know exactly how that works. Actually, one of my next projects has to do with trying to explore the various experimental implications of our models, and many of those are cosmological implications. One of the types of implications of our models is that space doesn't end up necessarily being precisely three-dimensional. So there end up being dimension fluctuations in space, and that, I think, is one of the more likely phenomena to be able to see: dimension fluctuations, possibly in the early universe. And the question is, what does a photon do when it propagates through 3.01-dimensional space? We don't really know. But that's something we have to figure out. In terms of the specifics of the expansion rate of the universe and dark matter kinds of things, there are various suggestions from our models about how things like that could work.

But it's too early to say right now how that works in detail. What we know is that in our models, it's sort of interesting, we can derive Einstein's equations as the large-scale limit of the structure of spacetime. And in the derivation, the cosmological constant is undetermined. So to know the cosmological constant from the derivations we have is not something we can do. It's kind of like we have to go into more detail. It is a generic fact from our models that you get Einstein's equations, but Einstein's equations with an arbitrary cosmological constant. It's a little bit like if you're studying thermodynamics, you can say, "Well, let me look at water. Let me look at air." They both, at a microscopic scale, are quite different.

But if you look at a macroscopic scale, they both satisfy the same equations for fluid flow, and you don't have to know the microscopic details to know that they satisfy the same equations for fluid flow. But if you want to know their specific viscosities and things like that, then you have to go look more at the microscopic details. And it's something similar with our models that we can say, generically, it follows Einstein's equations. But specifically, what are the parameters? You have to go further in exploring the models to know that. I think one of the things that we're seeing right now, because we have this actual, practical method for solving the structure of spacetime directly from our models, presumably reproducing Einstein's equations, but we're going sort of directly from our models to the structure of spacetime.
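
As a concrete instance of the fluid analogy above (a standard textbook form, not anything specific to these models): water and air both obey the incompressible Navier-Stokes equation

    \partial_t \mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\tfrac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u}

but the kinematic viscosity \nu has to be obtained from the microscopic details of each substance, just as the cosmological constant is left undetermined by the generic derivation described here.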

And one of the big current questions is, what are the places where the model kind of shines through, where you don't get exactly what Einstein's equations say? And it looks like very rapidly rotating black holes are probably one place. There may be some other places. The real question is, are there places that are experimentally, sort of cosmological experimentally accessible of that type? But anyway, we're talking a lot about the future. I'm happy to talk about the future. I'm also happy to talk about the past.

Zierler:

Well, let me ask about the recent past. And this is, of course, a question we're all dealing with right now. Particularly with your expertise in computational physics, to what extent have the pandemic, remote work, and social distancing negatively influenced your work over the past year, if at all?

Wolfram:

I've been a remote CEO for 30 years. This has had absolutely no effect. The only consequence is that, finally, after all these years, other people live kind of the same way that I've been living for a long time. Our company is a very geo-distributed company. So I have been used to dealing with groups that are working physically remotely for decades, actually. 10 or 15 years ago, I decided I wasn't going to travel. In fact, I hadn't traveled much for years. But I decided I was really not going to travel, and I was just going to do video conferences and so on. And for a number of years, that was enough of a novelty that all talks I might give were all video conferences. And then, it was kind of like, "Oh, but can't you come in person?"

And so, for a short while, I did kind of do talks and things in person, and go to conferences, and so on. And just last fall, I was deciding, "I'm going to stop doing that again." Then, the pandemic came. Now, nobody expects it. So that's a good thing. But there are many things to say about the role of science in this pandemic and things that, for me, are interesting as somebody who's kind of studied what models can and cannot tell one. I've been kind of appropriately humbled by the fact that I have, at different times, thought, "I more or less know what's going on," about the immunology or the epidemiology of what's happening, yet I know from lots of work that I've done on the foundations of computation and so on, that these systems can always surprise one. And in this case, they have repeatedly surprised me. And I think it's an interesting moment for how much one believes in certain kinds of science.

I think if we look at sort of the long span of scientific history, we came into this time believing that science can figure everything out. And before Copernicus, people thought, "Whatever we sense with our senses must be what's going on." After Copernicus, it was kind of like, "But there might be some mathematical science that tells us something different, and it might be true, and we might not know from our sort of human intuition correctly what's happening." But we've gotten to the point where people are like, "Science can solve everything." But it turns out, and one of the big conclusions of my longtime work is this phenomenon of computational irreducibility, which is, basically, the phenomenon that says, "Even if you know the precise underlying computational rules for the operation of a system, it may not be the case that you can readily predict what that system will do in any way that's more efficient than effectively just running each step in the computation that it has to do to sort of behave as it would behave for itself."
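
A minimal sketch of the sort of system where this shows up, assuming the standard elementary cellular automaton Rule 30 purely as an illustration: the update rule is a one-line lookup, yet there is no known shortcut for predicting, say, the center column of cells other than actually running the steps:

    # Elementary cellular automaton (Rule 30): a simple rule whose behavior
    # you effectively have to compute step by step to know.
    def step(cells, rule=30):
        """One synchronous update of a row of 0/1 cells, with fixed 0 boundaries."""
        padded = [0] + cells + [0]
        return [(rule >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
                for i in range(1, len(padded) - 1)]

    width, steps = 63, 30
    cells = [0] * width
    cells[width // 2] = 1              # a single black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)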

And so, that's sort of an interesting phenomenon because, in a sense, it's kind of science revealing its lack of strength from itself from inside, so to speak. It's not a question of saying, "Oh, I don't believe in science from the outside." This is a phenomenon that comes from science itself explaining that science is not as all powerful as you might've thought. The model of science where you just say, "If I have the right model, then I can work out what will happen," isn't really right. And that phenomenon of computational irreducibility will have many consequences. We can see some of them already in science. But it has many consequences for things like AI, ethics, a bunch of things about how we think about things like computational contracts, and kind of the future of AI, and an autonomously connected way of governing the world, and so on. These are places where this phenomenon of computational irreducibility has all kinds of consequences. And I find it interesting that that phenomenon is kind of one of the obvious features of thinking about the world in a computational way.

And you might have been able to say, "Well, it's nice to think about the world in a computational way, but the world isn't really computational." And just as you might have been able to say, before Copernicus, "It's nice to think about the world in a mathematical way, but the world isn't really mathematical, in some sense." Well, by the time you can see the kinds of things we've figured out about fundamental physics, you realize, actually, it looks like it's kind of computational all the way down. And so, you really have to pay attention to these phenomena that are kind of phenomena of computation, so to speak.

And so, I'm interested in the history of ideas and so on. And so, for me, it's sort of interesting to see this as it kind of emerges. I think one of the things that is probably a feature of what I've spent my life doing is sort of building towers of ideas and technology that kind of interleave with each other. And at this point, I've built a fairly tall tower. And from that tower, one can see all kinds of things. And some of those things have consequences that are fairly near-term, and that are fairly easy to explain, and so on.

And some of them, the consequences, I probably don't understand very well yet, myself. But they're probably 50, 100, or more years out. And it's both interesting to watch the development of ideas that have those kinds of time scales, and also, from a pure human point of view, it's somehow frustrating. Because it's like, "Look, this is the direction it's going to go in. I know this is the direction it's going to go in." Sometimes I can sort of test myself by saying, "That was the direction I thought it was going to go in 40 years ago, and by golly, yes, that's absolutely the direction it's gone in." But you can't accelerate that progress of time. And in fact, that's even the lesson of this phenomenon of computational irreducibility. In a sense, particularly in our models of physics, the thing that happens in the inexorable progress of time is the actual execution of this computationally irreducible computation.

So this phenomenon that one has to grapple with as a person sort of in the middle of trying to create ideas, that there's a certain sort of irreducible computational effort that's involved in kind of having those ideas sort of achieve all the potential they can achieve and sort of all the diffusion in the world that they can. But that's interesting to see happen, and I've also been quite interested in the history of science, perhaps having been involved in science for 45 years now, it makes me more interested in the history of it, and in seeing kind of the patterns that one can understand from the past, and realizing, "Gosh, we're in exactly the same pattern today." And the story of such a pattern is, it takes 100 years.

Zierler:

Well, let's engage in some history now. Let's go back before London. Tell me about your parents and where they're from.

Wolfram:

Well, let's see. My father Hugo was born in Bochum in Germany in 1925. And actually, I just recently found his father's PhD thesis from the University of Bern in 1909. It was a thesis in veterinary science. And my father's father became kind of a high-end country vet. I don't know how country it really was. But that's the vision. It's always said that he was more connected to animals than people and would kind of tip his hat to the horses as he walked down the street. But my father's mother actually came from England. She'd been a concert pianist. She died in childbirth. And so, she was out of the picture. But then, in the early 1930s, my father kind of ended up going to school in England, and then by probably 1936-ish, that whole family had moved to England.

And so, that was his origin. My mother was also born in Germany, actually, in Berlin in 1931. Her mother came from Innsbruck in Austria from a kind of rather hardy, mountaineering kind of family. But my mother's mother, Kate Friedlander, went into medicine and did her medical degree in Berlin. And then, somehow, started getting involved in the world of psychoanalysis with the Freuds and so on. And she moved to England by 1933 or '34 and ended up starting some whole story of child psychoanalysis and so on. She wrote a book that still seems to be in print, I think, called The Psychoanalytical Treatment of Juvenile Delinquency, which I can't say that I've read, but I have certainly looked at. And my mother's father, I know little about other than that he was a medical researcher in Berlin. He also moved to England. I don't really know those circumstances. My mother's stepfather had been, I think, involved in infectious diseases in Germany and had gotten to a fairly advanced point in running some infectious diseases hospital, and then also moved to England. And sort of re-qualified and became a radiologist.

And I, actually, just recently found his passport. He was still going back and forth to Germany in 1938. So kind of an interesting circumstance. But that's a little bit on the story of my parents. When World War II started, my father was just finishing high school. And he really wanted to join the RAF and become a Spitfire pilot. Probably, if that had happened, I wouldn't be here to tell the tale. But the RAF turned him down on the grounds that he had too many German connections. And so, he spent most of the war working in an aircraft parts factory. And my mother was younger during the war and, I think, probably her only knowledge of things natural came from the fact that she spent some of the war years sort of outside of London on some farm. So my father never went to college. He did a bunch of kind of correspondence-type courses. After the war, he ended up joining a company that was, at the time, in a very kind of leading-edge industry, which was textile importing and exporting. It was a time when international trade was energizing, and textiles were a big thing. Rolling the clock another 50 years on, it wasn't such a high-tech, leading-edge business. But when he joined it, it was.

And over the years, the person whose company he joined eventually sold the company to him, and then he ran a textile import/export and manufacturing company for many years. His hobby was writing novels, and he published a few of them. And that was his story, more or less. My mother went to college at Oxford and did their PPE course. And then, when she graduated from there, she wanted to do a doctorate. And I guess there weren't really doctorates in philosophy, even though in Oxford, a PhD is called a DPhil, which stands for Doctor of Philosophy. I don't think they had people doing doctorates specifically in philosophy. So her doctorate was in anthropology, and she studied, in a surprisingly formal way, all kinds of questions about kinship, and incest, and so on. And eventually, later in her life, that turned into a book about those kinds of things.

Then, she went to work at Oxford University as a philosophy tutor, tutor in the British English, Oxford sense. I guess these days, they call such people professors because it's too confusing not to. And so, she was what would probably be called a philosophy professor for most of her life in Oxford. She mostly worked in kind of analytical philosophy and philosophical logic, although not the kind of formal, mathematical side of philosophical logic, but more the kind of, "How do you relate things we might say in language to things that might have a more precise meaning at a logical level?" And she wrote a book about philosophical logic that I know is still in print because when I browse philosophy sections in college bookstores, I often see it. And I had mentioned this comment, that when I was young, I had the point of view that if there was one thing I would never do when I was grown up, it was philosophy.

And that's kind of a feature of hearing about philosophy and philosophical argumentation from my kind of earliest days, so to speak. And it's like, "Why are you arguing about this point? There's a definite answer to that question. You don't need to argue about it for 2,000 years. I don't know what the answer is, but there's going to be a definite answer." And it's sort of ironic to me that a lot of what I've done in my technological life has to do with building computational language, which is, in some sense, trying to build a sort of computational way to represent meaning of the kind that we humans think there is in the world, so to speak. And in fact, the activity that I'm engaged in had some precursors in earlier times, particularly in the 1600s, and ironically enough, what would become computational languages were, in the 1600s, called philosophical languages. But not much work got done on them between the 1600s and now. So it turns out that despite my early kind of statement that if there's one thing I'd never do when I'm grown up, it's anything with philosophy, that's turning out to be not quite correct.

A few years ago, I was reminded of a very charming thing that happened probably when I was maybe 5 years old or something. I was at some party that my mother had taken me to with a bunch of philosophers. Mostly, these ancient, gray-haired people who were probably much younger than I am now. As probably would be repeated by me at various kinds of events like that, there was some older philosopher who thought, "Oh, that kid in the corner is going to be an interesting person to talk to." So I spent a long time talking to this older, no doubt distinguished, though I don't know who it was, philosopher. And we'd have a long discussion about many things. And as he's walking away, he's kind of muttering to himself, "One day, that child will be a philosopher. But it may take a while."

Zierler:

Prophetic.

Wolfram:

Yes, quite.

Zierler:

Did your parents consider themselves refugees from Nazi Germany?

Wolfram:

That was not a sort of thing when I was growing up. Because I was born in 1959, which is after sort of the features of the war had kind of blown over, so to speak. So I think my parents just saw themselves as being in England, doing English things, so to speak.

Zierler:

Your parents were from Jewish backgrounds, correct?

Wolfram:

Yes, yes. And from time to time, we would visit somebody from, typically, the psychoanalytic circle. And to me, it would be, "Well, they talk in this strange accent, and they're old, and I don't really know particularly anything about what's going on." And I think it's interesting because in a sense, the cultural background, I would not have in any way identified with, just because years had passed, people had assimilated, so to speak. I probably don't even know all of the history.

Zierler:

Is your sense they came from assimilated backgrounds in Germany, that they were not particularly Jewishly connected?

Wolfram:

I don't know, to be honest. Did they go to a synagogue? I don't think so. On either side. I really don't think so. Yeah, it's an interesting question, actually. I don't know. I think that when my parents' parents came to England, my impression is that the circles they'd existed in on both sides were not particularly the refugee circles. And certainly on my mother's side, they were not kind of Jewish circles. They were the psychoanalytical circle, which had a large component of probably Jewish background people, but that was the circle. Not the sort of culturally or religiously Jewish circle.

Zierler:

Were you Jewishly connected at all? Did your family belong to a synagogue? Were you bar mitzvahed? Any of those things?

Wolfram:

No, and it is really embarrassing how little I know about that culture. Occasionally, it comes up. I occasionally will go to a bar mitzvah of some child of a friend or something, and I'll be sitting there, reading the book that's in the pew and realizing, "I've never seen any of this stuff before."

Zierler:

So your sense is that it's not as if your parents left some religious affiliation behind them in Germany. They didn't have it there, and they didn't bring it to England.

Wolfram:

No, they didn't have it there. I think on my father's side, perhaps my father's father's grandfather was some kind of rabbi in Aachen. And, I think, even of some distinction. I think he was a person involved in translating a bunch of Jewish texts into German for the first time. And I think that part of the family was also connected to the Rothschild family. But I think none of the money went in their direction. So there'd been sort of a distant history, but by the time of my father's father, I don't think there was a significant religious connection. As far as the Nazis were concerned, yeah, the whole family was Jewish. But as far as actual practices, I don't think so. And I think my father's father had a variety of siblings who were in a variety of professional activities. I know almost nothing about them.

Zierler:

As you mentioned the heritage of your family, it sounds like there's a much stronger tradition of life sciences in your family and not so much in the physical sciences.

Wolfram:

I guess. I don't think philosophy's quite a life science.

Zierler:

Veterinarians, things like that.

Wolfram:

Well, yes. From the grandparents, it's a concert pianist who died young, a veterinarian, on the biological side, I suppose, a medical researcher, and then a psycho person, so to speak, which I suppose is life sciences. I don't know if psychology would identify itself as a life science. But yes, a mixed bag. My parents did not view themselves as knowing about science kinds of things. I think perhaps one of the reasons that I got involved in science was I wanted to do something different from what my parents had done. It's kind of interesting and ironic, I think some of my own children would perhaps say the same thing. Yet, it is charming how close the apple actually falls to the tree, even though it may be interpreted in a different way. And I got interested in science very early.

In fact, I think I probably taught myself to read from illustrated science books that had captions. So I think I sort of was interested in science from a really early stage. And it was, in retrospect, kind of charming that the only thing my mother, who didn't really know much about science at all, would ask was, "How can you know that such a thing is true?" And, "What would be possible evidence for such a claim?" Those kinds of things. Which probably was, actually, a better piece of early education perhaps than knowing something about some extra scientific fact.

Zierler:

Well, let's get to 1959 when you hit the scene. What neighborhood were your parents living in in London at the time?

Wolfram:

That's a good question. I was born in Hammersmith. But I know that only from my birth certificate. My parents were kind of splitting their time between London and Oxford. I think my mother, right then, was working at the London School of Economics. But I think very soon thereafter, they ended up with a place in London, a place in Oxford, and sort of going back and forth at different times. And then, they bought a small house in the country that ended up being a very good investment. And most of the time I was growing up, they would spend weekends there. And my father, for most of the time I was growing up, did what to me seemed like a completely crazy thing, commuting into London from Oxford every day. And he would explain that he enjoyed the peace of being in his own kind of tin can on the motorway, so to speak, sort of separate from everybody else for that period of time every day. They had an apartment in London, and I could not tell you where it was.

Zierler:

Do you have siblings?

Wolfram:

I have a brother who is ten and a half years younger than me.

Zierler:

Is your sense that your mother specifically did not want to leave the workforce when you were born?

Wolfram:

Well, she went on working. I think she must've finished her PhD by the time I arrived. But I'm not completely certain. That would be easy to look up, but I haven't done so. The one thing that I do have from my mother that is kind of interesting is, from her sort of ancestry of the psychoanalysis mother and so on who studied the ways of children, my mother decided to write notes about me and my development from age zero to around age 2, when I think she gave up. But I have these notes, and I've been meaning to do something with them.

They have the sequence of words that I learned and all of these things, which as I think about training artificial intelligence systems and seeing how they start and where they get to, it's kind of charming to have one's own sequence of which words one learned in which order. I think the only notable thing I could tell from those notes was that, as some children do, but it's not so common, I learned the word yes before learning the word no. That may be more of a projection of parents than a projection of the personality of the child. I'm not sure. So no, my mother stayed doing her thing quite independent of my arrival, so to speak.

Zierler:

Is your sense that she was advanced in her ambitions to stay in the workforce, despite being a mother? Was she privileged with her opportunities in being able to do that?

Wolfram:

She was an academic, and there were plenty of academic women who were single and in kind of the more monastic model of academia. But my impression from her contemporaries in academia was that there were plenty who were married with children and so on. But for me, the, "Do women work, and do they do sophisticated things?" thing has always been taken for granted, so to speak, because that was the background that I happened to come from. And it's an interesting question. I went to kindergarten in Oxford with a lot of children of academics. How many of their mothers were in the workforce in one way or another? I actually think plenty were not in the workforce, as I think about it. But there were some who were academics and other things. I guess my mother came from at least one more generation of working mothers, so to speak, in the sense that her mother had been a rather intensely working kind of psychoanalysis person when she was growing up. So probably in retrospect, I think it's probably true that if I looked among the kind of people who lived in Oxford at that time, my mother was by no means remarkable for being a working mother, but it was probably not the most common case, so to speak.

Zierler:

Was there a nanny or someone else who assisted in your childcare?

Wolfram:

Well, my mother would pay her various philosophy students to look after me from time to time. [laugh] And I believe I made something of an impression on some of them. I first went to kindergarten when I was 3 years old. So by that point, I'm kind of out and about some part of the time. So yeah, I guess there must've been lots of philosophers around me. I don't know. Maybe that had some terrible consequence for my later life.

Zierler:

What was your earliest memory?

Wolfram:

Well, I remember some scene of the apartment that my parents lived in in London, which they moved from when I was about 2 and a half years old. And I did at one point check that it was a correct visual memory of it. By the time I was in kindergarten, I have fairly detailed memories. I can't necessarily place the year, but 3, 5, that kind of age. And of lots of different things. I suppose a few of them, I can date from when certain things happened in the world. I think I was a somewhat not completely middle-of-the-road kindergarten student, because I think I was quite opinionated. I remember there was a teacher, who was a very serious kind of Christian–I wouldn't necessarily say fundamentalist, but going in that kind of direction.

And so, when I would show up at age 4 or so with the latest knowledge about the life of the dinosaurs, and the origins of the universe, and so on, it was a terribly shocking thing. And I think that teacher had this, translated into American, sort of elastic band theory of children's minds. That if you stretched them too far, they would break. But I didn't really follow this theory. And I do remember one thing. How old was I? I was young enough that it was easy to take a little coat and flip it over my head, so I must've been quite small. Probably 4 or so. You asked for early memories. I think there was a, "OK, children. Now, all run around the room like buses," which for whatever reason, I wasn't into doing. And I just stood still. And it's like, "What are you doing?" "Well," I said, "I'm a lamppost." So I don't know what you take from that. One of the things that was a little odd was that when I learned to read, I must have been a rather whole-word reader. And I also had pretty good visual memory.

And so, one of the strange things, another early memory I guess, was the situation of looking at a book, which has a few words on a page, and then you're sitting around in some circle, and you're reading this book. And I would look at the page, and then I would look at the teacher, and I would say what was on the page. And I did not realize that other children were not doing this, that they were looking at the page to read the page. And that took me a long time afterwards to realize how strange that must've been. It wasn't like it was a hugely sophisticated memory task because these were large-print books where there were only a few words on the page, so to speak. But I think that led to, again, teacher confusion about me.

Perhaps my favorite teacher story from kindergarten was, back in those days, the British money system was pounds, shillings, pence, which was this strange mixed radix arithmetic with 12 pennies in a shilling, 20 shillings in a pound. And so, as a 6-year-old or whatever, the exercise is, can you do arithmetic with pounds, shillings, pence? And it's a big mess. And so, I was very proud of myself for realizing it didn't need to be like this. It could just be decimal. And so, I said this to this teacher. And it's like, "It doesn't need to be like this." And this teacher said, "Well, it's been like that for hundreds and hundreds of years, and it will never change." Well, I'm guessing maybe that happened in about 1966 because just, like, a year later was this big announcement. British currency is going to be decimalized. So that was one of my early, "Don't necessarily believe what you're told," type stories.
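
The "mixed radix arithmetic" being described amounts to carrying in base 12 and then base 20; a minimal sketch, with the example amounts made up purely for illustration:

    # Pre-decimal British money: 12 pence (d) = 1 shilling (s), 20 shillings = 1 pound.
    def to_pence(pounds, shillings, pence):
        return (pounds * 20 + shillings) * 12 + pence

    def from_pence(total):
        shillings, pence = divmod(total, 12)
        pounds, shillings = divmod(shillings, 20)
        return pounds, shillings, pence

    def add(a, b):
        """Add two (pounds, shillings, pence) amounts with the proper carries."""
        return from_pence(to_pence(*a) + to_pence(*b))

    print(add((1, 17, 9), (0, 5, 6)))   # -> (2, 3, 3), i.e. 2 pounds 3s 3d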

This is the problem with talking to somebody like me who probably has a decent memory is that there's lots of stuff. Two other amusing kindergarten stories, one of which I can date very precisely. So one was, I'm walking to school. In those days, you could do things like that, just walking a few blocks, age 6 or whatever it was. And I noticed that the little patches of light under the trees looked really strange. So I look up, realize a piece is bitten out of the sun. So I get into school, say to all these kids, "Look, there's a piece bitten out of the sun. It's an eclipse." So all these 6-year-olds start looking up at the sun. And teacher's like, "Oh my God, you can't do that." And it was also a great denial of the fact that, "Look, there's an eclipse. You can see it. You just look, and there's this eclipse." And so, that was, again, an early lesson in even the very obvious, "When you tell people about it, they don't necessarily see it." Right around that time was also a case of how delegation works or doesn't. In those days, they would let young children have hammers and nails, and do woodwork, and things like this.

And so, there was this thing of making some object, and I was doing it with some young girl, who said something like, "Don't worry, I know what I'm doing. Just hold that nail, and let me hammer it in." Well, didn't work that way. And I ended up with a very black and blue thumb. But that girl actually became a mathematician, Frances Kirwan. She's now Dame Frances Kirwan. But my early memory of her was the, "Don't trust the person who says, 'Don't worry, I know what I'm doing.'"

Zierler:

What was your earliest understanding that the adults in your life, either your parents or teachers, recognized that perhaps there was something more to your precociousness that might suggest significant intellectual ability?

Wolfram:

I never thought of myself as precocious. I was perhaps lucky enough to go to a kindergarten with a bunch of children of academics and other such things. It was a very bright crowd. I sometimes have thought I've never been in a similarly bright crowd. Maybe at my company, we have a similarly bright crowd. But I sometimes thought I was in a brighter crowd there than I was in for many years after that.

Zierler:

But when were you put on an educational trajectory that would lead to you checking off all of the degrees at such young ages?

Wolfram:

Well, I found an early school report of mine from when I went to elementary school when I was 7, and I posted this somewhere recently, I was very charmed by it. The sort of summary of the performance of the year or whatever. I think it was something like, "He is full of spirit and determination. He should go far." Which I thought was pretty good, actually. I don't know what I was like at 7 years old, but that's not a bad description of me at much later ages. And so, I give considerable credit to the teacher who wrote that. But I think in the British school system, there are all these kinds of class ranks. You're first in this subject, second in that subject, bottom of the class in that subject, and so on. And so, I have all those old school magazines and so on. So I have all my class ranks from all the times when I was 7 years old through whatever. And I typically did quite well in some subjects, not so well in others.

I didn't do terribly well in math, for example. I did well in things like English, and writing, and stuff like that. For me, yes, I was often sort of in the top. I was always in the top stream of the stack of different classes of different students, and I was usually kind of the top or top-ish student. I bounced around among the top five or so. But it was just like, "OK, I do what I do, and there's nothing particularly special about it," so to speak. I developed sort of interests outside of school, first in space, and then in physics. The first time I grudgingly admitted I was not doing badly in the stack of kids was when I was 12. In England, it's sort of the transition from elementary school to high school. And so, there are these scholarships and scholarship exams.

And so, I went to Eton, which sort of was viewed as being one of the top two high school places to go. And so, I got a scholarship there, and that was the top sort of academic educational achievement for the 12-year-old, so to speak, that you can kind of get in England at that time. Although, everything was ranked in those days. So at Eton, I was scholar number 7 in 1972 or so. Although, it turned out, the top six were all people who were already at the school, but were young enough to retake their scholarship exam. And I was still young enough the next year, and so I said, "Can I retake it?" I don't really know why because it didn't make much of a difference. And they said, "Well, no, you really shouldn't because we're going to change the rules this year and down-rank anybody who's already at the school."

But I have to say, at the time, I was perfectly well-aware of the fact that the scholarship crowd at Eton was a top crowd of kids and an interesting crowd of kids and people to know. But again, I wasn't particularly into the, "I'm a spectacular performer in the academic world," thing. First, at Eton, I was like, "Oh, it's really important to do well in all these classes." And so, my big achievement was, my first term there, to come top in the school in the whole ranking type thing. And then, I was like, "This does nothing for me. I am not interested in doing this anymore. I will work on things that I think are interesting, and I'll do OK in these classes. But the other people who might come top are vastly more excited than I am about coming top. So it's not for me to do." So I think I did come top again some other time, but not on purpose.

Zierler:

Let me reframe the question. To fast-forward to getting a PhD at age 20, by the fastest track on the normal track of things, you have to compress at least five years somewhere in your education. In other words, if the earliest that one normally gets a PhD is 25, 26 years old, something like that, somewhere along the line from being a 5-year-old to a 20-year-old, you have to either skip or compress something like five years' worth of education. Where and how did that happen for you?

Wolfram:

I learned that you could learn things by reading books. That's basically all it comes down to. I was first interested in the space program, then by about 1971, when I was about 11, I got interested in physics. And I just started reading books. And I got a bunch of college textbooks, and I read them. And at first, I didn't understand lots of things in them. I suppose the thing that was a driver for me was, there were all these kinds of questions, where it's like, "Can I understand the answer to this question?" I didn't really know whether it was a question that had been answered or not. But it was like, "Let me go figure out from these books whether I can answer this question."

And then, later on, I guess by the time I was probably 13 or 14, maybe even earlier, I started going to university libraries and looking at the physics journals. I suppose for me, I can kind of chart the things I learned because I liked to kind of summarize what I learned by actually writing sort of expositions of things. And so, when I was 12, I wrote this thing that exists now somewhere on the web called something like Concise Directory of Physics, which is kind of a collection of physics facts and so on. A little bit Wolfram Alpha style, but that's just the way lives work. And then, by the time I was going on to 13, I spent the summer writing a sort of treatise called Physics of Subatomic Particles, which I also found recently. And I was a little bit shocked as I was trying to retrace some of the things about the history of particle physics.

Actually, one of the more amusing things was looking up some of that history on the web and finding that that thing I wrote when I was 12 or 13 actually showed up as one of the top hits for the description of that piece of history of particle physics. And it wasn't bad, actually. I was impressed with myself long after the fact, so to speak. So part of the way I learned stuff, I suppose, was writing expositions of it for myself. I never showed those to people at the time. And the first place anybody will ever have seen it was when it got posted on the web a decade ago or so. But I guess it was reading books, trying to write down what I understood, and trying to solve problems that were not in the books, but were things that I wondered about and thought, "Can I figure this out? What do I need to know to know the answer to that?"

I did go to some math classes when I was in high school, although they became sort of one-on-one tutorials after a short while. And then, I didn't go to physics classes. I'm actually still in contact with my former kind of physics teacher from that time. And I kind of peeled off, sort of doing my own thing. I actually made a bunch of physics educational videotapes, which sadly, at least as far as I'm concerned, are no longer extant. But that was kind of my effort to learn these things. So by the time I was 14 or so, I probably had a decent knowledge of college-level physics. There was a particular thing that happened in particle physics in 1974. That was sort of a big surprise discovery in particle physics.

Zierler:

The November Revolution, J-psi.

Wolfram:

That's correct, yes. And actually, even before that, the rise of the e+ e- total cross section that had been observed somewhat flakily in Italy, and then less flakily at SLAC in California.

Zierler:

And you were aware of this because you read the science page in the newspaper? Or were you reading Phys Rev Letters at this point already?

Wolfram:

I was reading some Phys Rev Letters, but I was also reading things like New Scientist. Which, for better or worse, I recently realized I had been a subscriber to continuously for 53 years. Even though much of what it says is often false, it is still quite entertaining to read. At that point, I was looking at physics journals. I forget, must've been 1974 probably, I kind of wrote my first not very good paper about e+ e- annihilation, electron substructure, and so on. And it's kind of ironic that so many years later, the model for physics that we have has electrons with substructure. Although, we don't yet know what it is. The only thing was, back in the days when I was writing that paper, it was talking about the possibility that electrons were, like, 10 to the -18 meters across. Now, in our current models, the more likely possibility is that they're 10 to the -81 meters across.

So wrong by a great many orders of magnitude. But one part of the idea is the same. But yes, my first paper, I don't know. It had some interesting points. I wouldn't rate it particularly highly today. But it was something that I just worked on myself. I suppose by 1975, when I was 15, I was fairly regularly going to the physics seminars in Oxford. So the school that I was at was a boarding school near Slough, and Windsor, and so on. But probably a less than one-hour train ride to Oxford. And so, I would kind of fairly regularly do that and go to these physics seminars. And I think one of the perhaps sad things was that I could sometimes be somewhat brash, because I would ask questions, and when it was clear that the person didn't really know what they were talking about, I could be quite, let's say, forceful. And so, I think it's one of those things, if you become known as sort of a brash kid in a field that is international, it's hard to lose that reputation for a long, long time. [laugh]

Zierler:

Given your lack of patience with philosophy, even as a child, if you were sort of preordained to pursue interests in science generally, have you ever reflected on the question, "Why physics?" Why did you choose physics to be the area to which you would devote your energies?

Wolfram:

What happened was this. I was first interested in the space program. 1960s, spacecraft, that was the exciting stuff going on. That was the vision of what the future would be all about, so to speak.

Zierler:

And this is an American-centric interest? You're looking at the Kennedy Administration? Or is there something happening in the UK that's also capturing your imagination?

Wolfram:

No, no, no, this is America and Russia. I remember the first spacecraft that I was sort of personally taken with was a Russian Venus lander. Must've been '67 or so. I knew every deep-space spacecraft and could tell you, probably, at the time, every detail of Mariner IV, and how it had gotten to Mars, and so on. And I would write these little booklets, which I still have, about all the sort of nerdy details of these spacecraft. And when the American space program was sort of in the manned spaceflight phase, I would stay up late in the UK to watch the Neil Armstrong foot on the moon type thing. And from that stuff, I got interested in kind of, "Well, there'll be Mars colonies before we know it. I want to kind of design how that all works." And so, I started making these little designs for spacecraft and things like this, trying to figure out things about, "If you have a lower gravity thing on Mars, and you have this car-like object, how does it work?" And all those kinds of things. And then, that got me interested in, "OK, how do you figure out stuff like that?" And then, I realized, "Well, you have to understand physics." And so, that got me to start learning physics. And then, I got really interested in physics as such.

And actually, the first area of physics that I was probably interested in from a research point of view was statistical mechanics. And I could kind of date my interest in that pretty definitely to probably June of 1972. Because I had gotten a series of books, the Berkeley Course series of five books. And I think it's book number four. It's about statistical mechanics, and it has this picture on its cover that is kind of a simulated movie strip of gas molecules bouncing around in a box. And I thought that was a really interesting phenomenon, and so I tried to understand the second law of thermodynamics and tried to understand it from the book. I was very frustrated because the book basically didn't explain it and just sort of said a lot of things people often say about the second law, which is almost paradoxical.

And in fact, my first sort of serious physics-oriented computer effort was in 1973, when I tried to simulate those particles bouncing around in a box that I'd seen on the cover of that book. Actually, ironically enough, just this very morning, a student in our physics project was showing off a project that I'd suggested to him about looking at hard sphere gases in a box, and looking at the causal graphs, and kind of using the formalism of our physics project to understand new things about hard sphere gases. So I guess I can't escape those things from June of 1972 until now. And actually, many, many years later, I found out from a person called Berni Alder, who was a big hard sphere gas person, the person who had, in fact, made that simulation on the cover of that book, that the simulation was a fake. Was this a terrible discovery for me?
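
A minimal sketch of the kind of simulation being described, hard disks bouncing elastically in a two-dimensional box; everything here (particle count, radii, time step, and so on) is an illustrative choice, not the original 1973 program:

    import numpy as np

    # Illustrative parameters: 20 hard disks in a unit box
    rng = np.random.default_rng(0)
    n, radius, box, dt, steps = 20, 0.02, 1.0, 0.001, 5000

    pos = rng.uniform(radius, box - radius, size=(n, 2))
    vel = rng.normal(0.0, 1.0, size=(n, 2))

    for _ in range(steps):
        pos += vel * dt

        # Reflect off the walls of the box
        for axis in range(2):
            hit = (pos[:, axis] < radius) | (pos[:, axis] > box - radius)
            vel[hit, axis] *= -1
            pos[:, axis] = np.clip(pos[:, axis], radius, box - radius)

        # Elastic collisions between pairs of equal-mass disks:
        # exchange the velocity components along the line of centers
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                if 0 < dist < 2 * radius:
                    normal = d / dist
                    approach = np.dot(vel[i] - vel[j], normal)
                    if approach < 0:  # only if the disks are moving together
                        vel[i] -= approach * normal
                        vel[j] += approach * normal

    # Total kinetic energy is conserved by the elastic collision rule
    print("mean kinetic energy:", 0.5 * np.mean(np.sum(vel**2, axis=1)))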

No, I discovered that long, long after I had figured out kind of foundationally how I think the second law works, and it didn't really matter very much what that simulation actually did. So I was interested in statistical mechanics. I got interested in particle physics partly because of sort of the excitement around that–well, no, that's not right. I think I got interested in particle physics just because I was kind of drilling down to say, what's the most fundamental stuff one can study? And that was particle physics. Yeah, because I had already written my two little treatises on particle physics. One about physics of subatomic particles, the other was called something like Introduction to the Weak Interaction. And I'd really written both of those things before the 1974 J-psi thing. And it's kind of charming, recently I was writing an obituary for Tini Veltman, and that caused me to look back at my early introduction to weak interaction because it actually had some history in it which wasn't easy to find.

And it was kind of charming that it was talking about the intermediate vector boson, which of course, now, we call the W particle. But that was, at that time, just a distant theory. Maybe that's how things would work. And mostly, it was, "The weak interaction is a point interaction, and we're just going to study its effects, so to speak." So if you ask how I accelerated learning physics, the fundamental answer was, by reading books, by writing things, which helped me understand stuff. That's been something that I've continued doing all my life, so to speak, to write as a way to understand things. And it probably helped that I had what you might see as being kind of the chutzpah or something to say, "Even though I'm a 13-year-old kid, I can still try to figure out stuff about physics. Nothing says 13-year-old kids can't do that." I didn't really know 13-year-old kids couldn't do that. It wasn't a time when you could go look things up on the web, and search for 13-year-olds doing physics type things, and see what the complete world inventory of that was.

But no, I think as far as I was concerned, I was just doing stuff because I thought it was interesting and learning what I needed to learn. And I suppose I got to the point where I could sort of do the scholarship exams for Oxford when I was 16 or so and do very well on them. But I think, in some ways, I was in the industrial machine cracking nuts type situation. Because it was like, "Do this integral." So by that time, I knew a bunch of techniques that were very industrial-strength for doing things like integrals, very generalized methods. And yes, I could use those generalized methods where I needed to know only a few facts to sort of crack the nut of doing these integrals that people were supposed to do by using standard tricks. And so, in modern times, I'm sure I would've done terribly because I'm sure the methods that I used were completely sort of bizarre. I don't know whether you had to write down the methods, I don't remember. But if I did, they would've been odd.

In England, there are standardized government exams, at least at that time, O levels and A levels. I never did A levels because you could get out of those by getting a scholarship to Oxford or Cambridge. But O levels, I did do. And so, one of the O levels that I did was physics. And so, they're sort of graded one through five or so, and I think I got a one in every subject except English writing, where I got a three. And I was well-aware of the fact that most well-known writers in England had done poorly on their English language O level. So that was fun. But in physics, I do remember the one question that was on that exam that I was completely confused by. The question was, "Name two differences in the effect of an electric field and a magnetic field on the motion of an electron."

So I'm like, "OK. It's charge times electric field, and it's charge times v cross B for the magnetic field. And that's one difference. What on earth are they looking for in two differences?" So I think, "Look, I know another difference. But I know this is not what they're looking for. But I'm going to write it down anyway," which is, the electron has a magnetic dipole moment, but so far as we know, it doesn't have an electric dipole moment. So a magnetic field can precess the spin of an electron, but an electric field cannot. So that was what I wrote down. And I'm sure the intended answer was, "One of them is not proportional to velocity, and the other one is. And one of them is in the same direction as the field, and the other one is at 90 degrees," or something. But I was not a skilled enough exam-taker, so to speak, to realize that kind of parsing. But I perfectly well knew that the answer I gave was a standard kind of particle physics answer not typically found in a 14-year-old, so to speak.
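
The physics being reasoned through here, written out with standard symbols (not the exam's wording), is the pair of force laws plus the spin-precession point:

    \mathbf{F}_E = q\,\mathbf{E}, \qquad
    \mathbf{F}_B = q\,\mathbf{v}\times\mathbf{B}, \qquad
    \boldsymbol{\tau} = \boldsymbol{\mu}\times\mathbf{B}, \quad
    \boldsymbol{\mu} \approx -\frac{e}{m_e}\,\mathbf{S},

where the electric force is independent of velocity and along the field, the magnetic force is proportional to velocity and perpendicular to it, and the torque on the electron's magnetic moment precesses its spin, with no analogous electric-field term since, so far as is known, the electron has no electric dipole moment.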

So there were sort of odd experiences like that. And when I went to Oxford as an undergraduate in 1976, I guess I was 16 or 17, and it was nice because at that time--I don't know if it's still true--it wasn't required that you go to the lectures about these different subjects. All that was required was that you show up at the end of the year and take some exams. And so, I did try going to the lectures for a day or two, and I was like, "No. This is not the picture." Although, I did find some graduate-level lectures, which I went to and had fun with. And in fact, one of the sets of lectures I went to was by Roger Penrose, and as I mentioned, just this morning, I was at some virtual event celebrating his Nobel Prize, and a few months ago, when he won his prize, I'd found the lecture notes that I took going to his class back in 1977. They were good lectures, actually. At least based on the notes, they looked very clear.

So I found some of those things in Oxford, and I was interacting with an experimental particle physics group there, and through that, was able to use their computer system access. And so, I was able to access the ARPANET, predecessor of the internet, and was able to access all kinds of computers to run all kinds of programs. And that was important to my development because, in a sense, one of the things I should've explained is that I learned to use computers when I was 13 years old or so. And I didn't really understand why physicists weren't always using them. And particularly, it was clear you could do algebraic computations with computers, and there were a bunch of kind of research-grade systems for doing that, and I used all of them. And I could produce these kinds of bizarre, alien, magic formulas that seemed like, "Where did this come from?" "Well, I worked it out on a computer." And I have to say, at the time, I did not internalize this as much as I did later. It seemed like an obvious thing to do, and it just wasn't clear to me why everybody wasn't doing this.

Later, I kind of realized that the babysitting necessary with these kinds of early computer systems to actually get to the answer required actual knowledge of computers that probably wasn't widely had. And there were other minor issues. Like, in those days, the typical physicist didn't know how to type. And I was very glad because I was always a very fast typist. And I was kind of sad, a number of years ago, when everybody learned to type. And I no longer had that advantage, so to speak.

So for me, part of my early ability, perhaps, to do increasingly interesting physics was the fact that I happened to understand computers as tools and was able to use them to do all kinds of physics things. And in terms of the historical sequence, I left high school when I was 16, and I had a job for about six months or so at the Rutherford Lab in England in the theoretical particle physics group there. And I wrote a few papers while I was there and used the very fine big computers that they had and the little kind of pen plotter desktop, overgrown calculator type things that they had, and I made good use of that in a bunch of physics research that I did at that time.

Zierler:

What were some of the projects going on at Rutherford at that point?

Wolfram:

I know what I worked on. I worked on stuff to do with weak interactions. I have to look up all the papers I wrote at that time. I was interested in neutral currents, which weren't yet known to be a thing, and the effect of them on things like weak decay processes. And so, I studied things like pi-zero to neutrino/antineutrino, which I think has finally been observed. And back in those days, I was just trying to work out whether it should happen. Then, I also looked at particle production in e+ e- annihilation. And then, there was this thing called the A1 meson, which was a slightly mysterious creature. And what that turned into later on was the question of whether decay processes like tau goes to A1 plus neutrino would be suppressed relative to decay processes like tau goes to rho plus neutrino. And that was kind of a question of the kind of relativistic quark structure of those mesons.

And so, that was a question of kind of how relativistic the quarks were and how we should think about bound states of relativistic particles. But I think at that time, people were doing a bunch of particle phenomenology. One project that I did with other people had to do with particle phenomenology. I do remember at Rutherford, this time when it was like, "OK, all you particle physicists. We're not sure what the future of particle physics is going to be. We're going to do this course about laser physics. Maybe you should take that." So I very enthusiastically went and took that course about laser physics, and I've used the facts I learned from that many times since, actually. Although, I think Rutherford went on doing particle physics, as it turned out.

Zierler:

Did you have a mentor at Rutherford?

Wolfram:

Not really. Roger Phillips was the name of the group leader, and then there was a chap called Dick Roberts, who was a bit more on the computational phenomenology side. And then, there was a person called Dennis Sivers, who was actually a visitor from Argonne National Lab, who I got to know. And then, the next summer, I worked at Argonne National Lab and wrote a paper with him and somebody else. And then, there were a couple of younger people, who were still very old by my standards. There was David Scott, who I think later went into materials science, if I'm not mistaken. Another person called Roger Kingsley, who I think went into scientific administration. A person called Chan Hong-Mo, who I think is still in the particle physics business.

Zierler:

The question is, who would you bounce ideas off of as a member of a community? Or did you never seek that kind of feedback?

Wolfram:

Oh, I talked to a bunch of people there. I talked to all the people I just mentioned.

Zierler:

You didn't see yourself as a student of theirs.

Wolfram:

No. No, I wasn't. When I went to Oxford, the person who sort of primarily recruited me there was Chris Llewellyn Smith. And I had sort of been wondering, "Should I go to Oxford or Cambridge?" And I went to visit Cambridge, and interacted mostly there with John Polkinghorne and Peter Landshoff, and met people like Mike Green and so on. But I guess Chris Llewellyn Smith seemed to be, at the time, more in the young and modern branch of particle physics.

Zierler:

What was he working on?

Wolfram:

He was working on QCD by that point. Things related to QCD, I think. Whereas the people in Cambridge were still pretty much doing S matrix theory. And then, I think Chris and I wrote one paper together, a mathematically interesting kind of paper. It was about moments of structure functions and the positivity of moments of structure functions. So if you're working out the momentum distribution of quarks, which in those days were still largely called partons, inside a proton or something, you could measure that momentum distribution by things like deep inelastic electron scattering and so on. It was known that these structure functions would run as a result of asymptotic freedom, and so they'd gradually change as you'd change the momentum transfer in the deep inelastic scattering. And the question was, what constraints were there on the possible forms of those structure functions induced by various positivity constraints on moments? And I forget where this came from. I think I had been playing around with--I actually don't know. I'm sure I have the documents still.
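
One simple example of the kind of positivity constraint on moments being described, purely illustrative rather than the specific result of that paper: since a structure function F(x, Q^2) is non-negative on 0 <= x <= 1, the Cauchy-Schwarz inequality applied to its moments

    M_n(Q^2) \;=\; \int_0^1 x^{\,n-2}\, F(x, Q^2)\, dx

gives

    M_n(Q^2)^2 \;\le\; M_{n-1}(Q^2)\, M_{n+1}(Q^2),

so whatever the detailed Q^2 evolution does, the moments have to form a log-convex sequence in n.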

But that came out of some computer calculation that I did about deep inelastic scattering that I might've done as a result of interacting with this experimental group. The main person I interacted with in the experimental group was Tom Quirk, who had been at Fermilab and other places. I think in later years, he became a venture capitalist in Australia, actually, and left academia after having seemed like a lifer in academia. But in fact, he went off to do something different. I'm kind of skipping around a little bit, but when I was at the Rutherford Lab, those were the people I interacted with. And I knew Chris Llewellyn Smith by that time because I'd already decided I was going to Oxford because this was the time between high school and college that I went to work at Rutherford Lab.

And I was well-aware of the slightly bizarre character of the situation that I was working in this theoretical physics group, writing papers, and then I was going to go off and be an undergrad in Oxford. And that was a little bit odd. Actually, sort of an interesting story fragment, the general advice from the people at Eton and so on was, "Don't go through all this stuff young. It will put you at a terrible social and other disadvantage."

Zierler:

What about your parents? What advice may they have given?

Wolfram:

I was a first and then only child. So they didn't really know that much about children. And it probably wasn't helped by the fact that my mother's mother had studied children. And so, in retrospect, I would say that their reactions to things I was doing were a certain degree of, "OK, that's just what children do." I think that's really all it came down to. "If that's what he wants to do, I guess that's what he's going to do. And I guess that's what children do."

Zierler:

When you talked to Mike Green, did you discuss string theory at all? Was that on your radar?

Wolfram:

This was 1975 or '76.

Zierler:

Which is right at the early stages.

Wolfram:

No, no, he was talking about world sheets and that kind of thing. I subsequently sort of ran into Mike Green again when he was visiting Caltech when I was there. And I don't think I've run across him since, I'm afraid. But no, at the time, it was the late days of the S matrix.

Zierler:

Which you recognized was getting to be outmoded already. That was part of your motivation to go to Oxford.

Wolfram:

Yes, that's true. It looked like the past. Field theory and QCD looked like the future. And then, one thing that happened, another early thing, was that I had been interested in statistical mechanics. And in a sense, one of the core questions of statistical mechanics is, "How does that work in the universe?" Because it seems like the universe goes from this state of sort of hot Big Bang high entropy to this situation where we get all these galaxies forming and all this stuff happening in the universe. And how is that consistent with the second law of thermodynamics? And I was really interested in that. And I'd been interested in that 1973, '74.

And again, to bring it to today, I remember hearing a lecture by Roger Penrose back in probably 1975 or '76, where he was talking about this paradox and his interpretation of it in terms of gravitational entropy, and I just went to this event with Roger Penrose this morning, where he started it by talking about the exact same theory. It's like nothing has changed in 45 years, except I understand this a lot better now. And I don't think it's quite right what he's saying, and I think I know what is right. And that's a change. But back in those days, I was very interested in what would be the thermodynamics of gravitation. And I was interested in, also, how there comes to be lots of matter, at least around us in the universe. Where's the antimatter in the universe?

So the separation between matter and antimatter. I studied a bunch of stuff about matter, antimatter separation. It turns out not to be correct. It turns out that there's more matter than antimatter in the universe. But I remember I read Steve Weinberg's book on gravitation and cosmology. Actually, I remember reading it on the bus going to Rutherford Lab. So that must've been 1976. And so, I then got quite interested in the interface between particle physics and cosmology, which wasn't really a thing yet then. And I recently found, actually, a paper that I wrote that was probably written in 1976 or '77, so when I was 16, 17-ish. It was about the abundances of stable charged particles made in the early universe. And actually, I thought, upon rereading, it was a pretty nice paper.

But what happened with it was, it's quite a long paper because it had a whole introduction to particle physics meets cosmology, so to speak. Because that wasn't a thing back then. So it was kind of a description of how you work out things from the Einstein equations to figure out things that would eventually turn into statistical mechanics of particles in an expanding universe, and so on. And the journal I sent it into said, "Oh, take out all that introductory stuff." So I then wrote a shorter paper that didn't have that introductory stuff, which was kind of a shame because it actually was pretty good, I think. And it probably would've advanced the field a bit if people had read it because it was a decent introduction to how that kind of stuff worked. A certain amount of skullduggery happened around the paper I wrote, and subsequently, it was kind of interesting. I sent it into this European journal, and they were like, "Oh, you're using SI units. You can't use SI units."

And so, I was like, "Wait a minute. You're a European journal. There's this SI thing. You should be using SI units." Just because American physicists at that time were very CGS oriented. And of course, this was a time when most physicists, as today, worked with h-bar = c = 1, so to speak. And putting in units to actually work out things like, "OK, how many of these particles actually exist in a sample of seawater?" was something one didn't do, so to speak. And so, it was kind of an unusual thing. But there was more skullduggery that came later. But that's a different story.

Zierler:

What about asymptotic freedom? When did you become aware of the work of Wilczek, Gross, and separately, Politzer?

Wolfram:

I was not heavily aware of it when I was at the Rutherford Lab in 1976. So I knew lots about it by 1978 because I was writing papers about it. I know where. I must've known about it by the time I was at Argonne in the summer of 1977 because I wrote a paper about gluon, gluon goes to heavy quark, antiquark. It was kind of the first paper with Dennis Sivers and a person called John Babcock. We worked out what is now I guess called gluon fusion production of charm particles. I learned a very interesting thing from that paper, a meta fact about science, which was, in that paper, we did this calculation from QCD, which I did using computer algebra and all these kinds of modern tools, so to speak.

Argonne National Lab had the great feature that they actually trusted physicists to be in the same room as the mainframe computer. It wasn't a question of putting your card deck for somebody else to run. You could actually do it yourself, which was kind of nice. But in any case, we worked out these results about the production cross sections for charm particles, and at that time, the J-psi had been observed, and it was pretty clear it was a charm quark, anti charm quark bound state, but separate charm particles had not yet been observed. And so, we were trying to work out the rate at which they would be produced in proton, proton collisions and had a particular prediction based on QCD. But then, there was an experiment from Fermilab that said actually, the production cross section was at least five times smaller than what we had predicted.

So it's like, "OK, we have a theory. It's based on QCD. QCD is a very nice, clean theory. It seems to work in a bunch of other places, and it predicts this thing. And I don't really see a lot of wiggle room in how this prediction could be wrong." So the second half of the paper, which I suspect I wrote--I'm sure I wrote every word of the paper because that is my way in writing papers--but I think the second half was sort of more speculative, and it was about, "If this calculation is wrong, how come it's wrong?" Because after all, there's this experiment, and sort of the meta theory of science is that if the experiment doesn't agree with your theory, then you should reject your theory, so to speak. But, like, "How could this be wrong?" And so, I had various things to say about higher order corrections and stuff. I haven't looked at this paper in whatever it is now, 40 years or so.

So I can't immediately tell you what all the different kinds of wiggles there were. We published the paper anyway. Turns out, in the end, the experiment was wrong. And it was interesting because it was an emulsion experiment where the idea had been, "Find these little tracks that would be these charm particles." But I learned subsequently, when an experiment says it doesn't find something, then that's always an even more concerning question than if the experiment says, "Well, we found this, but we didn't find that, so to speak." If it found nothing, you kind of have to wonder, "Was it looking in the right place?" And in this particular case, it wasn't. And actually, as a result of that experience, I had the principle in doing particle physics that if I was doing any sort of phenomenological work related to an experiment, I would always physically visit the experiment, and actually look at it, and ask a bunch of questions. And subsequently, a couple of times, I was quite amazed at the extent to which the–it was a very insidious thing.

People were doing these complicated Monte Carlo simulations. Say, "Given the theory, here's what we should see with the experiment." But actually, the big effect was in a place where there weren't any detectors, for example, in the experiment. But the Monte Carlo, which sort of told you everything, said, "Well, if the theory worked this way, then the experiment should see this." The fact that the anomalous piece of what it had seen was something that was very unstable with respect to what could actually be determined from the actual experiment as it was set up was not something you could tell from those kinds of early Monte Carlo simulations. So I kind of learned to be a slightly more skeptical consumer of experimental data after that experience.

Zierler:

You came to Argonne in the summer between your Oxford undergrad years?

Wolfram:

Yeah, it was the summer of 1977, so it was after my first year of college in Oxford, yes.

Zierler:

What was the opportunity? How did you get connected with Argonne?

Wolfram:

I had met Dennis Sivers, who was a visitor at Rutherford Lab, and he was the person who got me to go to Argonne. And then, I don't know, I was tagged as a summer visitor of some kind. And there was a person called Ed Berger, who was the head of the group at that time, another person called Gerry Thomas, who I've interacted with even quite recently, who was involved in that group. So Ed Berger, who I wrote a paper with, was working on particle phenomenology, and we wrote a paper about the angular distribution of muons produced by the Drell-Yan process in proton, proton collisions. And I haven't thought about that in 44 years. There were some people who were visiting Argonne. I think Ephraim Fischbach was one of them, Boris Kayser was another one.

And we started a piece of work together that never finished. And then, I kind of, while I was at Argonne for that summer, went and hung out at Fermilab a bit and gave a talk, I remember, at University of Chicago and Purdue, and met all kinds of different physics characters, who, now that I'm back in the business, I realize have stayed in the business the whole time. I remember, there was a chap called Chris Hill, who was at the University of Chicago at that time. And at Fermilab, there were all kinds of people.

Zierler:

Was this your first time to the States?

Wolfram:

I had visited the US once before that in 1975 maybe. Maybe '76. And I'd visited New York. My parents had friends there. And I took a day trip out to Princeton, and I thought, "I'll go check it out." I thought, "These buildings look awfully familiar." They're just like Oxford. So I end up going to the little information kiosk at the entrance to campus, and I looked young. So they're like, "Are you interested in being a student here?" And I was rather confused by that question. But I said, "Well, maybe I'd be interested in physics graduate school." So they said, "Oh, you should come over to the physics building and meet some physicists." So I was like, "OK." So I did, and one person I met that day was David Gross. That must've been 1976.

By that time, I must've known a bunch about QCD because I remember sitting in David's office and explaining that I'd written some papers about various things in QCD. And I didn't realize that the word paper in American can mean things like a term paper. So for me, the only meaning of the word paper was something you'd published in a journal. So it took a little while for that part of the conversation to clarify. But eventually, in one of those unforgettable moments, David said to me, "Well, by the time you're in graduate school, we'll have QCD all figured out." [laugh] So I've reminded him of that a few times since. I can now date that time.

That was late 1976 because I had been going to some lectures in Oxford given by Michael Atiyah, who was talking about instantons, which were a big thing at that time. And David Gross was working on instantons, which are kind of possible field configurations that might be important for the QCD vacuum. And I think David's idea was that instantons and these things he called merons, which was some kind of partial instanton, would be kind of the story of the QCD vacuum and so on. But it must've been right after that I started thinking about instantons and their relationship to asymptotic freedom, and I kind of realized that it was pretty easy to work out that if there were effects from instantons, they would be like q to the 10th, where q is some momentum parameter. By the time something is increasing that rapidly, it's really hard to see because it goes from almost nothing to being too big for it to be sensible that that's the only thing that's going on in a very short space.
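
To put a rough number on the steepness being described here (just arithmetic on the stated power, not a statement about the detailed instanton calculation): an effect growing as the tenth power of the momentum changes by a factor

    \left(\frac{q_2}{q_1}\right)^{10} = 2^{10} = 1024

when q merely doubles, so it passes from negligible to overwhelming across a very narrow range of q.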

And I remember when I was in Oxford talking to the mathematicians about this fact, because they were all like, "We're working on instantons, and instantons are going to solve QCD," and I was like, "I don't think this can possibly work. Because if you actually follow through what will happen, this is how the physics has to work out." And they're like, "We don't know about the physics." And they did very nice mathematics, which we're in the process of making use of, actually, for our physics project. Even though it hasn't had so much use in practical physics in the intervening years. And I think I met Elliott Lieb also that day, mathematical physicist. Another person I met was a person called Stu Smith, AJS Smith. Or maybe I met him later. Because I was considering, later on, maybe a year or two later, where to go to graduate school, and Princeton was one of the possibilities. And I think Stu Smith was one of the main people in that picture. So the three places I considered going were Princeton, Caltech, and Harvard.

Zierler:

Did you get to the West Coast during that summer? Did you visit Caltech or Stanford?

Wolfram:

No, no, I never visited Caltech. That's why I went to Caltech for graduate school. I'd never visited it.

Zierler:

When you got back to Oxford following that formative summer, did you go back with the feeling that you were not going to finish at Oxford, and that you were going to get pulled into graduate school in the United States?

Wolfram:

Yeah, I think I had talked to people about that when I was at Argonne. I think I probably ran into Geoffrey Fox at Argonne. I ran into all kinds of people in the physics scene at Argonne. And I ran into Frank Wilczek and all sorts of different characters. And I had run into some of these people because I'd been going to the theoretical physics seminars in Oxford for quite a few years by that time.

Zierler:

Graduate seminars, you mean.

Wolfram:

Yes. The main kind of seminars. And another person who I got to know when I was in Oxford was Nathan Isgur, who I was quite good friends with. We wrote a paper together about the production of heavy particles in cosmic rays. And actually, that was a paper that had another meta lesson for me, which was, we were interested in heavy charged particles that might've been produced in the early universe or might be produced in cosmic rays. For the early universe, you didn't need to know the rate of production of heavy particles in specific collisions like proton, proton collisions. But for this calculation from cosmic rays, we did need to know that. And so, we needed to know, what's the rate of producing heavier-than-charm quarks in proton, proton collisions? And I'm like, "Well, I guess there are various ways to estimate it."

And then, I was realizing, "What an idiot. I just wrote a paper specifically about this question about production rate of charged particles, and heavy quarks, and so on in proton, proton collisions." So the main thing there was, there's a certain tendency not to use the tools you've built yourself. And that was my moment for realizing, "No, the first thing you ask is, 'Do you already know the answer to this from something that you've already figured out yourself?'" And I think there was kind of a theme in those years, actually, of heavy particles. Because when I was at Caltech, I wrote a paper with David Politzer about heavy quarks. But that's a different story. But yeah, in Oxford, I got to know Nathan Isgur.

And then, also, a person called Predrag Cvitanović, who, by that time, was transitioning out of particle physics. I just was at a memorial event for Mitchell Feigenbaum, actually, and I was recalling the fact that I first learned about the whole period doubling story the first day I was an undergrad in Oxford. I went over to the theoretical physics building and was just sort of hanging out there, and I ran into this young post-doc named Predrag Cvitanović, and he told me this whole bizarre, whimsical story about fish in the Adriatic Sea and kind of the iterated map of how many fish there were this year, and the subsequent year, and so on.

It was only many, many, many years later that I realized that Predrag Cvitanović was from Croatia, which is right next to the Adriatic Sea. But I didn't meet Mitchell Feigenbaum for a few more years after that. But that was my first introduction to those kinds of things. Basically, I got to the point where I could do somewhat OK, sort of professional-grade physics research by the time I was 16 or so, when I was working at the Rutherford Lab and so on. And then, it gradually got better from there on out, so to speak.
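
The iterated fish-population map in that story is presumably something like the logistic map that Feigenbaum later studied; a minimal sketch (parameter values here are purely illustrative) showing the period-doubling behavior being alluded to:

    # Iterate x_{n+1} = r * x_n * (1 - x_n) and report the long-run behavior:
    # for r ~ 2.8 the population settles to a fixed point, for r ~ 3.2 it
    # oscillates with period 2, for r ~ 3.5 with period 4, and so on.
    def iterate_logistic(r, x0=0.3, warmup=500, keep=8):
        x = x0
        for _ in range(warmup):
            x = r * x * (1 - x)
        values = []
        for _ in range(keep):
            x = r * x * (1 - x)
            values.append(round(x, 4))
        return values

    for r in (2.8, 3.2, 3.5):
        print(r, iterate_logistic(r))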

Zierler:

And your gaze was westward. We talked about crossing the Atlantic. What about continentally? Did you have any interest in CERN, or Saclay, or DESY? Were you thinking at all about these places?

Wolfram:

When I got my PhD in November of 1979, I basically had two job offers. One from CERN and one from Caltech. Actually, another one from Harvard, maybe. But the main ones were CERN and Caltech. So I considered going to work at CERN. I even considered some crazy arrangement of commuting between Los Angeles and Geneva. I was young in those days. That must've been soon thereafter, right at that time, that I got my first portable computer, which was a thing the size of a small suitcase called the Osborne 1. It wasn't something that ran off batteries, but my main discovery was that at the very back of a 747, past the last row, at least on the airlines I was taking, there was an outlet that I guess they used for plugging in a vacuum cleaner or something. And you could plug into that outlet.

And in those days, I think it was a different time. And it was certainly an interesting experience, taking an Osborne 1 computer through LAX security. The first time they'd probably ever seen one of those things. That was a curious experience. But they asked many questions, to their credit. I'd learned French in school for years, but I had never had the nerve to speak a word of it. So that was part of my reason for not considering CERN. I didn't really consider anywhere else. CERN was kind of the big place. DESY was sort of a second-tier operation at that time. It was just not as big as CERN. CERN was the big place.

And I think Jacques Prentki was in charge of the theory group at that time. He was the person who was recruiting me there. And there were a variety of people in that group, like John Ellis, and Mary Gaillard was there. I remember some interactions with her. And a few other people. I think I was pretty well-networked in the particle physics world back in the late 1970s. And there were only a few people I'd never met. I'd never met Steve Weinberg. And the story was that Murray Gell-Mann had been really mean to Steve in some seminar he'd given, and so Steve had vowed never to come back to Caltech. And so, when I was there, there was no sign of him.

Zierler:

What about Shelly Glashow? Did you have any interaction with Shelly?

Wolfram:

I really didn't, and it's a strange thing because I happened to have gone to some annual dinner that the Boston University Physics Department has because I'd been living in the Boston area for the last 18 years. And Larry Sulak started inviting me to these dinners, and it was kind of a fun physics moment. So I'd been to them for years, and Shelly is often at these dinners. And we've almost never talked, which is so strange after all these years. And I noticed that Shelly had taken over editing some magazine called Inference, which is a kind of general science magazine publishing all kinds of interesting things. And it's like, "I had no idea Shelly was interested in all that kind of stuff. There would've been a zillion things for us to talk about over the last 40 years or more." So next time I see him, I know I have more to talk to him about. But no, I've never really had much interaction with him.

Zierler:

So to set the stage, you come back from your summer in Argonne to Oxford with the intention of figuring out how to leave and get into a graduate program in the United States. That's the basic idea?

Wolfram:

Yeah, yeah.

Zierler:

Why not stay at Oxford as a graduate student?

Wolfram:

Because I hadn't finished as an undergraduate.

Zierler:

But wouldn't they have taken you on?

Wolfram:

I don't know. I don't know what the bureaucracy was.

Zierler:

Why would they have taken you on with any more difficulty than Caltech?

Wolfram:

Well, they're an older university. I'm sure they had more bureaucracy. They'd been going since the 1200s. A lot of bureaucracy builds up in that period of time. But for me, the US was sort of the country of the future. And there was certainly a lot more energy in physics in the US. And it was sort of the only serious place to go to do physics at the time. It was interesting, when I finished my PhD in 1979, I considered going back to England and I talked to people about it. And the physicists I knew in England were like, "Why do you want to come back here? There's nothing going on here. Stay in the US. It's where everything is happening." And so, yes, I have been thinking about it in modern times: I was in England at a time when it had ceded its place as the dominant sort of force in the world, so to speak, to the US.

And who knows what the future will bring for the US, but one can hope for the best. I think it's sort of interesting, the character, at that time, of a country that was feeling like, at least on an intellectual level, "We're not at the leading edge anymore." What did that feel like? And so, for me, it was the US or maybe CERN because CERN was sort of a happening place specifically for particle physics. At the time, I had no idea about business and companies, or that I was in the slightest bit going in that direction. And so, the fact that that's sort of an American kind of thing was an irrelevancy. I don't think I was very affected by my environment in those days. I lived in England, I came to the US, I was in California. I remember one of the first couple of days I was at Caltech, I ran into Murray Gell-Mann, who had been sort of involved in recruiting me to come there and so on.

And so, Murray says to me, "It must be a tremendous culture shock coming from England to California." And then, I see him looking me up and down, and I'm wearing this kind of bright yellow shirt and these sandals, looking every bit the Californian. And, as was sometimes his way, he was kind of blushing, like, "Oh, I think I asked the wrong question." But I don't think I was very affected by my environment. I could've been anywhere because I was mostly spending my time doing and thinking about physics.

Zierler:

Well, let's stay in Oxford when you get back. When you make the determination that you want to apply to graduate programs in the United States, you mentioned Princeton, you went to Caltech because you hadn't been there, you hadn't known much about it. Where else did you apply? Where else was compelling for you?

Wolfram:

Harvard. And Harvard said, "We won't take you if you don't have a college degree."

Zierler:

What did Princeton say?

Wolfram:

They said, "We're happy to have you." But Caltech, I think, was partly because I hadn't been there. California seemed like an interesting place. And also, they gave me a better deal. It was some fancy fellowship and all this kind of thing. Whereas Princeton, it was a less fancy fellowship. And the other thing was that Princeton, there was some whole elaborate requirement about having to do all kinds of classes, and exams, and so on. And that wasn't present at Caltech. And so, I was like, "I'm just going to go to Caltech, where on day one, I can start doing the research that I've been doing anyway." And I realize that I probably wrote one of those personal statement type things. And in subsequent years, I've read other people's. And I think, again, it was one of these sort of personal statement things: "What do you want to do in graduate school?" "Well, I'm working on the following five papers right now."

Zierler:

"Let me do my thing."

Wolfram:

"I'll do my thing." And when I got to Caltech, which is 1978, I was pretty productive, I think. There was one period of time when I think I churned out maybe one every two or three weeks.

Zierler:

But you said Gell-Mann was involved in recruiting you. What was the point of contact? How did he know about you?

Wolfram:

I think I was known. I'd been hanging around in the Oxford physics scene, in the Rutherford Lab physics scene, in the Argonne physics scene. I think Murray must've known Chris Llewellyn Smith. He probably knew a bunch of other people. There was a person called Graham Ross, who was at Oxford, who I think may have had some crossover with Caltech. I'm not sure. I think the number of 16-year-olds doing particle physics at the time was limited.

Zierler:

And the idea was that you would become Gell-Mann's student? Or not necessarily?

Wolfram:

No, not particularly. I remember Murray tracked me down in England one day and reached me on the phone, which was actually surprising that it was possible because it was purely coincidental. It wasn't in the days of cell phones, right? And I think Murray was a little bit taken aback that I wasn't more starstruck, so to speak. But I guess by that point, I'd interacted enough with physicists to be like, "They're just people like all the rest of us," so to speak. And maybe I also had the impression, which might've been unfair, that Murray was a kind of physicist from an older generation. Quite unfair. Now that I'm the older generation, I don't like to view it quite that way. But no, when I got to Caltech, there were a limited number of people in the theoretical, high-energy physics group. So I interacted with all of them.

Zierler:

And one name I'm amazed we haven't mentioned yet is Feynman. Where is Feynman in all of this?

Wolfram:

Well, I met him as soon as I showed up at Caltech. I hadn't met him before Caltech.

Zierler:

Was this part of the motivation? Were you excited to meet and get to work with him?

Wolfram:

Honestly, again, he seemed like a physicist from an older generation at that time. As I got to know him, he's a guy who stayed young, so to speak. But I had read his lectures on physics, and it was a textbook. I thought it was a fine textbook, and it had some cool things in it. And I don't think Feynman was really quite as famous back in those days as he became. As kind of a pop physics figure, it was only after the Challenger thing that he really became sort of widely known. And after his books. And I sort of viewed him, as I said, somebody from an older generation. I liked him quite a bit, and I really enjoyed his kind of, as he would put it, "Damn the torpedoes, full speed ahead," approach to almost everything. And a little bit later, I organized the theoretical physics seminar at Caltech.

And he would always come to that. And he was somehow very competitive. And he would have this thing going of, like, "Which of us can figure out the fatal flaw in this talk fastest?" And I'm afraid there was one terrible time when, I think, I had figured out some issue, and I kind of look over at Feynman, and he could tell that I'd figured something out because I'm asking these questions that are going in some direction. And he sort of realizes, "Oh, yes, you figured it out. OK. You get this one."

Actually, that turned into the paper that I wrote with David Politzer about what happens to the effective potential when there are Higgs particles of different masses and things like this. And I was like, "The effective potential when there are fermions goes negative for large values of the field. And so, if there are enough heavy fermions, the effective potential will sort of have this instability where the universe will kind of run down into not being the ordinary vacuum." And so, I kind of realized that. And then, David and I ended up writing a paper about that. And that was the silver lining of what was otherwise a rather unfortunate sort of personal situation of the person who I'd invited for this talk being sort of put on the spot of this kind of potentially fatal flaw type thing. And so, I felt bad about that afterwards. But that was all Feynman's fault. And I have to tell you about the problems of being around Dick Feynman. I remember, this must've been 1981, I was organizing a series of talks given by people who had worked on previous systems for doing mathematics by computer. I had started building this thing that was called SMP, which was my first large software system.
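
Regarding the effective-potential instability mentioned a little earlier in this answer: in one standard schematic one-loop (Coleman-Weinberg) form, which is textbook convention rather than the specific calculation in that paper, a fermion with field-dependent mass m_f(\phi) and n_f degrees of freedom contributes with an overall minus sign,

    V_{\text{eff}}(\phi) \;\supset\; -\,\frac{n_f}{64\pi^2}\, m_f(\phi)^4 \left[\ln\frac{m_f(\phi)^2}{\mu^2} - \frac{3}{2}\right],

so enough sufficiently heavy fermions can drive the effective potential below the ordinary vacuum at large field values, which is the instability being described.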

And so, I figured I should do my homework, and get to know what everybody has done before, and invite people to come and talk about these things. And there was this one young computer science professor from Berkeley who comes, and I think he'd kind of made the mistake of flying down from Berkeley that morning, and he was really tired. And he gives this talk. And he's talking about various things, talking about the name of the system they'd built and the name's a pun, and he's asking the audience if they can see the pun. And I happened to be sitting next to Dick Feynman. And at this point, Dick Feynman stands up, and he says, "If this is what computer science is about, it's all horse shit." And it was one of these ugly moments, and I'm afraid the person who gave that talk, I think, has been kind of a troll of many things we've done for about 38 years. And I think it can all be sort of traced back to that. Be careful when you sit next to Dick Feynman in a talk, because it's a liability, so to speak.

But yes, I got to know Dick Feynman reasonably well, I would say, in those years. And actually, subsequently--and again, I'm skipping around too much--we were both consultants at a company called Thinking Machines Corporation, which is where his son Carl was working. And we tried to time our visits there to coincide. The two notable things that happened there were, with respect to Dick Feynman, he was always very much in the, "Let's hide away and do physics," mindset. And actually, I was just struck recently because in our physics project, the nonlinear sigma model just reared its head. And a picture got taken of me and Dick Feynman at some event. And I remember that at that event, we were talking about the nonlinear sigma model. And I think that moment was sort of the last time I had thought about the nonlinear sigma model until probably a month ago. But I'm jumping around too much.

Zierler:

What about Carver Mead? Did you work with Carver at all?

Wolfram:

I certainly met Carver. I did not know him well. And I met people like John Hopfield, who was at that time working on neural nets. And I started doing simulations about neural nets back in 1980 or so. Actually, I could never quite figure out how he'd gotten the results he'd gotten based on my computer simulations. But I never really did much with them at the time.

Zierler:

When did you meet Bill Press?

Wolfram:

Much later.

Zierler:

Not at Caltech.

Wolfram:

No, he wasn't around Caltech at that time. I don't think I met him--when did I meet him? Maybe when I was consulting at Los Alamos. That must've been 1982, '83. I'm guessing I met him there.

Zierler:

So to set the stage intellectually, you talk about these five papers that were sort of part of your portfolio that you wanted to continue. What were the basic streams of thought you were working on at that point?

Wolfram:

I was working on QCD, consequences of QCD. I was working on cosmology, kind of the interface between particle physics and cosmology. I was still very interested in kind of early universe structure formation, but I never really wrote papers about that back at that time. I was interested in sort of the structure of quantum field theory, and I was interested in questions about the series of Feynman diagrams and what the whole sum of that series might look like. I'm sure I have a copy of what I wrote, but those must've been the main things. And there were some other things I was doing. I recently found an unfinished paper I wrote about neutrino background radiation from 1976, and that led to a very interesting moment.

And this must've been 1982. I thought that I had figured out a possible way to detect reactor neutrinos remotely from orbit. And in 1982, it would've been a big deal to detect reactor neutrinos from orbit because that was a time when everybody had their nuclear submarines hidden. And if you could detect neutrinos from orbit, you could see where all the nuclear submarines were. So it was kind of an interesting thing for me. It's like, "What on earth am I going to do with this information?" And I was actually kind of really relieved when I figured out that the method that I'd come up with wasn't going to work. But otherwise, it was like, "What am I going to do with this information?" So I didn't answer that question at that time. I'm not sure how I would be able to answer it today either. So that was another kind of sideshow thing that I was doing.

Zierler:

Did you feel any pressure to narrow your interests for the purpose of completing a thesis? And I should say completing a thesis in record time.

Wolfram:

Well, my original plan for my thesis was to put together the various papers that I'd written, which at that time, I think, ran to, like, 800 pages. And so, the people on my thesis committee were like, "Don't do that." So my compromise was that I'd put together the short papers that I'd written by that time. And so, it was around 100 pages or so. But it wasn't really a big kind of pressure. I was just working on physics projects. I didn't really have an advisor as such. And Dick Feynman gave me these whole speeches about, "You need to pick an advisor such that nobody will ever say, 'You were a student of X.'" And definitely, he said, "Don't pick me." He said, "I've never had a successful student." [laugh] Which I don't think is quite fair. And he told me these many, many stories about his time with students. He told me a thing around the time I got my PhD; he said he wanted to pass on to me a piece of wisdom that he'd gotten from Hans Bethe.

And the piece of wisdom was, a thesis is work done by the advisor under adverse conditions. And he said, the story was, "Student comes in, says, 'I want to talk to you about this thing, and here's what I've got.' And you have to remember what was going on in that project. And then, you suggest things, the student goes away, and then you come back a few weeks later, and you've completely forgotten what you said. And the student has not managed to do what you said they should do. And then, you're back redoing this thing." That's a bit of a cynical and unfair view, perhaps. I do think about that in my day job. I'm responsible for overseeing a great many projects, and they tend to have these sorts of project review meetings and so on. And at least my one advantage is that, these days, there are livestream recordings of many of those meetings.

So unlike the Dick Feynman case of, "What did we talk about two weeks ago?" type thing, at least in my environment, that's a bit more organized. But he said the student who'd been very successful with him was Al Hibbs, with whom he subsequently wrote the Feynman and Hibbs book about path integrals. But he said Al Hibbs had come to him and said, "I want to work on studying the transfer of energy from the wind to waterways." And Dick Feynman said, "I don't know anything about that, so I'm happy to be your advisor." And apparently, he had a good time with that. And Al Hibbs went on to a long and distinguished career at JPL, where he was a spokesperson for a bunch of deep space probes and things. But in the end, a person who I got to know and wrote a paper or two with, and who was friendly with Dick Feynman, was Rick Field, who was perhaps the most junior person in that group at that time.

And so, Feynman was like, "You should have him as your advisor. It'll be good for him." So I did. So that's my genealogy, so to speak. And if I've been of any benefit to Rick Field, it'll be a good thing. He's a fine fellow. And I may not be the most average student. So I guess I got to know almost all the people in that group. Who else was in that group? Steve Frautschi, who's still around, and Fred Zachariasen, who I didn't know very well. At that time, he was largely off doing defense consulting. And I think he had become a bit cynical about the physics community. Or at least that was my impression.

Zierler:

What about Peter Goldreich?

Wolfram:

I did know him. I liked him a lot. And he struck me as being one of these people who did physics because it was fun. Which is something that I have a lot of respect for. And I didn't really know him much in terms of his detailed physics work. I think he was on my thesis committee. Another person I knew a bit at that time was Kip Thorne. And I knew Steve Koonin, but not well; I didn't really have great interactions with him. And I'm afraid he was another person who was the notable recipient of a Dick Feynman kind of seminar attack. I remember Steve Koonin giving this talk about collisions of nuclei. And Dick Feynman was there, and he says--and I can't imitate the Brooklyn accent--"So what you're really saying is, if you throw two cream puffs at each other, they go splat?" [laugh] Which was not a bad summary, although I think it was a little unfair in the characterization. And I guess I knew Ron Drever a little bit, who was involved with Kip Thorne in the early gravitational wave stuff.

Caltech was a sort of strangely insular place in the sense that the fourth floor of the Lauritsen Lab was the particle physics place. And the gravitation place was in a different building. And there was very little kind of direct interaction. And when it came to things like mathematicians, I'm not sure I ever saw a mathematician when I worked at Caltech. Oh, no, I remember once, I went to the applied math building because I was borrowing a liquid crystal display computer projector, which was very early. They had gotten it from Hughes Aircraft, where it had been developed, and there was this one giant thing that was a computer projector, which I was able to use back in 1981 to do what I do all the time these days, which is live demos of computer software. But that was a not-seen-before type thing back in 1981.

Zierler:

When did Caltech decide to offer you a faculty position? And did they entirely skip a post-doc tryout period?

Wolfram:

It must've been sometime in the earlier part of 1979 that I was kind of like, "I've written a bunch of papers. I should move on to the next thing." And I think maybe CERN had offered me a job and was recruiting me, and maybe a couple of other places as well. And then, the first question was, did I want a teaching, faculty job like an assistant professor? Actually, another person who was there at that time who I still know well is George Zweig, and he was very much of the mind that, "You should take the teaching job." And I was like, "I don't want to teach. I'm not interested in teaching." Things change. In my later years perhaps, I find it fun at least as a hobby to do teaching. But at that time, I thought it didn't seem like a useful thing, and I just wanted to do research. And at that time, they had kind of a research track that I think had been created sort of in the Linus Pauling orbit, for research people who were doing things in that kind of space.

And so, they had this thing called research associates, which were kind of research faculty positions. I think they renamed them senior research associates just because otherwise, they sounded too junior. But that was sort of a permanent position, though not tenured in the academic, AAUP sense. And I said, "That sounds like a pretty good thing to me. Sure." And so, that's the track that I ended up being on. Geoffrey Fox, I knew pretty well, and we wrote several papers together. Most notably, these things that I would call the three animal variables, which came to be called the Fox-Wolfram variables. And then, there were just a whole collection of people. And it was not really one of these, "Oh, you should apply for this job," type of deals. It was more like, "OK, you're doing all this stuff. We want to have you do stuff. What's the right structure?" So that's how I ended up doing that.

Zierler:

And who was driving that? Was it still Gell-Mann?

Wolfram:

I think it was. There were a lot of people in that picture because I had worked with many of the people in that theoretical physics group. The person who was the chairman of the physics department was a person called Robbie Vogt, who ended up not quite being my favorite person, but I think he was sort of the administrative person of all that. I'm not sure that I have a lot of knowledge of what the internal politics of that were. Maybe somewhere in some deep, dark archive at Caltech, it's findable. It's not a piece of personal history that I've particularly probed.

Zierler:

As you said before, you're kind of immune to place. It doesn't really matter if you're in Britain or the United States. You're not sort of aware of these things. I wonder, though, when you accept the offer to join the faculty at Caltech if it starts to dawn on you that you'll be making a long-term life for yourself in the States.

Wolfram:

Yeah, I considered going back to England at various times. I always liked the British countryside. I wasn't so keen on the people. And I felt that at the time, I always used to say, "England is a good place to be old in, but not a good place to be young in." Because it's like, you would be running into these old, distinguished professors, and people would treat them with great esteem, even though you thought what they were talking about was complete nonsense. And I've thought about that characterization in subsequent years because, of course, England has changed, become much more Americanized in many ways. And I don't think it's a good place to be old in anymore. So unfortunately, just waiting wasn't really a strategy. I didn't really have a super long-term plan. So there's a fairly detailed sequence of things that happened.

So in, I think, November 1979, I finished my PhD. And I then, about a week later, went off to CERN just to visit. And I was kind of in my, "OK, now let me plan," type mode. And at that time, I'd been using a bunch of these computer systems for doing mathematical calculations for physics, and that was the moment at which I had kind of pushed on people with existing systems and like, "This could be done much better. You should do it. Etc., etc., etc." And the young ones were like, "There'll never be anything done better than what we've already done." And the older ones were like, "Ah, we're too old and tired. We're not going to do this again." So I was kind of backed into, "OK, well, if I want this to happen, I've got to do it myself." So in November of 1979, when I was visiting CERN, I kind of started designing this thing that I called SMP.

Originally, it had other names. But eventually, it was called SMP, the Symbolic Manipulation Program. That was a system for doing high-level computation, mathematical computation, and so on. And so, I get back to Caltech probably in December of that year. And I started building SMP. I continued to write some physics papers, but I started building SMP. And that became sort of a big thing for the next year or so. And then, by the beginning of 1981 or the end of 1980, I had a first version of SMP up and running, and that was a fairly complex project because it involved a whole bunch of other people, and it involved big computers, and it involved building a big software system, which was definitely by far less of a known art at that time than it is today. And so, what happened after that was a series of things. There was this MacArthur Foundation thing, which was interesting and fun. That was in probably June of 1981. And that kind of, I would say, did a couple of things. It probably increased my sort of general visibility a bit and, "Youngest winner of cool, weird award that hasn't been given before," type thing.

Zierler:

And was the MacArthur Fellowship for SMP?

Wolfram:

Well, it's not really for anything.

Zierler:

No, but I mean in terms of how you got on their radar, what they were recognizing.

Wolfram:

I think Murray Gell-Mann was actually on their board at that time, and maybe was until he died. So I suspect that's how I got noticed by those folk. And several other people from Caltech were involved with the MacArthur Foundation. But I think the pitch probably, for me, was doing something between physics and computing in some strange place. And, "At least he's young," type thing. I always feel it's more impressive when these kinds of awards are given to young people because it's much higher risk. Sometimes the risk is as much on the part of the person the award is being given to as it is on the part of the awarding organization, which may be sending its money somewhere that it is wasted, so to speak.

But I remember Dick Feynman saying to me, actually, when that award came through, he said, "Just be careful. Don't let other people's expectations for you get the better of you," type thing. And I have to say, it was interesting because I was like, "Eh, don't worry about that." Dick Feynman was famous for eventually writing a book called Don't Care What Other People Think type thing. [What Do You Care What Other People Think?] But that was very much my point of view. And it was something where I guess that I had a pretty single-minded focus at that time of, "I'm going to do this physics stuff, I'm going to do this computing stuff." The focus was on the content of what I was doing and not really on the meta of, "How does this position me in the world?" so to speak. And I think it was an interesting experience in media relations, let's say, when that MacArthur thing came through.

I remember there was one guy who came to interview me for some television program, and we have a perfectly OK chat, although it's clear that, for him, I'm kind of a museum specimen. And at the end, he says, "Well, that was kind of interesting. The last person I interviewed just lost a ton of money at Las Vegas." On the grounds that I'd just gotten lots of money from this fellowship. So I thought that was rather charming. I think that was when I was trying to figure out what to do with SMP. Because we had this big software system. It was clear it was useful to a lot of people other than just me and the very local people that were using it. And so, I was like, "OK, so we have this software. I guess we have to distribute it somehow." And so, the university had a technology transfer office, which was this one older chap, who worked part-time out of a little office in the administration building.

And so, I went to see him, and I said, "What can be done with this?" And he was like, "Well, you should talk to this person, you should talk to that person." And so, I did a few meetings, and they were really silly. So I go back to him, and I say, "Look, these meetings were silly. This is not going to go anywhere. How does this work? I don't know anything. I'm just a kid who happens to be a faculty member at your university. Tell me how this works." And he said, "Well, honestly, we don't really get to do very much because mostly faculty members just go off and start their own companies. And we don't really hear anything about it." And I said, "Well, can I do that?" And so, right then and there, he picks out the bylaws of the university, and he's kind of flipping through the pages. And he's like, "Well, this is the section about copyrightable material. Software is copyrightable material. It says copyrightable material is owned by its authors. This is all good."

So I said, "Great. Write me a letter that says that, and I won't come back and bother you anymore." He did. And so, I ended up starting to set up this company, and I made many mistakes in setting up that company. Actually, the guy who I kind of brought in as the CEO of that company was a Caltech alumnus from I guess applied math, who had worked at Hughes Aircraft for a bunch of years. He was a friend of Barry Barish's, actually. And at the time, I was like, "I'm just a 21-year-old kid, and I don't know anything about business, and I just want to do science. And so, not for me to CEO this company." And so, we got that company started. I was a little disappointed that I ended up raising a bunch of the money for the company, which I thought wasn't my job. But then, there was this big meltdown about our project, and Caltech, and intellectual property rights, and so on. And many, many, many years later, I learned pretty much what the full story was. Which was a very bizarre story.

And what happened was, I think in 1929, very long before any of this was going on, Arnold Beckman had been a post-doc at Caltech. And he invented the pH meter. And he took the point of view that he owns the pH meter fair and square, and he's going to go off and start a company with it. So he starts a company called Beckman Instruments, and it becomes a very big and successful company. And by the time I was at Caltech, he was the chairman of the board, biggest donor, all those kinds of things. It's a good story of the interaction between, "Go off and do your thing," and, "Years later, you'll come back and thank the university with buildings and things like that." But there was another thing going on at the time. A chap called Lee Hood, who was then the chairman of the biology department, had invented a gene sequencing methodology. And as Lee Hood claimed to me years later, he claimed, like, a dozen times, he'd talked to Arnold Beckman and said, "Beckman Instruments should care about gene sequencing," and Arnold Beckman had said, "No, we don't care."

Or so Lee Hood said, though I can't validate this on my own authority. And then, apparently, at some moment, Lee Hood goes off and starts Applied Biosystems, and it's becoming fairly successful. And Arnold Beckman freaks out. And I think the phrase was, "Don't let any more intellectual property walk off campus." So at that time, the president of Caltech was a person called Murph Goldberger. Actually, that was an interesting story. When I had been considering being a graduate student at Princeton, Murph Goldberger was one of the people I'd interacted with. But it was a little strange because I'd sort of said, "I'm choosing between Princeton and Caltech," or something. I forget exactly what he'd said, but in retrospect, it was like, "OK, he'd already taken the job." He was the president of Caltech at that time, and his provost was a chap called Jack Roberts, who was a chemist who had not been a big enthusiast of Murph Goldberger's. In fact, he really wanted that job, and he had been, I think, pretty upset when Murph got the job.

And I think, perhaps, it was an epic battle between the chemists and the physicists (I doubt it's still going on) about sort of who's going to be in charge. Kind of the Linus Pauling, Caltech achievement group or the, I don't know, Feynman, Gell-Mann achievement group, and so on. In any case, we were unfortunately caught in the crossfire of this situation. And Jack Roberts kind of made it his personal mission to somehow prevent this whole thing of intellectual property walking off campus, but we were really the only piece of intellectual property at the time that was of any great note. And so, we were very much in the middle of that picture. And it was a mess. And it featured real-world corruption stuff and the whole nine yards. The, "Well, if you give me stock in the company, then maybe we'll let you do this," kind of thing. So it was kind of ugly. And I was a young, naive-in-the-ways-of-the-world kind of person at that time, although perhaps not as naive as you might've completely assumed.

So I remember a bunch of meetings with Murph Goldberger about this whole thing in which I was like, "Look"–oh, that's right. The main thing that happened was, they wanted to kind of reverse their policy and say, "Well, actually, software is owned by the university." And so, I said to Murph, "There's this field of computer science, and the people who do it do software. And if you take that position, you will not be able to hire computer science faculty." And I just happened to visit Caltech a little while ago and was talking to, I guess, the person who's now the chairman of that department, and apparently that scar lasted a very long time. But in any case, at that time, the computer that we'd developed a bunch of stuff on was a Department of Energy-funded computer, and at that time, there was this sort of mandate for government-funded research to be able to transition to the private sector. But Caltech was sort of like, "It's funded by the DOE. It has to stay with the university."

So I contacted the DOE and said, "Is that really what you want?" And they said, "Well, no, actually. We've got this mandate to transition things to the private sector, so good for you," type thing. So in any case, in the end, it got to the point where it was like, "OK, fine. Whatever Caltech might own, we'll license to the company for a dollar." So it's like, "OK, that's fine." And then, Jack Roberts had this bright idea of saying, "But if the university has this license agreement with this company, and you work at the university, and you're also involved with the company, you also have an equity stake in the company, that creates a conflict of interest. So we can't have that happen." So I said, "Well, gee, there's a very simple solution to that. I quit."

So it was kind of an interesting thing because it was the middle of the semester. I think I was even teaching some sort of graduate seminar type thing. And so, I go over to the administration building, and in one of those kind of life event type things, I go, and I ask the personnel department, HR department, it would be called these days, "How much notice do I have to give?" And they said, "We don't really know."

Zierler:

"No one's ever done that before."

Wolfram:

[laugh] But I think the one consequence of that event was, I think it might be the last time, whenever it was, 1983 or something–I'm not a big one for vacations. But I sort of decided, "OK, I'm going to give them two weeks' notice, and I'm going to take two weeks off." I'm not sure how well I succeeded in doing that, but it might be the last time I've taken a vacation in my life. And I spent the two weeks learning to fly small airplanes. So that was at least an interesting life skill that I haven't made much use of in the subsequent years. But it happened to be the time of the air traffic controllers' strike, and it was an interesting time to learn to fly planes around the LA area with lots of air traffic controllers in training. And so, I left Caltech at that time, and there were a whole bunch of universities I had interacted with. And I became, for a brief while, probably the world's authority on intellectual property arrangements at universities. And it was really kind of interesting.

Zierler:

Because you didn't want this problem to follow you.

Wolfram:

I wasn't going to have this problem ever again. It was definitely a not-to-be-repeated problem. And so, I remember at Harvard, for example, talking to those guys, and talking to some lawyer there, I think, and looking at their policy. And I had become savvy enough by that time to be able to say, after reading a bunch of university policies, "Is that the Wally Gilbert patch? Is that the place where you change the policy to do something different that probably wasn't good for either side in the end?" And they were like, "Well, yes." But there were a bunch of different places. But the Institute for Advanced Study, there was a guy called Harry Woolf, who had been a historian of science, who was at that time the director of the Institute. And he did a very nice job, I would say, as a recruiter, so to speak.

And I think the chairman of their board at that time was a person called Jim Wolfensohn, and somehow, the thing from them was, "Look, we had the computer with von Neumann. We gave it away. So it's not for us to make any claim to intellectual property after that." So I just said, "Look, I'm going to do basic science at the Institute, and I'll probably do some other things. And that's kind of not on your dime and not the work I'm doing there." And that, I think, worked just great. A big piece of this story that I've been leaving out is that, partly as that whole SMP company stuff was brewing at Caltech and making kind of a bunch of noise, I was like, "I'm going to do some fun stuff."

So I'd been interested in this problem of how complex behavior can arise from simple systems for a long time. It was sort of a merger of things about self-gravitating gases, things like Ising models, things like neural networks; the intersection point of all those was the simple model that I made up, which turned out to be sort of a simplified version of what people like von Neumann had made up, these things called cellular automata. And so, back in 1981 maybe, I started studying cellular automata, and doing these computer experiments on them, and started discovering all these things that I was absolutely sure I would not discover.

But I was sufficiently proficient with doing the computer experiments. They were very easy to do, and I could just run them on the computers that I had access to. And so, it's like, "Oh, I just discovered something that I really didn't expect." And that turned into a lot of work that I've done about sort of the computational universe and the origins of complexity. But that was stuff that I did just before I left Caltech. And then, developed when I was at the Institute in Princeton.
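[Illustrative aside: to give a feel for the kind of computer experiment being described, here is a minimal sketch in present-day Wolfram Language. It uses the built-in CellularAutomaton function and is not a reconstruction of the original 1981 code.

    (* evolve the elementary rule-30 cellular automaton for 100 steps,
       starting from a single black cell on a white background *)
    history = CellularAutomaton[30, {{1}, 0}, 100];

    (* plot the evolution; even this very simple rule produces a highly
       complex, seemingly random pattern *)
    ArrayPlot[history]

Rule 30 is just one of the 256 elementary rules; the point of such experiments is to enumerate many rules and see what behavior turns up.]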

Zierler:

Did you see yourself at this point sort of self-consciously moving away from physics?

Wolfram:

Yeah, I think that I was interested in these kinds of more foundational questions. And I had sort of thought of particle physics as kind of a window into foundational kinds of things, and I kind of realized that the stuff I was then doing was sort of underneath all possible models. And that seemed like an attractive thing. And I only realized a number of things in retrospect. For example, this idea of, "You can make an artificial universe and study what it does," which is not really a very physics-y idea. The physics-y idea is, the universe is what it is, and you drill down, and you try and see what's underneath it as a reverse engineering kind of thing from the universe as we know it.

So that notion of, "Let's just build an artificial universe and see what it does," I realized many years later that the fact that I thought to do that was a reflection of the fact that I'd just spent a bunch of time designing a computer language, and that's exactly what you do when you build a computer language. You start from nothing, and you sort of design this whole world, and then you see what it does. Now, as it turns out, the details of what I did with cellular automata were very different from that computer language. As it turns out, in kind of personal stupidity and irony, so to speak, the models of physics that we have now are actually very close to the core operation of that first and all subsequent computational languages that I've built. And it's kind of an irony that the idea of transformation rules for symbolic expressions, which is sort of the core idea of SMP, the core idea of Mathematica and the Wolfram Language, that core idea is now also on some level the core idea of our model of physics.
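[Illustrative aside: to make "transformation rules for symbolic expressions" concrete, here is a minimal sketch in Wolfram Language. The symbols and rules are made up for illustration and are not taken from SMP itself.

    (* pattern-based rewrite rules for a symbolic log-like function *)
    rules = {log[x_*y_] :> log[x] + log[y], log[x_^n_] :> n*log[x]};

    (* repeatedly apply the rules until the expression stops changing *)
    log[a^2*b] //. rules
    (* gives 2*log[a] + log[b] *)

Repeatedly rewriting symbolic expressions according to rules like these is the shared core idea being described here for SMP, Mathematica, the Wolfram Language, and the later physics models.]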

But it took me a solid 40 years to realize that that was the right way to think about it. And at the time, if you'd asked me, "Is this a possible model for physics?" I would've said, "Absolutely not. Nothing to do with it." At the time, I did think a whole bunch about AI, and I was thinking about this kind of model as a possible underpinning for AI. And that's something that's sort of developed in work that I've done, but probably hasn't developed as much as it eventually will. But I think it was sort of ironic that when I was working on SMP, some of the things that were sort of at the high end of the computational questions there were things to do with recursive evaluation orders. And I was also, at that time, thinking about gauge field theories. And only a year and a half ago did I realize, "It's the exact same thing. The formal structure of this recursive evaluation orders question is the exact same thing as the choices of gauge in quantum field theory, and in general relativity, and so on."

And by the way, seeing that correspondence now looks like it's going to be a very fertile kind of direction for understanding distributed computing, and it's also pretty interesting to be able to see from the computations side and be able to use that language to understand physics as well. So again, I feel a bit embarrassed, but it's sort of an interesting thing in terms of intellectual development that I was actually thinking about two things, which end up being incredibly closely related, at the same time, and I absolutely couldn't see the relation between them.

Zierler:

What was the gap in your knowledge at that point that did not allow you to make this connection?

Wolfram:

I had to build the whole structure that turned into my New Kind of Science book. Had to build the whole structure of thinking about how space and time work in computational terms. Had to build a bunch of the structure of how quantum mechanics works in computational terms. In the end, the conceptual tower is pretty tall. I would've had no idea at the time that you could think about space and time in terms of discrete computational elements and so on. So it was far away. It's more irony than embarrassment, I would say.

Zierler:

Where are you publishing in these early years as you were thinking about these things? What would your audience have been?

Wolfram:

So I was quite lucky in the sense that I was sufficiently kind of notable in physics and so on that I could publish in a bunch of places. One of the very first articles, I think I published in Nature magazine. And John Maddox, I think, was the editor at that time. And he sort of took a personal interest in it, and wrote some pieces about it, and so on. And it was the cover story type thing. So that was an early piece about cellular automata. Then, my first big article about cellular automata, I was actually going to publish it in Annals of Physics. Because I'd published a couple of papers there, and Herman Feshbach, who I never met, had been the editor there and had been very reasonable in publishing these other papers, which were about quantum field theory.

And so, I sent it to him, and he said, "You don't want to publish this here. You should publish this in some place where more people would see it." So I ended up sending it to Reviews of Modern Physics, where David Pines was the editor, and he ended up giving it to Elliott Lieb, actually, to review. And to his credit, he said, "Looks like a pretty interesting thing." And so, it was published there. And that was pretty good visibility. And then, subsequently, I published a couple of things in Journal of Mathematical Physics. And I would say that some of the audience was mathematicians. Because one of the things that happened at the Institute was, it was a small enough place that I got to know a bunch of mathematicians. And I would say the one I got to know best is Jack Milnor, who I liked very much, very impressive person. I always enjoyed the fact that I would be studying something, and I'd ask him, "Jack, what do you know about this thing?" And he said, "I don't really know anything about that."

So then, I would go look in the literature, and I would discover that the key paper in that area had been written by Jack. And I'd show it to him, and he'd say, "Oh, yeah, I remember that now." So I would refer to him as the person who'd forgotten more mathematics than almost anybody else would ever know. And I interacted with Bill Thurston, also, at that time, and a bit with Enrico Bombieri, number theorist who's, I think, still at the Institute. And I ran into a bunch of the other mathematicians there, and they were very interested in this stuff and kind of felt that the mathematics they have should be able to crack it in sort of the same way as it had been able to crack kind of the Mitchell Feigenbaum-style chaos theory type things. And they were very frustrated that they couldn't crack it.

And years later, I realized that basically what happened is this phenomenon of computational irreducibility that I was talking about at the very beginning here, they'd hit it. And it turns out, in these areas, you hit computational irreducibility very early. And so, you can't really say much with traditional mathematical methods. And in fact, one of the sort of big meta discoveries of our physics project is that even though there is computational irreducibility, there are some slices of computational reducibility that allow you to say things about the universe, even when there is lots of computational irreducibility underneath. And the really great thing is that the two great slices of computational reducibility are precisely general relativity and quantum mechanics. But I didn't know any of that back then.
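[Illustrative aside: as a rough picture of what computational irreducibility means in practice, consider the rule-30 cellular automaton again. No shortcut formula is known for, say, the value of its center cell after t steps; as far as anyone knows, you have to run the evolution step by step. A minimal sketch in Wolfram Language:

    (* run rule 30 for a number of steps and read off the center column *)
    steps = 200;
    history = CellularAutomaton[30, {{1}, 0}, steps];
    centerColumn = history[[All, steps + 1]]

    (* the resulting 0/1 sequence shows no obvious regularity; no known
       method jumps straight to a distant step without simulating all
       the steps in between *)

Computational reducibility, by contrast, is the existence of such shortcuts; the claim above is that general relativity and quantum mechanics correspond to two such reducible slices.]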

At that time, it was like, "You should be able to get somewhere with all these fancy math methods that you know." It was very frustrating that they couldn't. And so, I did think about that issue a bit at that time, although not in those terms, not as clearly as that. And I then wrote a paper about undecidability and intractability in theoretical physics, which I published in Phys Rev Letters. And then, I wrote another one about randomness in physical systems. Oh, and I also wrote an article for Scientific American at that time, which was kind of a more popular account of some of these kinds of stories. But that was 1984 or so. But the one about randomness in physical systems was kind of interesting because I wrote it, I sent it into Phys Rev Letters, then I was visiting Los Alamos, and there was a person at Los Alamos who had been doing some chaos theory work, and who was, I would say, dead set against this paper. He was like, "This paper cannot be published under any circumstances." And I don't know quite why he was so dead set against it.

It was a person I knew decently well, actually. I think he thought that the paper kind of exploded the idea that chaos theory was sort of the root of all randomness that you see in the world. Which I'm afraid deserved to be exploded because it isn't true. But I learned an interesting thing there from a person called Bob Kraichnan, who worked on lots of stuff about magnetohydrodynamic turbulence. I think he'd been Einstein's last post-doc. He was a sort of semi-independent person who worked on government contracts and hung out at Los Alamos. And so, I'm there, and I get this lunch invitation from Bob Kraichnan saying, "Come to lunch. I want to talk about this paper of yours." So, "Fine." So he invited this other chap, who was this sort of naysayer. And Bob starts off by saying, "I'm the person who's got this paper to review for Phys Rev Letters. So let me hear both sides of this argument." And so, this lunch, I would say, degenerated horribly. Not on my side, I should say.

And eventually, this other person kind of stormed off. And so, Bob Kraichnan turns to me and says, "OK, I'll send in the acceptance this afternoon." But actually, I learned one thing from him at that time, which was, he says, "I don't believe in anonymous peer reviewing. I think it's kind of cowardly. I insist on kind of signing my name on everything I review." And I said, "That's a good idea." So I started doing that, too. And journals started saying, "Oh my gosh. If you're going to sign your name, we can't have you review." And it's like, "OK, I don't quite know why that makes sense." But it was kind of interesting, I thought.

Well, my very last paper that I published in physics was in 1986, which has an unfortunate story around it. But that was a paper that was going to be published in Phys Rev Letters, and then it kind of got blocked by a piece of sort of academic rivalry. And the final person involved with that was Leo Kadanoff. And I said to Leo, "If you don't publish this paper, I'm not going to publish any papers in academic journals ever again." That was 1986. The record stands. So it's kind of interesting. But it felt corrupt, and I thought, "I really don't want to be part of this story." As it turns out, the much longer and better version of that paper, I published in Joel Lebowitz's Journal of Statistical Physics, which I happened to be an editor of at that time, so it was sort of an easier process. But that was one of my pieces of learning, let's say, from my time in physics. I mean, I'm afraid that the main thing I learned about the process of publishing papers was, "Well, it's always good to be an editor of the journal you're publishing in."

But anything that is actually original is usually very hard to publish. Unless there's somebody who has some vision involved in the picture. If it's just part of the standard stream, it's like there's just a pure natural selection theory of publication, and it's easier to publish a boring paper. It's hard to publish an original paper. And I was quite fortunate that a bunch of the original papers that I wrote had, for one reason or another, a path to being published that involved people with some kind of vision and was successful for that reason. And I have to say, I was finally publishing, in a sense, this big, probably 800-page thing about our current physics project. And so, I thought, "I'll offer it to Reviews of Modern Physics." I said, "Look, it's probably not a fit, and it probably doesn't make any sense, but just for old time's sake, because I appreciate the fact that Reviews of Modern Physics published this early paper of mine that led in this direction"–it made no sense, so we didn't do it. But I had a nice interaction with those guys about it.

By 1986, I ended up spending a bunch of time thinking about kind of building this science of complexity, this complex systems research, and I did a lot of attempting to create a research center and so on around that. I started the first journal in that field, which is called Complex Systems, and which still exists today; the editor of it persuaded me to publish a bunch of these physics papers in it. I'd never published in it before, actually. I'd had this journal for 30-something years. But he's like, "I want some papers that are going to get cited. So please publish this in this journal." So I said, "Fine." So I spent a bunch of time sort of trying to launch this field of complex systems research and talking to a lot of universities about, "Is this a place I can launch this research center?" It was a very interesting kind of Rorschach test for universities. Some would say, "Oh, if you can raise government funding, then you can do whatever you want."

Some would say, "Well, we're interested in hiring you, but we don't want to hire any of these other people who'd come with you, so to speak." And a variety of different responses. And in the end, the University of Illinois was the one that kind of had the best and most convincing deal. And to their credit, they lived up to their part of the deal. Ironically, some of the money that went into that came from none other than Arnold Beckman, actually. Although, I hadn't made the connection with the other part of the story that I was telling before by the time that was happening. So I think that connection was only made kind of many years after the fact, so to speak. So I went there, started this research center, was sort of, as a 26-year-old or something, kind of an academic poobah or something, and it wasn't a great fit. The university did well by me. I think the people I brought in did well. I think that I'm probably not a very good employee.

Zierler:

Where is SMP in this? What's its status at this point?

Wolfram:

So the company that I had started was originally called Computer Mathematics Corporation. It had merged into a company that was doing kind of early AI work called Inference Corporation. I guess the merged company was called Inference Corporation. I had sort of dissociated myself from it in the sense that I wasn't really spending any time on it, I didn't have any kind of management role in it anymore. I was still a substantial stockholder of the whole thing. I had interacted a bunch with the venture capitalists we had. But I was really pretty much checked out of that story. In fact, the chap who I'd originally sort of brought in to run the thing turned out to be very good at raising money, not very good at making money. He'd raised round after round of financing for the company. Actually, many years later, this was sometime in the 1990s, I get this big envelope from the company, and I was figuring, "OK, finally, white flag. It's a bankruptcy filing."

But no, it was an IPO prospectus. I think they'd gone through most of the letters of the alphabet in rounds of financing, but they'd finally made it through to an IPO. And it wasn't a particularly distinguished IPO, but it gets the checkmark of, "Company went public in the end." And then, it was subsequently gobbled up by some chain of bigger companies. But by that time, the SMP product line had been kind of a cash cow, and the other things that they got into were a lot of stuff that was kind of, actually, pretty much what people use a bunch of machine learning stuff for today, although it was done with kind of rules-based AI, and they were kind of doing fraud detection for credit card companies, and they were doing kind of fault detection for NASA, and these kinds of things as sort of big services projects and so on. But I was really not involved with it at that time.

And in fact, I probably know less about the software stack that they had than I know about many other kinds of things. That software stack is probably closer to some things that I've been interested in more recently, and I should go back and look at it some more. I must have all its documentation. I never really paid a lot of attention to it at the time. So it was not really a thing in my life. I'd had kind of a hobby of doing technology and strategy consulting for tech companies and for some investment companies, and the company I'd probably been most involved with was Thinking Machines Corporation.

Also, a little bit with companies like Schlumberger that were, at that time, trying to start doing some kind of basic research kinds of things. I remember, again, as the young chap trying to give sage advice, talking to the person who was, at that time, running a large part of their operation. And at that time, Schlumberger was the world's most profitable company. It's an oil instrumentation company, and the price of oil was high, and all was good. And they had wanted to start a basic research operation. And I said--and I was just a kid, probably 23, 24 years old or something--"Look, if you're really going to do this, I think you should create an endowment for this basic research operation. Just put aside $100 million, and then when the price of oil goes down, you won't kill that operation."

And it was like, "Oh, don't worry. The price of oil will basically never go down." Well, yeah. I think that, actually, that person became a very successful leader of that company. But if they had really wanted to have a basic research operation, which may or may not have been a worthwhile thing, that wasn't a great call at that time. So yeah, I'd had this kind of hobby of doing business-related things, and I think giving advice to tech companies and their investors was probably a very good education in kind of the ways of the tech industry, so to speak. So when I came to start my current company in 1986, there were a lot of mistakes I knew not to make. I can't say I made no mistakes, but there were a lot that I knew not to make.

Zierler:

On that point, we're coming up on the four-hour mark. Let me suggest that we break here for the evening, and we pick back up next session in 1986 with this new company.

[End Session 1]

[Begin Session 2]

Zierler:

OK, this is David Zierler, Oral Historian for the American Institute of Physics. It is April 17, 2021. I'm delighted to be back with Dr. Stephen Wolfram. Stephen, it's good to see you again.

Wolfram:

Hi there.

Zierler:

We're going to go back to 1986 and talk about the intellectual origins of Mathematica. So I'd just like to say editorially, I talk to physicists all the time who explain to me what Mathematica allows them to do, even today, that they otherwise wouldn't be able to do. So with that in mind, what were your motivations? What did not exist that you wanted to create that ultimately would become Mathematica?

Wolfram:

Well, we talked last time, I think, about some of the earlier origins. But kind of the basic story was, back in the 1970s, I was interested in physics, got involved in doing particle physics. To do particle physics, you have to do all kinds of algebraic computation. Not a thing that I particularly wanted to do by hand myself. The computers could do that. I started using the research systems that people had built to do algebraic computation. I think I became the world's largest user of such things. Then, in late 1979, I kind of decided I had outgrown all such existing systems, and I kind of had this choice. Either I would persuade some of the people who had built those existing systems to build a version two, or I would do what one usually does when one wants something done and one can't define it properly for other people, just do it oneself. So that was what led me to start, in November of 1979, building SMP.

And the first version of SMP came out, I guess, in 1980. By 1981, it was pretty well-defined, and I'd started a company around it and so on. And then, I think we talked last time about the denouement with Caltech and other things about SMP, and that caused me to kind of go off and go into a different mode of what seems to have been a lifelong cycle for me between basic science and technology development, and to go into a basic science mode for a few years, working on kind of the origins of what became complexity theory and things like that. So I was, then, at the Institute for Advanced Study, and I was doing science that I wanted to do, making serious use of computers, not only SMP, but also lots of C programs and other kinds of programs that I would put together.

And I suppose the thing that was most notable for me is that I found myself spending a large part of my time just sort of gluing together pieces of software and writing fairly low-level programs to do things. And I was like, "I know there's a better way to deal with this, but it's a certain investment to build the tooling to make it so that you can sort of automate all that stuff away." And so, strangely enough, some code that I wrote in 1986 to do cellular automata, I just dusted off again for something that we're doing right now. And we'll connect it into Wolfram Language, and it'll probably be useful for something; it's easy enough to connect C code. It's C code, which doesn't quite still run, unlike our Wolfram Language code from back in 1988, which does still run. But so it goes.

So the main thing was, just lots of different pieces of software, pieces of code, things for doing graphics, things for doing sort of searches, these kinds of things, all sort of separate. And the vision was, "I can put all these things together and make some system." And the concept for me was kind of the technology platform that will do the things I need to do for the rest of my life type thing. That was kind of the concept. Now, from a practical point of view, as I think we talked about last time, by 1985 or so, I had sort of outgrown things which were easily possible at the Institute for Advanced Study, and I suppose, the smaller the place, the more intense the academic politics. And also, I think it's one of these things where, if you give people nothing to do with themselves other than research and academic politics, that's not a formula for the best possible outcomes.

Zierler:

On the question of physics at the Institute, this is a quite exciting time, the so-called string revolution in the mid-1980s and Ed Witten's excitement. Were you involved in that at all? Did you recognize that string theory was going into mathematical places?

Wolfram:

It hadn't completely happened by that time, actually. I knew Ed Witten and his wife Chiara. Chiara was actually sort of at the Institute at that time. And I had known John Schwarz, Mike Green slightly. So I had seen the wilderness years of string theory, where it looked extremely unpromising. I wasn't particularly paying attention. I had been interested in things like instantons, and algebraic topology, and Michael Atiyah's stuff, and all those kinds of things. So I had been aware of that stuff when I was in Oxford, actually, before I was even at Caltech. But the high mathematics that developed, I think, a few years later, I wasn't tuned into. The one thing that was really nice at the Institute was the fact that it was a small place meant that I didn't just interact with physicists there. And in particular, I ended up interacting with John Milnor, one of the longtime mathematicians there.

And I interacted with John Milnor, Bill Thurston, the dynamical systems-y, geometry-ish crowd. There certainly were people there, like Andy Strominger, who were definitely doing wilderness kind of work that hadn't arrived, so to speak. I guess Jeff Harvey, who'd been a student at Caltech when I was there, was at the university working with David Gross and so on. But that was, again, still a bit wilderness-y at that time. Not yet heroic. Who else was around? Lee Smolin was there, who went off to do wilder things, so to speak. And Malcolm Perry was a friend of mine there, who has been involved more in kind of black hole-ism than other things. I think at that time, he had a student named Nathan Myhrvold, who's been a friend of mine for many years. Even if Malcolm sometimes found Nathan distinctly hot to handle, so to speak.

Probably the most useful thing I perhaps ever did for Nathan was that, one day, he was a graduate student at Princeton, and I forget how it originally started, but he contacted me and said he's done some software development, and can he come over some evening to the Institute with a few of his friends to talk about the software development he's done and get advice about what to do about it? And he'd developed a window system for PCs. This was at a time when Microsoft had announced Windows, but not yet delivered it. So Nathan and his friends were kind of at a place where, "We've developed this window system. What should we do with it?" And after I listened, I said, "The best thing you could do is go and sell it to Microsoft." And in the years that followed, very impressively, Nathan advanced to eventually become CTO of Microsoft. So that was a good outcome from that, I thought.

Who else was around there? I remember Lincoln and Jennifer Chayes were graduate students. They were an interesting, colorful, leather-jacketed couple at the time. And I remember, it must've been a few years later, Jennifer Chayes was quite a theoretical physicist, and I remember talking to her about various things to do with ferrite, and pointing out that that's how all computer disks work, and so on. And it was amusing that she was like, "Oh, I only think about them as theoretical things." But then, she has had a distinguished career at Microsoft, actually hired by Nathan, I think, doing some fairly theoretical work, but less theoretical than before. That she had no idea how data would be stored using one of these things she had studied so much from a mathematical point of view was kind of charming. But yeah, there was a whole crowd of other physics types at the Institute at that time.

Zierler:

Did you interact with Freeman Dyson at all?

Wolfram:

Yeah, I saw him at lunch almost every day for years. I'm not sure that we had the best of relationships. Freeman had a tremendous habit of showing up at lunch and expounding on some new idea or another, and me saying, "But, Freeman, that can't possibly be right because etc." And rather than engaging, he would tend to just fall silent. Actually, it was interesting because just a couple of years ago, I had not seen Freeman in forever, and I saw him at some event in Connecticut put on by a chap I know, an agent person named John Brockman, who has, for better or worse, agented a lot of books for a lot of scientists. Not me though. But I saw Freeman at this event, and I ended up exchanging email with him afterwards. My previous interaction with Freeman had been in 2002, when my big book, A New Kind of Science, came out.

I think what had happened was, before the book came out, I was sending out copies to various people, and I'd offered Freeman a copy. He said, "Sure, send it." So I did. I didn't hear anything from him. And then, when the book came out, a journalist asked him his view of the book. And so, he had a lovely quote. I'm not sure I'll get the words precisely right, but it was something like, "Most people don't believe that they have grand theories of everything until they're very old and in their dotage, so to speak. This Wolfram chap is precocious at everything, including believing he has sort of a grand theory of the world." I thought it was kind of funny. I thought it was kind of stupid but kind of funny. I thought it was obnoxious, but again, my trajectory through life doesn't lead me to be that concerned about what people like Freeman have to say.

So years went by, and after I'd run into Freeman, I decided I'd ask him, "Did you actually say this?" Because I'd heard from some other people that he'd sort of denied having said it. So I sent him an email and said, "Did you actually say it?" And he said, "Well, yes. And I actually think everything you've done on computational whatever is a waste of time." So I sent him one last piece of mail that said, "You should be ashamed of yourself. You've known me since the mid-1980s. You saw me every day for several years. If you didn't think this was the right direction, you should have said something about it rather than just giving snide, snarky quotes to journalists years later." I didn't get a response, and Freeman died a little bit after that. Perhaps not the best of relationships.

But I was sort of outraged, not for myself, because I don't really care that much, but I would say on behalf of the young science crowd: to kind of be an elder statesman, so to speak, who profoundly believes something is in the wrong direction and never says anything just seems like a piece of intellectual cowardice, so to speak. So I wasn't very keen on that. So yes, I did know Freeman. And another person I knew pretty well at the Institute was Roger Dashen, who I liked a lot. I always felt that his story was a little tragic, for having been sort of hired at the Institute very young, and then being a little bit, "What am I supposed to do with myself for the rest of my life?" He had had sort of a hobby of doing oceanography research, particularly with the JASON outfit, and he really liked that. And I have to say that my sort of personal life advice to him–although I was much younger than him, I knew him fairly well–was, "Just go do this oceanography stuff. You really like it. And don't worry about doing something great and grand in particle physics, which you've kind of lost interest in."

And I was very happy a few years later when he went and took a job at UC San Diego jointly with the oceanography outfit there, although unfortunately, he died of a heart attack shortly after that. But that was an interesting dynamic because I think I was at the Institute at a time when the last people who had been hired by Robert Oppenheimer, when he was director there, were still there. He'd done a lot of high-risk hiring, and some of his best bets had, by that point, moved on. And so, it's a complicated dynamic when you tell people at a fairly young age, "OK, you're set up to just go think deep thoughts for the rest of your life." It sounds good to begin with, but that's not a thing that, I think, most people are well set up to handle. So we were talking about physicists at the Institute. I'm sure I'm forgetting some people.

Zierler:

But you were not necessarily influenced by the physicists at the Institute as you were thinking about what would ultimately become Mathematica?

Wolfram:

No. Much to the consternation of some members of the Institute, I had my own office space on the third floor of the main building that the then-director, Harry Woolf, had kind of gotten set up. Harry had had this idea of starting a school for computer science at the Institute. You had a school of natural sciences, a school of mathematics, and so on. And he had found a potential donor for that. Which frankly seemed quite sensible to me. He was pretty early to thinking about kind of having a school of computer science, so to speak. In a sense, the Institute had been infinitely early with computing with von Neumann and so on. But they dropped that one. But he'd found a potential donor and wanted to use me as an anchor operative in such a school. But I think that was not politically popular with other folks at the Institute, who basically thought of Harry as people tend to think of leaders of academic institutions, primarily fundraising operatives.

And they were like, "Harry, go raise more money, and bring it to us. Don't try and start a new thing and spend the money elsewhere." And it was kind of interesting because Harry had kind of a second life being an advisor to various investment funds. And because I was involved in the tech world at that time, I sort of encountered him independently in his life as a sort of advisor to investment firms and things like that. And he was always like, "Don't tell these other people at the Institute that I'm involved in this stuff in the outside world." And I was like, "I think it makes you more valuable to them that you're out and about and interacting with people who are doing investments and things like that." I remember he had been a historian of science, and he had a whole theory of ties. When you should wear a red tie or a blue tie, all these kinds of things.

And he kind of themed himself as an academic or advisor to financial folk by changing his tie and so on. So I thought that was a charming kind of feature. I remember there were a whole bunch of other people at the Institute, all of whom I certainly knew, Tom Banks, Herbert Neuberger, and then people who would come through as visitors. One person who became a good friend of mine was John Moussouris, who had been a student of Roger Penrose and had done some things, which he always felt had been not acknowledged well by the folks who were working on the intersection of field theory and general relativity. And particularly discrete spacetime, which is a subject that I've obviously come back to, although the approach that was taken there was a little different from what I've done.

Another colorful person there was Brosl Hasslacher, one of the more colorful characters to grace the physics scene in those years. I'd first met him when he was visiting Caltech. But he was always a very high-risk operative. And I think eventually, he formed this company called QED, which stood for Quantum Encryption Devices, which would seem very much a thing of the 2020s. But that was actually 1985 or '86. And it was at a time when encryption was much more primitive than it is today, and it was still believed that it was important to keep secret the way that encryption systems worked, rather than basing their security on kind of the mathematics of what was going on. And so, Brosl had done this strange licensing deal for a strangely unconvincing, I would say, encryption system that had been developed by some person at the University of Wisconsin, maybe.

I do remember, in these years before the web, when it wasn't possible to just sort of look up the story of everybody and everything, one of my interesting Brosl situations was that he had some people involved in this company, and his chief of business was a chap who'd been involved in, I think, CompuServe. And I think I was driving from Los Angeles to Los Alamos, and this person was based in someplace like Tucson, and I may have my geography wrong, but I ended up dropping in on Brosl's vaunted potential chief of business, who turned out to tell me this tale of woe about how he was swindled by his fellow cofounders of CompuServe or whatever and was now kind of a very low-level chicken farmer. And so, I thought, "Wait a minute, Brosl, this person is supposed to be your head of business, who's going to take this company to great heights." Blockchain is full of exotic stories like this, so Brosl was way ahead of his time in being involved in things like that.

There were many colorful characters. But in terms of my own efforts, I had a small group of people, Norman Packard, Rob Shaw, Gerry Tesauro. But meanwhile, I was trying to figure out where to plant a complex systems research institute. I think maybe we talked about this last time, and I kind of went around to many different places. University of Illinois was kind of the winning place. I'd spent the summer of 1986 in Boston as a consultant at Thinking Machines Corporation, and then went out to Illinois. And in the very beginning, it was like, "OK, now I'm set up to run this research institute." And the university was being perfectly reasonable, although as a big organization, it had all kinds of constraints and issues about, "Could you get this student as a graduate student when they did or didn't have this particular piece of certification or were or weren't an American?" or whatever else it was.

I'd realized, as I'd mentioned about Harry Woolf, when you run an academic organization, the thing people most expect from you is for you to go out and raise money for what they're doing. And so, I realized that kind of plan A, which had been, "I'll run this research center with a bunch of people and get things done," was not a great match because in a sense, then, I was out raising money, which I wasn't interested in doing, and other people were doing research, which I was interested in doing. And it's like, "This picture doesn't really add up." And so, I kind of decided that the thing I should do is to build sort of the best tools I could for doing the kind of science that I wanted to do and build myself an environment that made it possible for me to pursue the kinds of things that I was interested in independently without having to worry about whether the particular thing I was doing would fit into some funding priority of this or that organization or whatever else.

And so, probably in August of 1986, I kind of decided, "I'm going to start this company and build what became Mathematica." And its original name was Omega. And so, those who are kind of wags in these things point out that over the course of 20-something years, we went from Omega to Alpha with Wolfram Alpha and so on. But its intermediate name was briefly Technique, and that was eventually lost. At one time, PolyMath was another possible name. I was kind of amused when, a number of years ago, I found this list of potential names for Mathematica, and what's interesting is, essentially, every one of those names has been used for some product, not by us, since then, including some of the really horrible names. One name, Igor, which was the name of the assistant in the Frankenstein story, had been suggested by somebody, and I had duly added it as a not-possible name. And it was eventually used as the name of some product. Not by us.

Zierler:

In the mid-1980s, this is ancient history as far as computational power was concerned. Were there any limitations in what computers could do at that time that you were aware of, even then?

Wolfram:

Oh, yeah. Once you've built one computer system, the second time around, one makes fewer mistakes. And for SMP, it had primarily been running on computers that cost a quarter-million dollars. So what had happened between 1981 and 1986 was this generation of so-called workstation computers had grown up around things like the Motorola 68000 microprocessor and other microprocessors from Digital and IBM. So there were computers that cost maybe $50,000 to $100,000 that were single-user computers, that were really powerful enough to run sort of reasonable software. The personal computers of the time, which were still kind of the Apple II, and then transitioning to the original IBM PC, had serious limitations. The PC had 640k of memory as a basic limitation. Then the Mac came out, and it had fewer limitations, but it was still something that had a megabyte or two of memory.

When I started designing what became Mathematica, one of my concepts was, "I'm just going to build a system that works the way it should work," pandering to the humans, not to the computer. So there were many design decisions that got made where I knew perfectly well that such-and-such a thing would be kind of slow and painful at the beginning, but I was also perfectly well-aware of the fact that computers were getting faster at a rapid rate. For example, the very first version of Mathematica, when it came out, couldn't run on a PC. It was too big to run on a 640k memory PC, and there were some elaborate ways of doing memory extension, but they were hokey, and even at the time when Mathematica first came out, wouldn't allow it to run on a PC.

So it ran on a Mac. That was its lowest end kind of consumer system. And then, it ran on a bunch of these workstation computers like Sun, Apollo, HP, IBM, and so on. And the main target was these workstation class computers costing in the tens of thousands of dollars range and being used by professional research-y academic people. But with a consumer prong, which was the Mac. Also, when Mathematica came out, one of the important pieces was that I had made this deal with Steve Jobs to bundle Mathematica on the NeXT computer, which was Steve's company between his two stints at Apple. And Steve had this kind of idea of making a computer with the power of the more serious workstation systems, but packaged in a way that was readily available particularly to education.

And although the computer wasn't actually out yet in June of 1988 when we released Mathematica, we had made this deal, and it was running on NeXT's computers. And so, that was another kind of piece of the story of, "Don't make compromises because computers will come," so to speak. And I think it was probably 1989 before the first versions of Mathematica ran on PCs. And at the beginning, it was a bit painful because you had to run with these memory extension systems. And it was a few years later that PCs got to be powerful enough that you could seriously run the system. So the effort of designing Mathematica was, in some ways, easy because I'd kind of done something like it before, and I'd had the very useful experience of making a bunch of fairly radical design decisions for SMP, then waiting five years, and kind of seeing what happened with those decisions because there were lots of users, and you could actually see if people understood these designs.

Most of the time they did, some of the time, they didn't. And that was a very good indication. Reading the training manuals that people have produced for SMP was, in some ways, an interesting experience for a language designer because I remember there were these things called marks, which were not a very good idea. But the section of the training manual about marks started with the sentence, "Marks are the enigma of SMP." And if you're a language designer, a sentence like that means you made a mistake. People shouldn't be that confused. So with Mathematica, as I said, the goal was to build kind of an integrated system that would do all the things I could imagine wanting to do in science and elsewhere. And I was completely aware of the fact that I was building it for the world, so to speak. But it's always very good, when you're building products, to be keen to use the products yourself. And for the last 33 years or so, I've probably used Mathematica or now Wolfram Language every day. And it's let me do all kinds of interesting things and lived up to its promise.

Zierler:

Tell me about your early ideas of Mathematica as a business model. Did you envision this as a subscription service, something that would be sold to individuals, institutions, a one-time license? What were the various options you had?

Wolfram:

So at the time, there were two models for software. The Microsoft model and the Borland model. Borland sold everything for under $100, Microsoft sold things for $500. Nobody knew where the industry was going to go. There was a huge amount of software piracy in the US among other places. And there were all these campaigns, like the, "Don't copy that floppy," campaign. And it was very unclear whether software was a supportable business on its own. There were also neighborhood software stores, there were exotic organizations like Egghead Software that would regularly order large amounts of software with 90-day payment terms, then send it all back on day 89, reorder it on day 91, and so on. All kinds of wild west-y kinds of things that were happening.

So it was unclear to me what the future of the software industry would be, whether piracy would be a huge problem. One of the other things I was concerned about was that we'd put out this system for doing technical computation, and pretty soon we'd become the homework helpers for the world, so to speak. And technical support would be inundated with people saying, "I'm trying to solve this physics problem. Can you help me?" So there were many things that I thought could go wrong. Actually, my favorite business model was not to directly sell software at all, but to license our software to companies that made computer hardware and to make these bundling deals where the software simply came with the computer system. And we did that with NeXT. Steve Jobs was the first person to sign up for that kind of deal. We, then, made kind of deals with computer hardware companies like Sun Microsystems, HP, IBM, to resell our software on their particular hardware platforms.

And so, we were kind of leveraging their sales organizations. My favorite model would have been to have a company that was purely doing essentially intellectual property development, and not doing kind of the on-the-ground sales and marketing side of what had to be done. The company that kind of broke that model eventually was Apple, actually, because at the time, it was a complicated thing. Apple was trying to spin off a software unit called Claris, and we interacted with all these people. In the end, the people at Apple said, "We can't do anything useful with this. You should sell it yourselves, and we'll help you in various ways." And that sort of forced us in the direction eventually of having to build a sales and marketing organization. And even 33, 34 years later, I would say that the sales and marketing organization in my company is still very tiny compared to R&D.

And our strength in all these years has been mostly innovation in R&D. I would like to think that we've done competently on the business side of things, and certainly I've had good people working on that, but it is not the core sort of objective of the company. Most tech companies have many more salespeople than they do developers. We are most definitely profoundly the opposite of that. But my favorite model at the time was actually Adobe, where they had made PostScript, they had done this bundling deal with Apple for the laser printer, LaserWriter, and had set themselves up as a company that would license PostScript to printer and computer manufacturers and be the kind of source of the innovation, but not the company delivering things to the end customer, and doing support on what was delivered, and so on.

But I realized by sometime in probably late 1987 that that approach wasn't going to work for us. So we ended up starting to build the sales and marketing operation that we needed. Initially, a very modest operation. In the beginning, I was 26 years old or so, and I would say that if you'd asked me my time horizon for the company and what I was doing, I would've said five years or so. And I sort of imagined that this was a five-year process, where something would happen. The company would go public, the company would, in some way or another, get to a point that was very different from the place it had started. I didn't realize that a third of a century later, I'd still be running the exact same company configured in more or less the same way, so to speak. If I'd known that, I would've made a few decisions differently in the early days of the company.

So the original concept was that we wanted to build this product that would be something that could be readily used by all of those physicists, among others. And I wrote this book about Mathematica, which was published by Addison-Wesley Publishing Company by a chap called Allan Wylde, who had been on a very fast track at Addison-Wesley, but had some health problems and been kind of put out on this outpost of the company, their so-called Advanced Book Program, which actually published many physics monographs that were very good. And Allan, in something of a departure from what his Advanced Book Program was really supposed to do, ended up publishing the Mathematica book, which was very successful. And for me, that was better than advertising, so to speak, to have this book in bookstores that people could pick up and read. And I made the mistake subsequently of making later editions of that book much longer.

The original book was 600 pages or so, and I think people who were physicists or mathematicians who were interested in becoming computational were able to actually read the book from beginning to end more or less. And it was a good sort of introduction to both what we had built and the ideas of how to use computation for science. Subsequent editions expanded to a couple thousand pages. And eventually, about 20 years ago, I kind of gave up and said, "I'm not going to do this anymore." And now, the equivalent of that book would be around 200,000 pages. It became a constraint. At the beginning, it was a very good thing to be able to explain everything in this quite short book. But then, as the system kind of grew, and we added more and more capabilities, it became a constraint to try and fit it in a book. So we didn't do that. But it was a very useful way to introduce people to the bigger ideas and the practicalities of what we were doing.

So what happened in 1988, I was sort of still playing professor at University of Illinois, but spending much of the time going off to the West Coast and making deals as well as actually writing code and writing the book about Mathematica. And I've told many entrepreneurs many times since then that my theory of CEO-ship is, on day one, the CEO has to do everything themselves. And subsequently, once they understand various kinds of tasks, they can find people to whom they can delegate those tasks, who, if they're lucky, can do those tasks better than they would've done themselves. But any task that the CEO couldn't do themselves is a task that has a risk of not being done very well. And so, in the early days, I was very much doing stuff myself, and I built up a small group of people, which gradually expanded. And then, originally, we'd sort of projected April 1988 as the release date. That slipped to June of 1988, which isn't bad for software releases. And so, June 23, 1988, we did this grand announcement event, which was rather fun. It was on the West Coast in Santa Clara, and I got a PR firm that was found by Allan Wylde, who lived in that area and had been a longtime Silicon Valley resident, even before Silicon Valley was famous. And that meant that he had connections in things like the PR world.

And so, in those days, people still did press conferences. And so, we were doing a press conference. But it was nice that Steve Jobs, who had been quite in seclusion, showed up, as did a whole bunch of other industry luminaries, so to speak. It was a very rare case where we had a bunch of warring titans of the tech industry actually in the same place at the same time, along with a bunch of academics. And I think there was a charming review of that event written by a mathematical logician, who described the fact that he realized he wasn't coming to a seminar at some point in the proceedings. And at the time, there were neighborhood software stores. And so, one of the other grand events was actually delivering boxed versions of Mathematica on floppy disks for the Mac to the neighborhood software stores and having lots of excitement with people coming to buy the newly released product.

But I would say that Mathematica spread pretty rapidly in communities like the physics community. At the time, it was still a situation where your typical older physicist didn't know how to type. And when I had built SMP, I kind of pandered a little bit to that with short command names and things like that. We didn't do that with Mathematica. Good idea. But I think it was really nice to see because a lot of people who had always had the point of view that if they had something to compute, they would delegate it to a programmer or a student, discovered that with their own fingers, they could actually do these computations. And I was very pleased to see, and have seen over the course of many years, that some of the sort of leading lights in physics, mathematics, and other areas, are not just OK Mathematica programmers, but actually good Mathematica programmers. They really get it.

Zierler:

At the event at Santa Clara, what was most important for you to convey at your public remarks?

Wolfram:

Oh, it was a press conference. We were getting the New York Times, and USA Today, and the usual places to write about the product. And in a sense, we were introducing a new category of software, and part of the importance was to give the idea that there is this thing, technical computing, a new game in town that the tech industry should be aware of. How you get the word out about something has obviously changed a lot in the intervening 30-something years. I had had the experience back in 1984, when the company around SMP, Computer Mathematics Corporation, later Inference Corporation, was up and running, and I happened to write an article for Scientific American about computer software in science and mathematics or something. And so, as part of explaining the ideas, I had this whole page of examples of what was possible now with SMP, basically.

And an interesting thing to me was that that sort of description of a product generated essentially no sales inquiries. And I realized years later, when people read something in Scientific American, it was like, "I'm reading the future. I'm reading something that's sort of science fiction, not something where I can actually pick up the phone and use this myself now." I'm always amazed, I look at magazines, and they'll be talking about some random, weird, innovative gadget. And I'm always interested in these things, so I'll go, and contact the company, and buy one. And I'm really amused at how often I'm buyer number three and things like this. People don't make this connection between the science story and the, "I can use it myself," type story.

And so, that was an important thing for us, to get across the idea, "This is a category of productivity software that's like your word processor, like your VisiCalc spreadsheet." Lotus 1-2-3, I think, was out and about by that time. It was a thing that you can really use in practice. And I think it worked pretty well. And the company sort of grew pretty rapidly. And it was interesting because the arrangements we'd made with computer hardware companies to resell the product didn't work as well as we might've hoped, basically because their salespeople were like, "We're going to sell these machines. And as far as we're concerned, the software is kind of a floor mat to help sell the machines. But we don't understand the software, so it's not something we can really present to customers."

So we'd end up presenting it to customers, and it's like, "This doesn't make sense. We're putting all the work in, and it's going through this reseller, and that doesn't really financially work." And so, we kind of pulled back from that and started pretty quickly being the primary source. And it even became true that, in some cases, it was just hard to get the software because the mechanism for sending out tapes and so on from these companies wasn't well-developed because we were a sort of one-of-a-kind thing. But I think everybody was very happy with the outcome because yes, our software did sell machines.

I think Steve Jobs had made a good deal for himself because it did sell NeXT computers, even sort of pathologically so because people would say, "I want to get Mathematica for free. So I'll buy this whole computer to do that." And I suppose one of the famous examples of that was the theoretical physics group at CERN, which bought a network of NeXT computers to run Mathematica. And their system administrator was a person called Tim Berners-Lee, who ended up, then, developing what became the Web. And so, I suppose as a footnote to a footnote to history, the computers on which the Web was developed were bought as a cheap way to get Mathematica, so to speak. Kind of nice. But it's funny because in the early days, there would be these physical registration cards that would come in.

And we had quite a collection of famous physicists. I would occasionally look through these cards, and it's like, "Oh, OK, that's well-known physicist X, Y, Z." My favorite card-looking-through story was about Roger Penrose, actually. He had published this book of his about Emperor's New Mind or something. And I had literally just looked at this article in Time Magazine I think, the headline of which was something like, "Roger Penrose says computers are all dummies." So then, I'm looking through these registration cards, and what should I find but Roger Penrose's Mathematica registration card?

And so, this is kind of amusing. On the one hand, this magazine is quoting him as saying, "Computers are all dummies." And on the other hand, at least he finds our particular kind of way of using computers to be helpful to him. But I think one of the things that's always been strange about Mathematica, and now Wolfram Language, is that you sell a product, and it goes out there, and people use it for all kinds of things. And there's no real way to get any feedback about what they're using it for. Sometimes years later, you discover, "Oh, that really cool invention was made using our technology and product." Something which never really developed, which is a little bit annoying in some ways, and perhaps we should've done it differently, is that people didn't write an academic citation to the software they were using. That's something that's sort of slowly changing a bit.

And in other fields like biomedicine, it's something which works differently. But in an area like physics, it's like, "Well, we used this software just like we use a table of integrals or a pencil and paper." It's just a thing. It's not something that we give sort of an academic citation to. Just a convention of the field. Perhaps, we should've pushed for something different. Another thing that happened early on was selling Mathematica to universities. At the time, nobody really sold software to universities. Some companies would give software away, but nobody really sold software. But I knew perfectly well that a lot of the things that we were building in Mathematica were primarily of interest to kind of the high-end research community that was mostly at universities, and it made absolutely no sense to be spending money building this if we were going to give it away to the main people who would be the beneficiaries of it.

So we developed this model, which I believed very strongly in from a business point of view, of trying as much as possible to align the place where you're making money with the place where people are actually getting value from your product. In the tech industry, there are many of these situations where the users are just using the software platform, but the money is coming from the advertisers, and that creates all kinds of strange sort of pieces of contention. For me, it's always been much better if we can set things up so that the people who get value are the people who pay for it, so to speak.

Zierler:

On the question of who gets value, was the idea that it would have an educational value to students baked in from the beginning? Or did that develop later on?

Wolfram:

No, no, we knew that from the beginning. But the point was, for most students of most universities, they were just starting to have computer labs that could run Mathematica. And there were a number of early adopters. There was a person called Jerry Uhl, a math professor at the University of Illinois, who'd developed this thing called Calculus Mathematica, starting in version .9 or something. And there were several other people who did things like that. And they were very excited and energetic, and they had a huge number of students go through those programs with great success. So it was known from the beginning. But the main limiting factor there was just that students didn't have personal computers at the time. The computers on which it was going to run were in computer labs. And there was a limited rate at which those computer labs were getting rolled out. But one of the issues was, with the universities, it was like, "What's the business model for selling to universities?"

And we ended up with this kind of site license model, which for the first few years, universities were all kind of up in arms about. But eventually, I think, it all got smoothed out. We figured out the right policies. And in the end, I think it's been a very successful mechanism for universities to get access to our technology and for the whole thing to work. It's funny because, at universities, the battle for power in the IT organization, could a professor actually buy a piece of software for their group themselves, or did it all go through the central IT organization? And eventually, that sort of got worked out. And primarily, I think at most universities, the site license is managed by the central IT organization. Which is fine.

I think some opportunities are lost because as more fields become computational, there isn't the motivation to make sure that technology can get used in that humanities department and things like that. It's not clear who's responsible for doing that. And that's something, actually, we're sort of actively figuring out right now, this tremendous opportunity for computational X for all X, but how does that really work, and how do people really learn about those possibilities? In the early days, we sort of had made this deal with universities, "You teach people about our technology, we'll give you a huge discount." I'm afraid a lot of universities started reneging on their side of that deal, and in the end, our sales organization just sort of gave up on it, which I think was a big shame. Big shame for the students, particularly, because there would have been a lot more people who would have understood computational thinking a lot earlier had the universities actually been sort of forced to provide that kind of education, which they still often don't do. So it goes.

To have really pushed that would have required more business energy than I suppose I was prepared to put into the whole thing. Because I made the decision very early in the history of the company that CEOs of companies like ours can spend all their time selling products. I said, "I'm going to spend none of my time doing that. I will not be involved in any specific sales situations." And that's why we had a sales group. And that was, for me, personally, a very good decision. For the company, who knows? The company may have done a lot better were I actually the one at the front line selling things. But I decided that wasn't what I wanted to do. And so, I didn't do that.

Zierler:

With institutional sales, obviously, the product is now available to a far broader spectrum of researchers. Who was starting to use Mathematica that you may have been surprised at or never thought it would catch on with?

Wolfram:

In the early days, it was the mathematical sciences crowd, whether that was in ecology, applied mathematics, or physics. Those were the early days of mathematical finance and physicists going to Wall Street. I would say that was an early kind of notable use. All over the place. It was a very good kind of measure of where the mathematical sciences methods penetrated, so to speak, whether that was in the computer graphics companies, various areas of business operations, and so on. Very little, I would say, was dramatically surprising to me. There were certainly plenty of experiences that I had where I was pleased and impressed that some particular person or group would've identified Mathematica as a thing that could be helpful to them. And that was really nice to see.

I would say in education, college education, we were not surprised. We knew it was also relevant for high school education. One of the frustrating things about new technologies and education is, even back in 1988, there were some high schools using Mathematica, but is it widespread in high schools now? No. Wolfram Alpha is. There are high schools that use Mathematica, and particularly this product called Wolfram Alpha Notebook Edition, which is sort of a hybrid between Wolfram Alpha and Mathematica, but it's a very confusing thing because in a field like education, there are early adopters that really adopt early. But it's a very long, long, long process before things become widespread. It's a much longer process through the institutions of K-12 education than among the students themselves.

When Wolfram Alpha came out, it became sort of a staple of the student crowd, particularly in the US, within six months to a year. I think there are lots and lots of good stories about uses of Mathematica in the early years. It must've been the early 1990s, a classic Mathematica user story was a chap called Mike Foale, who's a NASA astronaut, who had, it turned out, taken a copy of Mathematica with him on his trip to the Mir Space Station. So at the time, we had all of these anti-piracy measures, which would check whether it was running on a computer in the expected time zone, was set up to run in English, etc. And one of the things that was a dead giveaway for something bad happening was that the computer was a Russian computer.

And so, what happened was, Mike Foale was on the space station, and there was some accident. And the computer that Mike Foale had, an American computer that had Mathematica on it, was pulled out into space or something. And so, he, being a well-prepared astronaut, had some kind of backup disk in his pocket somewhere. He got it out, loaded it onto a Russian PC, and the thing says, "Sorry, I won't run. I think I'm being pirated." So our customer service group gets this call from some ground control person in Houston about such-and-such license number. And this customer service person was initially very skeptical about this because you hear all kinds of strange stories. And it was one of these, "Just turn on a television, and you'll see," situations.

So happily, we were able to unscramble that and get Mike Foale's version running. And then, very impressively, even as the space station was spinning around, he was able to solve some equations of motion for it. I don't think the Russian ground controllers let him do the things he wanted to do. They said, "We have a procedure. We'll follow that, not this thing that you've just worked out on this software on this spinning space station." But I think that was our first off-planet use of our software. But it's a funny thing because a lot of times, I'll hear, a decade later, that some key piece of technology was built using our software, and it's just not something one particularly knows at the time.

So I think the kind of decisions that we made at the very beginning of the design of the system were, happily, pretty good decisions that we've not regretted, and we've been progressively able to build on those designs for the last 30-something years. And I suppose the thing that, for me, has always been notable is you build to another level of technology, and then you can kind of see further and realize what new things you can build. And I suppose at the very beginning, for the first few years, I was pretty much doing things for the second time. They were things that I had sort of pioneered at SMP, and I was doing a better, stronger version in Mathematica. After a couple of years, I kind of ran out of things that I had already done.

And so, then it was really doing new stuff that was really being done for the first time, and the discipline of doing good language design, I think by that time, I'd become reasonably proficient at. And so, the result is, when I take something I did in 1991, in Wolfram Language version 2 or something, and run it in version 12, it still runs because we didn't make any horrible mistakes. And so, we've been able to keep compatibility over the course of 30-something years, which is incredibly unusual, and probably almost the only example where that's been the case in the software world.

Zierler:

As Mathematica started to become appreciated and widely used, how frequently would you receive offers to be bought out? Did you ever take any of them seriously?

Wolfram:

Oh, in my mailbox today, I probably have things from private equity firms. Back in 1991, the company was growing very rapidly, and I did toy with taking the company public. I got to the level of talking with investment bankers. Amusingly enough, a friend of mine from elementary school, actually, had become an economist at Princeton and subsequently Harvard, who had just done a bunch of work on the underpricing of IPOs by investment banks. And so, on my “go visit the investment bankers thing,” I was coming with these preprints of all these papers from my friend John Campbell about underpricing of IPOs. And that created an interesting dynamic because it's been going on as long as anything, the whole dance of the underwriters, and their preferred clients, and the actual company. It's a complicated dance.

But I considered taking the company public, and I was talking to some of my employees, some of whom would've made a fair amount of money from an IPO. And I remember one of them, who's still with the company today, actually, saying, "Well, doesn't that mean that a bunch of pension fund managers are going to be telling us what to do?" And I was like, "Well, yes, potentially, that can happen." And then, the other question was, "So we raise a bunch of money and IPO. What are we going to do with the money?" So we started thinking about, "If we have a large cash infusion, what do we actually do?" And frankly, the things that we came up with were not that convincing. And so, it's like, "Why would I take the company public, lose control of the situation?"

The other issue was that I owned enough of the company that at least in those days, it's probably different today, it's a little bit concerning to underwriters and the public market when you have one shareholder who controls such a large fraction of the company and makes the public not feel like they have enough control. So we weren't well configured for that kind of setup. And so, we didn't take the company public. And obviously, at many times over the past 30 years, I could've sold the company for lots of money, but I've never been interested in that because as far as I'm concerned, the company is a tool, a machine for turning ideas into real things that I think are interesting or hope are valuable in the world. And the only reason for making money is to make it a stable operation that can continue to do interesting things and turn ideas into reality, so to speak.

And over the years, we've been fortunate, touch wood, to be a profitable company every year for the last 33 years. We've done OK, but the primary strategy has been to spend less than you make, which is a surprising strategy in the tech industry. But it also has the feature that there are limitations on what one can do. We can't just go out and say, "We're going to hire a ton of people to do this random thing," because we don't have the resources to do that, and the vast majority of the revenue we've made has been put back into R&D. And I think that has had the feature that we've been building things that are certainly decades ahead of the market, so to speak. I think most of the market, if we stopped doing R&D, wouldn't really notice for probably 15 years or more. And for me, why do we continue to innovate? And we've been very successful at consistently innovating for a third of a century now. It's primarily because I think it's a good thing to do. And I've built up a team of people who are also very motivated to sort of do good and interesting things, so to speak.

And that's been sort of the driving force, and it certainly helps that we have a large group of enthusiastic customers who also are kind of cheerleaders for what we're doing. The academic community is a large-scale user of our products, but sometimes there are parts of the academic community that are more involved in kind of software algorithm developments and so on much closer to sort of making their own versions of what we do and things like that. Years ago, now, I remember some of our R&D people giving talks or sending in papers to conferences, and the papers would be rejected. And it's stupid. We have the definitive version of this technology. Why is this random academic organization rejecting our paper? This is just a negative for our R&D people.

And so, I started saying, "No, we're not going to pay for them to go to these conferences. Go to the conferences with our users instead," where they get hugely positive feedback, and it's very valuable feedback. I think perhaps one of the things I should say about what's happened with Mathematica and Wolfram Language, which is just wonderful and not necessarily something I would've anticipated, is that, at least, the users we run into are such an interesting, high-end crowd. If I run into somebody at one of our user conferences, and they work in some field that I know nothing about, it is a very good a priori assumption that the person who showed up to our user conference is a leading innovator in that field. Which is really nice. It's allowed us to kind of have this network of people, who are kind of big innovators in lots of different fields. And it wasn't something I particularly thought about or knew about that we would end up sort of engaging with all these people who are leaders in their fields.

I remember when we first had a user conference back in 1991 in California, we were all standing around, waiting for this conference. And of course, 1991 is before the Web, so we knew nothing about the people who had registered for this conference, other than their names. And so, we were all wondering, "Are these people going to be old or young? What characteristics are our enthusiastic users going to have?" And it was a reasonable hypothesis that new technology would be adopted by the young. Turned out, it wasn't true. It was a more or less uniform age distribution. And I think one of the things that I sort of felt the most excited about there was a whole bunch of, I would say, mid-career professors who had done decent work, gotten tenure somewhere, but were kind of not that excited about what they were doing. And then, Mathematica comes along, and suddenly they have kind of a new direction.

And they end up doing really valuable work, whether in education, research, experimental mathematics, or some such other area. And for me, that was satisfying to see and quite unanticipated at the beginning. But in terms of the flow of what happened, the company grew very rapidly first few years, and by 1991, a dynamic had developed that was kind of complicated, which was that I had many, many ideas about what the company should do. We had maybe 200 employees or so. And the rate at which I could inject new ideas was far greater than the rate at which the company could absorb them. And the question was what to do about that.

And I ended up deciding, "I originally got into this as a way to have the tooling to do science that I wanted to do. It's time for me to take a sabbatical and do some science." And so, in the middle of 1991, I went to California, got a house there, and thought I was going to take a six-month, maybe a year sabbatical to work on basic science along with CEO-ing the company. But at that point, I had a chief operating officer who was kind of running day-to-day operations, and I thought I would have a brief period when I worked on basic science. As it turned out, the brief period stretched to ten and a half years because I ended up discovering a lot more than I expected to discover.

And during that time, the company did very well. And I would say that the innovation rate decreased during those years, but having one's act together coefficient definitely increased, and the company became pretty well-organized and pretty efficient at software engineering, software distribution, those kinds of things. And I also became a remote CEO back in 1991, and I've been a remote CEO ever since. With this pandemic, I felt a bit guilty because I've been a work-from-home, remote CEO for 30 years. And so, when the pandemic was beginning, I did some Ask Me Anything livestream events for tips for being a remote executive, so to speak. But it's something where, for better or worse, it turned out I was ahead of my time in that. But my reasons for that were mostly that I had started off wanting to kind of separate myself from the day-to-day operations of the company because I was basically injecting new ideas at too high a rate, and I wanted to concentrate on basic science kinds of things.

Zierler:

That gets me into the 1990s. I did want to ask, what did you want to do in the basic sciences apart from Mathematica? Or even, what did you want to do in the basic sciences that might've enhanced what Mathematica was able to accomplish as it was maturing?

Wolfram:

Well, what I had done in the mid-1980s was kind of make the case for this field of science that I called complex systems research, which had to do with having a simple system represented, let's say, as a program, and then you say, "What does it do?" And the big surprise is, it doesn't just do simple things, it can do very complicated things. And that had led me to do a bunch of work on cellular automata, it led me to realize that there was this close connection between the theory of computation and physics. I had had kind of a successful academic run with those kinds of things from about 1983 to 1986. And I would say the papers that I wrote went down very well. And it was clear there was this new direction in science that had to do with the study of systems with simple components but complicated overall behavior. And my plan A had been just raise an army and get other people to work in this interesting area. Plan B was just do it myself as a one-person army, so to speak.

And that was what I embarked on doing in 1991. And what I ended up doing, well, the basic question was this: I had found that these simple programs, like cellular automata, had this very complicated behavior. And the first question was, was that a general phenomenon of the computational universe? If I didn't pick a cellular automaton, which has all these cells arranged and doing things in parallel, but instead picked a Turing machine that does things sequentially, or a register machine that works just with numbers, or some other kind of system, something in two dimensions instead of one dimension, what difference would all these things make? And so, for the first couple of years, what I was mostly doing was computer experiments to explore the computational universe.
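
For a sense of the kind of computer experiment being described here, a minimal Wolfram Language sketch (an illustration added for this edition, not code from the interview): rule 30 is one of the 256 elementary cellular automaton rules, and the same one-liner can survey the whole space of such rules.

    (* Rule 30: an elementary cellular automaton evolved 100 steps from a single black cell;
       despite the trivial rule, the pattern it produces looks largely random *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 100]]

    (* Surveying the "computational universe" of all 256 elementary rules the same way *)
    Table[ArrayPlot[CellularAutomaton[r, {{1}, 0}, 50], ImageSize -> Tiny], {r, 0, 255}]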

And what I found was that the phenomenon that I'd seen in cellular automata was a very, very general phenomenon. And it kind of caused me to formulate this thing I call the principle of computational equivalence, which is sort of a general principle about what kinds of computations even simple programs out in the computational universe do. And that then led me to the question of, "So we've now discovered this kind of source of behavior with simple programs. How does that behavior relate to the kinds of things we see in nature?" In getting into this whole complexity thing to begin with, back in the beginning of the 1980s, the question had been: we see lots of complicated things in nature. What is the sort of underlying model? How should we understand what's going on there? And I thought initially, "Well, I know all these techniques from physics and mathematics. These should allow us to make progress in these areas, whether it's studying snowflake growth, or fluid turbulence, or whatever else." And probably around 1980 and '81, I was realizing that those methods didn't really get anywhere for me. If it was, "Why does a snowflake have this complicated structure?" it didn't really tell me anything about that.

Zierler:

Why not? What was the problem?

Wolfram:

The problem is that those methods, as I would describe it today, are all about computational reducibility. You have a system, it does what it does in nature, but we want to be sort of smarter than it so that we can come in with this piece of math, and immediately jump ahead, and say, "OK, system, you're going to do what you do. But this math is going to tell you the answer is going to be such-and-such." And what I realized, and what this principle of computational equivalence explains, is that that is fundamentally not doable for many systems in nature. In a sense, those systems are as computationally sophisticated as any of our mathematics is. And so, we can't expect to let our mathematics kind of jump ahead and figure out what the system is going to do much more efficiently than the system itself does it. So it's kind of a fundamental limitation of theoretical science.
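
As a rough sketch of what computational irreducibility means in practice (again an added illustration, not from the interview): for rule 30, no shortcut formula is known for the value of the center cell at step t; in effect, one has to run all t steps of the evolution to find it. The helper name centerColumn below is just for this sketch.

    (* Center column of rule 30: obtaining the value at step t means running the
       evolution the whole way there; no formula is known that "jumps ahead" *)
    centerColumn[t_] := CellularAutomaton[30, {{1}, 0}, t][[All, t + 1]]

    centerColumn[30]
    (* a 0/1 sequence with no evident regularity *)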

And what had happened was that that fundamental limitation, or glimmerings of it, had been seen in Gödel's theorem and things like that. But it was not recognized that that was something of relevance to physics, even conceivable relevance to physics. And I think the things that I did in '84, '85, writing papers like "Undecidability and Intractability in Theoretical Physics", were kind of things from two different worlds at that time. And the thing that I sort of slowly realized was that there had kind of been this whole development of mathematical science that started with people like Galileo, and Newton, and so on, that was this kind of idea: "We can use mathematical equations to represent the natural world." And it had a very good run.

And it gave us lots of great physics. But it also was clear that there were phenomena that it was not going to be able to explain. And so, I kind of got to realizing that these sort of, fundamental computational phenomena were going to be the things that were important for explaining these more complex natural processes, and that was the thing that I got interested in. And as I said, I'd done a certain amount of that with cellular automata up through 1986. I kind of stopped in 1986. I had a bad academic experience, which caused me to decide not to publish papers in academic journals ever again, which I haven't. And in the end, it was an interesting dynamic having to do with this kind of method for doing fluid dynamics using cellular automata that I had originally invented as kind of a use case for the Connection Machine computer more than anything else.

I had been interested in the question of fluid turbulence, and one of the things that tends to happen in fields is, there are really obvious questions, but by the time you go five academic generations from the origination of the field, nobody asks the obvious questions anymore. And people are only asking the questions that can be answered with the particular methodologies that have been developed. And so, for fluid turbulence, the dead obvious question is, why is there apparent randomness in fluids? You have rapid fluid flow, and the thing's producing all kinds of random behavior. Why is there that randomness? What is the origin of that randomness? And people had sort of said, "Oh, it's chaos theory. It's sort of excavating randomness from the initial conditions." It didn't make a whole lot of sense for all kinds of reasons.

And so, I was interested in what the fundamental, formal origin of randomness in fluid turbulence was. And so, I started developing this model for fluid mechanics based on cellular automata using the fact that the second law of thermodynamics tells us, "You're going to get thermal equilibrium whatever the individual molecules underneath are. Whether the molecules are shaped like this or shaped like that, or work in exactly this way or that way, the force of thermodynamic equilibrium is stronger than those things, and you'll always get this kind of behavior that follows the equations of fluid dynamics and things like that."

So my idea was to start from these very idealized digital molecules, so to speak, that could be represented by cellular automata, and then sort of build up from that to look at what real fluids do with complete control, so to speak. Because what happened typically in fluid dynamics, when people did simulations, is they would say, "We take the Navier-Stokes equations, the differential equations for fluid flow, and we will find some numerical scheme, which will give us an accurate representation of the solution to those equations." The problem is, when you don't know what the solution is going to look like, it's really hard to validate your numerical scheme. And when the flow became turbulent, people basically just couldn't validate that their computational methods were actually following what the equations would say. They didn't even know whether the equations would validly give that kind of randomness.
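
A rough Wolfram Language sketch of the "digital molecules" idea (an added illustration using the simplest square-lattice, HPP-style lattice gas, not the hexagonal model that actually reproduces fluid behavior, and not Wolfram's own code; the names collide, stream, and step are just for this sketch): each site holds up to four particles moving in the four lattice directions, head-on pairs scatter at right angles, everything else streams forward, and the whole evolution is exact bit operations with no floating-point approximation.

    n = 64;
    (* A random low-density gas: state[[x, y, d]] is 0 or 1 for the four directions d *)
    state = RandomChoice[{0.85, 0.15} -> {0, 1}, {n, n, 4}];

    (* Collision rule: head-on pairs scatter through 90 degrees; all else is unchanged *)
    collide[{r_, l_, u_, d_}] := Which[
      r == 1 && l == 1 && u == 0 && d == 0, {0, 0, 1, 1},
      u == 1 && d == 1 && r == 0 && l == 0, {1, 1, 0, 0},
      True, {r, l, u, d}];

    (* Streaming: each channel shifts one site in its own direction, periodic boundaries *)
    stream[s_] := Transpose[
      {RotateRight[s[[All, All, 1]], {0, 1}], RotateLeft[s[[All, All, 2]], {0, 1}],
       RotateRight[s[[All, All, 3]], {1, 0}], RotateLeft[s[[All, All, 4]], {1, 0}]},
      {3, 1, 2}];

    step[s_] := stream[Map[collide, s, {2}]];

    (* Run 200 steps and look at the local particle density *)
    ArrayPlot[Total[Nest[step, state, 200], {3}]]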

So I was interested to see if we could make a properly grounded approach to fluid mechanics that was based on something that was just bits, and cellular automata, and things. No floating point numbers with approximations or anything like that. And it so happened that that kind of methodology was an ideal fit for this Connection Machine computer. And so, that was kind of the thing that sort of pushed me over the edge to really analyze that. And it was interesting because the aforementioned Brosl Hasslacher was at Los Alamos at the time and somehow got involved in this. There developed this bizarre competition between some French physicists, the Los Alamos people, and me about all this stuff. And it all came to a head when there was an article on the front page of the Washington Post about these fluid dynamics methods and the belief that this was going to tip the balance of the Cold War, so to speak, by providing very different capabilities at Los Alamos versus in France versus wherever for doing these fluid dynamics applications.

And I have to say, I don't think my main thought was that this was going to be a nice method for doing fluid dynamics. My interests in it were more basic science interests rather than the applications, although I was certainly aware of the applications. But there developed this very intense kind of situation around that, which I may have talked about last time, that ended poorly. Although the method has had a happy life. And as a matter of fact, the core idea there is the same as the core idea that now gives us the structure of space in our current models of physics. So it's had a very good afterlife, so to speak. But back in those days, the thing that I had kind of realized is, we need a new methodology. It's going to be computationally based. We've had 300 years of development of this mathematical science, this methodology.

Now, I'm going to try to use the tools I've built to race ahead and develop a different methodology based on programs instead of mathematical equations as the foundational stuff for science. And so, what I started doing was saying, "What does this mean for crystal growth? What does this mean for biological growth? What does this mean for fluid dynamics?", etc. And I had a very interesting time for a whole series of years, where I just kept on looking at this area, that area, that area. And it's like, "Gosh, I don't know if I'm going to have anything to say about this." I thought in biology that the adaptive character of biological evolution might be something different from the things I'd studied. Turns out, wasn't the case. Turns out that the sort of strength of this computational phenomena is much greater than the strength of natural selection, in some sense.

And so, I was able to figure out interesting things about foundations in biology and then later on, in mathematics. And so, this whole process took ten and a half years. And I'd had kind of a fixed table of contents for my book, which was the way I was organizing this. And I'm proud of my personal tenacity for having actually finished the project. It's the single most difficult and personally demanding project that I've ever done, and no doubt that I'll ever do. But I kind of had this table of contents, and it was like, "Can it be applied to biology? Can it be applied to the foundations of mathematics?" And each time, the answer was yes. And that yes turned into six months of work, so to speak. And I was CEO-ing the company kind of in the later part of the day. I would wake up late, as I still do, and kind of CEO the company in the afternoon, then go to work for real in the evening up through the night as a third-shift worker on science, so to speak.

Zierler:

And in science, do you have collaborators at this point? Or are you working on your own, essentially?

Wolfram:

I had a series of research assistants.

Zierler:

But I'm talking about peers.

Wolfram:

Well, they were good people. They figured out some good stuff. In terms of the world at large, here's the dynamic that would happen. I was pretty hermit-ified at that point. And I would see friends of mine who were in the science business, so to speak, and I'd talk to them about the stuff I was doing. And here's how it would always go. They would say, "That sounds very interesting. What about this, and this, and this?" And I'm like, "Well, yes, I could investigate that, but I have a plan, a table of contents. This is what I want to do." And actually, a few times, I was kind of led to, "Well, let me look at that." And it wasn't as interesting as the things that I'd planned to do anyway.

So I kind of decided, "I'm never going to get this project finished if I'm talking to people about it all the time. I'm just never going to get it finished." And so, I went into this mode of not communicating with the world and getting the project done. Worked OK. If I'd still been in the academic papers business, I could've been publishing hundreds of papers in different journals, and it wouldn't have worked well. Every paper would've had an introduction that says, "This is part of a set of papers that are about this new kind of science." And the people reading it would've said, "What the heck is this? I don't know what this is. All I care about is this little detail in this particular area of biology," or whatever it is.

And so, the bigger points would've been lost. So I thought the only way to communicate these bigger points was kind of with a single integrated presentation, and it wasn't really that useful to spread it out in the academic world. Academia is not built for large-scale innovation. Academia is built for incremental innovation. And the things I've done that represent large-scale innovation, you're kind of on your own. It's something you as an individual and whatever organization you build have to push. By the middle of the time I was working on what became New Kind of Science, the field of complexity had started to get funding, there were starting to be complexity institutes, and so on. It's an interesting dynamic. The most important thing in communicating what I was trying to do was to communicate to people that it was a big thing.

Because as soon as you think it's not a big thing, you say, "Oh, it's going to fit into this micro-piece of something. Is it like the renormalization group? Or is it like some other small innovation in physics?" And it isn't. And so, as you try and fit it into those things, you say, "Well, it doesn't really fit." It's the same reason that the phenomena that I've found in these simple computational systems hadn't been understood or recognized before. It's not that people hadn't done things where they had observed similar phenomena in experiments they'd done. The problem was, they didn't have a big context into which to fit those things. And so, they were just like, "Well, yes, the distribution of primes has a great deal of randomness in it. The digits of pi have lots of randomness in them. We don't really know what the significance of that is. Let's go prove theorems about the regularity of primes, not the randomness of primes because we can prove theorems about the regularity of primes."

And so, this phenomenon that you get, complexity from simple programs, it's not like it hadn't been seen. It's just, it hadn't been put into a context. And so, that sort of process of putting something into a context is not something that happens in this incremental socialized way, I think, in academic science. And so, after a while, it got a little bit crazy because it became this best kept secret type thing of people wondering about what I was up to and all that kind of stuff. I wasn't really making a big noise about the fact that I'm up to anything. But some people would hear about it and be very curious. And I would talk to people when I would run into them, but frankly, mostly, it was, "I'm doing this big thing, and it's off in this direction. And it's a direction that is very different from directions people have taken before."

And like textbooks charmingly say sometimes, or people giving courses charmingly say, "You're not expected to understand this," so to speak. I was expecting that I would have to put a great deal of packaging around the ideas to make them understandable. And I think in the end, the big book was very successful at that in presenting these ideas in a way that was sufficiently well-explained that a wide range of people could get access to them. And that just wasn't something that was doable on an ongoing basis.

Zierler:

On that point, because, as you say, A New Kind of Science easily could've been an encyclopedia and not a monograph, what were the limiting factors that you placed on yourself, both in terms of the writing and the scope?

Wolfram:

Well, I had a table of contents, and at the end, 1,280 pages was the most that the binder could bind with the technology we were using. So that was the limit. [laugh] I think the scope was really, "Take a look at the foundations of pretty much every area of science." And I did. And some I got further with than others. I have changed the way that I present ideas in more recent times. NKS, New Kind of Science, is a very tightly written document. I would spend a day writing a page. And the pictures in it are very elaborate, algorithmic diagrams that would take a long time to produce. In more recent times, I've been writing a lot of stuff about basic science, and I've written it vastly more quickly. It's not nearly as tight.

The NKS book, there's been an online version of it forever. I refer to it many times a day every day, and I'm actually pretty pleased with how particularly the notes in the back of the book are very kind of crisp summaries of a lot of different kinds of things and details that I worked out. And so, my goal was to write something that anybody who was interested could understand. I full well knew that in specific fields, that would mean that people would say, "Oh, there can't be anything here because it doesn't have all the fancy stuff that I'm used to seeing in a fancy technical document about my field." But as far as I was concerned, the more interesting objective was to reach the broadly interested intellectual world, people not just necessarily in academic physics or something, but people in academic social science, people out in the world of technology and other places. And I would say that the book was extremely successful in doing that.

Again, much like Mathematica, you never know who buys your book. Years later, people will say, "That was a really crucial thing in determining the direction of my career and the things I studied," which is terrific. You don't know that until you happen to run into somebody, so to speak. But the thing that happened, I think, and I don't know to what extent it was going to happen anyway, I don't think it was, is this transition from models getting made using equations to models made using programs. And for basically 300 years, the serious models got made with mathematical equations. In the last 15 years, most new models get made with programs, not equations. And that's kind of a big deal. I know some historians of science who have been paying attention to that transition. It's been a surprisingly silent transition. It's kind of almost obvious to people.

"OK, we have computers. We can make these pure program-based models. Who needs an equation to describe this or that thing when we can have a program we can run to simulate it?" In the preface to the NKS book, I kind of explain that "At the beginning, lots of things in this book will seem completely shocking. But in time, they will come to seem obvious." And that's the path we're on. And it's kind of interesting to see that ideas like computational irreducibility, for example, which people thought can't be true and was shocking, particularly among the younger generation of scientists, is becoming a, "Yes, that's just how things work," thing. And as you know, I've been involved quite a bit in studying the history of science. The picture for many years later is, "Such-and-such a change of paradigm happened instantaneously." But on the ground, instantaneously is decades. And decades is a lot of a life, so to speak. And it tends to be the case that living through these kinds of paradigm shifts is interesting. I suppose I got particularly interested in the history of science as I was working on the NKS book.

Because I'd made the decision, which some academics absolutely hated, not to fill the book with formal references. By 2002, when NKS came out, the Web was a thing and had been for nearly ten years. My calculation was, if I can give you the right people's names and the right buzzwords, you can find a paper on what I'm talking about. I don't need to tell you that it's in volume such-and-such of journal such-and-such, that you would have to go and find it on a dusty library shelf. But what I did want to do, because I thought it was important, was to tell the historical story of the precursors of the ideas I was developing, where different pieces had come from, and so on.

And so, I got into a very serious effort of doing the history of science for the sake of writing these notes at the back of the NKS book. And in retrospect, there were plenty of these notes where it was just a few sentences about how some idea came to be, but that was based on hours, and hours, and hours of interviews with people about where this came from. I've tried to do some of your kind of business a little bit, and sometimes in the finding of where ideas came from, the amount of drilling you have to do is remarkable. Because people forget, people make things up, people don't have records. It's a mess. And I was pretty pleased with myself for a lot of the things where I did manage to disentangle, often from primary sources, what had actually happened in the history of things.

So I must say, I was a little bit disappointed, when the NKS book came out, with academics saying, "Oh, you don't describe where all these ideas come from." "Guys, I've done a much more serious job than any of you have ever done of actually trying to disentangle what the history of these ideas is." And I would say to people, "Did you actually read the historical notes?" "Well, yes, I did read them. They were actually very good." And it's like, "Yeah, well, that's the point." And I think it's a feature of the large institutional structure of academia that there's this sort of veneer of certain kinds of things that people expect, like, "Put a lot of references in." So I just did this. I'm just finishing a book that's collecting some things I've written about these things called combinators, which I've been interested in for years, but they just had their centenary on December 7, 2020. And so, I decided I would try and sort of see what's happened to combinators over the past century. They're a very early idea about universal computation.

And the person who created them, Moses Schönfinkel, had a very mysterious life, which I have gone to a lot of effort to try to track down. But in any case, I was just putting together some of these pieces that I wrote about combinators into a book, and just yesterday, actually, I was finishing off. I decided, "Let's give the academics what they want. I've written this elaborate history of combinators, but I'm going to give them the complete bibliography." It's actually been kind of fun because there are a lot of people who are like, "Well, yes, I'm 90 years old, but I still have this pile of papers, and yes, I think I can scan this for you." And so, we've had a lot of help from the combinator community. We've now been putting together the definitive bibliography of combinators. But the bibliography doesn't tell a story. It's just the beginning of a story, so to speak. It's giant. There must be several thousand entries.

And I've categorized them and looked at every one of these papers. But it's an interesting thing because it's like, "You want academic references? Here are all the academic references to this field, complete with the manuscripts written by the now-90-year-olds back in the 1960s." And things which we've now got scanned copies of and put on the web, and so on. I suppose I'd been paying attention to history, but I'd not been directly interested in history until I was trying to disentangle the history of the ideas in NKS. Because the other thing about NKS was that I was, frankly, surprised that people hadn't noticed a bunch of the things that I'd noticed. And I realized, only after the fact, this key point, that yes, you can notice a phenomenon, you can observe it in an experiment, but without a broader intellectual context, you don't really have a place to put it, and you just ignore it.

And so, sort of the main story was having this place to put this broader idea of what the simple programs do. And the people criticized the A New Kind of Science title, but it's a pretty good title for what it is. And it's kind of funny because people at various times were saying to me, "What is this thing? Is it this? Is it that?" "Oh," they realized in the end, "it's a new kind of science. That's what you called the book as well."

Zierler:

As you were conceptualizing NKS, academic professors at that point were generally not writing very big books with very big ideas so broadly conceived. Was there a particular literary tradition that you saw yourself in that was perhaps somewhat anachronistic at this point?

Wolfram:

I was just a practical person. At the beginning, I got the then-potential publisher of the book to do some market studies because I was curious. There was sort of a trend of people writing pseudo-popular science books. And I wanted to know who was buying them. The answer was interesting. It was people who would've bought philosophy books in a different age, but the philosophy books were now too academic, and they were buying science books as intellectual fodder, so to speak. And I thought it was interesting. So I was certainly aware of what was out there, but I made the decision, "This is going to be a piece of primary science. This isn't a popular book in that sense." I was certainly aware that plenty of people in the past had written big books, from Charles Darwin, Isaac Newton on. I remember when NKS was coming out, I was considering getting back cover quotes for the book.

And so, I talked to Steve Jobs about back cover quotes. I was asking him if he'd write me a back cover quote. And eventually, he came back to me and said, "Look, Isaac Newton didn't have back cover quotes on the Principia. Skip it. Do yourself a favor, don't have back cover quotes." Which was right, actually. It was a good piece of input because they would've been very anachronistic very quickly. But I was certainly aware of the big books of the past, D'Arcy Thompson's Growth and Form, another big primary science book. One of the things I suppose I felt I'd learned from Darwin was never write a second edition.

So the first edition of Origin of Species is much cleaner than subsequent editions. In subsequent editions, there's a lot of stuff about, "Let me respond to the argument of Professor So-And-So," which nobody's ever heard of anymore. So that taught me to never write a second edition. And try and be as definitive as you can to begin with.

You asked what was my scoping of the project. My objective was, pick all the low-hanging fruit. In any one of these areas, if there's something that can be done with these methods, just do it. And with the tools that I had, I was pretty efficient at doing those things. As it turned out, if I had been savvier about history of science, I might've realized differently. Because one of the problems with picking all the low-hanging fruit is, then there's no other fruit for people to pick that's low-hanging, so to speak. And that means that when you develop a community that's working on something, the fruit that they have to pick is already quite high or not very interesting.

For example, with our physics project now, perhaps not out of awareness of the fact that it's not bad to leave low-hanging fruit, there are plenty of things, which we've been seeing over the last year, people starting to engage very well with, which are maybe not as low-hanging as they might be, but it's not the case that we've mined everything out, so to speak. There's a lot of very juicy stuff that is not so far away to be explored there. And so, I think I had considered, in the early days of the NKS book, calling it A Science of Complexity. That title was a terrible loss because everybody who I would say that title to would say, "That sounds very complicated." And when I started saying I was going to call it A New Kind of Science, the first question people asked was, "What's new about it?" Which was a much better question. And that's a question that you can kind of dig your teeth into.

Another person who I'd sort of learned in a sense from was Benoit Mandelbrot, who wrote his book on fractals, which was another primary science book that was presented kind of in a public setting, and that had gone quite well for him. What hadn't gone well for him was, he then tried to write endless academic papers in endless fields and tried to sort of insert fractals as a field of inquiry. And he did that by dropping in on all these different fields and coauthoring papers in these fields. That worked very poorly for him for reasons that took me a little while to understand. The main problem was, he would come into a field, and there would be some little subfield of that field that was sympathetic to his ideas. He'd write a paper with those people, but the detailed political dynamics of that field were such that he was entering in some weird corner of the field, and other people didn't believe in it. And the fact that he personally was in there was a negative.

And so, I had kind of made the decision, "I'm going to encourage people to write all the cool papers they can." And we started the summer school, which is still going today, for kind of teaching people about New Kind of Science kinds of things, and so on. And lots of people have written lots of great papers, and I haven't been a coauthor of any of them. Because it's like, "Just go do your thing in your area. I don't want the social dynamics of doing that." I think the Benoit Mandelbrot example was one where I saw sort of the downside of going and publishing 500 papers. Much better to have one book. That's the thing for which Benoit Mandelbrot is remembered, for what that's worth; the thing where he made the most progress was this one book, rather than the 500 papers, maybe more than that for all I know, in all these different fields.

So that's kind of a motivating example. And his book came out in the late 1970s. And I knew Benoit Mandelbrot reasonably well. I had a complex relationship with him. And when he died, I was going to write an obituary, and I went and found all of the letters I'd exchanged with him. And my staff said, "You cannot write this obituary. These letters are too incendiary and horrifying." And so, I didn't, and I waited until his autobiography was published and wrote a review of it, which was a bit simpler, and didn't get me into some of the terrible things he'd written to me. I was able to just characterize them without quoting them. But it's been interesting. For example, with this physics project, we live in a different time now. And the delivery mechanism for the project has been partly a book, partly online, partly other people writing papers, and so on.

And I think it's been a pretty successful delivery mechanism. But it's a different time. 2002 was before social media. The Web existed, but online books were not really much of a thing. And had I been doing NKS today, I probably would've done it somewhat differently. I certainly would've done the launch of it somewhat differently with social media and things like that.

Zierler:

For all the attention that NKS got, including some negative critiques: among the negative critiques in the academic community, was anything valuable for you? Were there any critiques that you took well, that you thought, "This is legitimate. I appreciate this"?

Wolfram:

No. Simple as that. It was very disappointing. I would've been happy for that. I looked at this again a year ago when I was writing some things about the backstory for the physics project, and before that I really hadn't looked at most of these reviews, and things people had written, and so on. The thing that really struck me was, the center of negativity towards NKS was physicists, people doing fundamental physics. Which is interesting because it was the one field that I'd been a card-carrying member of, so to speak. Actually, there was one other branch, which was the people who'd gotten into complexity as the second generation after the people who I'd originally gotten into complexity with in the 1980s. So this was kind of a bizarre thing. It was like, "Well, yes, you did start this field for us in the 1980s, among other people. But we're now the second generation. We don't want the thing to be changed."

So that was an interesting dynamic, but that was a smaller crowd. It was interesting because looking back on it, the response from every field other than physics was very positive. I think in this backstory piece that I wrote, I characterized it as a parade of Nobel Prize winners with pitchforks. And I was disappointed at the lack of intellectual integrity. I was surprised by the level of emotional response. My assumption had been that most physicists would say, "OK. Kind of interesting. Not what I do. Whatever." That it wouldn't be seen as something directly about physics. I don't know whether it was a good or bad thing, but chapter 9 of NKS is about fundamental physics. And it basically sowed the seeds of what's now our physics project. I don't know to what extent physicists were homing in on that chapter. It's not what most of them talked about. Most of them were talking about the general idea of doing science based on programs rather than equations. And sorry, guys, you lost that one.

Zierler:

So on that note, what dogmas do you think the physics community felt were violated, so to speak, in NKS?

Wolfram:

Well, to quote one member, when I got on the phone with them, the first thing they said to me was, "You're destroying the heritage of mathematics that's existed since ancient Greek times." In other words, mathematical physics and the mathematical tradition in physics. To the credit of this person and others, they did at least understand what the paradigm shift was, so to speak. They were not confused about what the issue was. And that particular conversation was charming for the fact that the next sentence in it was me saying, "Well, isn't it the most amazing irony, then, that I've made such a good living purveying the fruits of that tradition?" since Mathematica's first critical use case came from mathematics and mathematical sciences.

But I would say that the main thing is, modern physics was created between 1900 and 1920-something. We are now seven academic generations past that time. Perhaps more. And it is the nature of things that when you reach many generations later, people don't even wonder about the foundations anymore. For example, with our recent physics project, if I was able to go back and chat with Einstein, or Hilbert, or somebody about it, they would be like, "Yeah, OK. Fair enough." In fact, Einstein himself, back in 1916, was talking about, "I'm sure space will turn out to be discrete, but we don't yet have the tools necessary to analyze that." When a field is originated, nobody is surprised that its foundations may change. By the time you're many generations later, people are like, "This is the paradigm in which we live, and this is the only thing we feel comfortable with." And I don't think that's necessarily hard to sympathize with.

People have grown up and spent 40 years working in some particular paradigm, and it's an uncomfortable thing for somebody to come along and say, "I'm going to smash your paradigm." The thing that surprised me was that they wanted to engage in battle. What I had expected is, "We're doing our paradigm. We're very happy doing our paradigm. We think our paradigm still has legs. You do your paradigm in the next valley over, so to speak. We don't care about that." That was what I'd expected. Which would've been fine. The thing that was surprising to me was the emotional level of the reaction. And I think that spoke to the weakness of physics, actually. I think it spoke to an insecurity that I didn't know was there. Because I think the point of view could be, "We know what we're doing. Physics is fine. String theory's going to deliver the fundamental theory of physics. We're all good here. Let's just keep going."

And in terms of some of the naysayers, I would say I was disappointed for two reasons. People show me new ideas all the time. It's the nature of my place in the world. And sometimes I don't understand them, sometimes I'll say, "I don't really believe in it," whatever else. But generally, when somebody who I kind of know has done real things in the world shows me some new idea, there's a certain a priori assumption that it's not completely crazy. Because it just turns out that history is a good guide, that people who do things that turn out to make sense, sometimes even when people didn't think they would make sense beforehand, the next thing they do that doesn't seem like it's going to make sense, maybe it won't work out, but it's not absolutely crazy, so to speak.

And so, I was sort of disappointed at the level of, in my view, poor thinking about, "Look, guys, you use our software every day, and I was in your business 20-something years ago. It's probably the case that what I'm doing right now isn't completely crazy. You may not care about it. It may not be useful for your work. But it's probably not completely crazy." And I was surprised. I also was surprised that people would commit themselves to saying, "This is a bad idea." It's going to look more and more silly. It already looks silly. It's going to look more and more silly, and people will say, "Oh, Steve Weinberg wrote this thing that said this was all stupid. And he was completely full of it. He didn't need to write it. It was a mistake." And it's like, "Why do that? What was the point?" And that was the thing that surprised me. These are people who should have some knowledge of history, should have some sense. As I say, they might not care. But to decide that they understood what it was, and that it was nonsense was just dumb. Plain and simple. I don't know whether I had sort of an official opinion of that crowd, but my opinion kind of dropped to nothing.

Zierler:

Who were some allies or supporters in that crowd? Who in the physics community got it, as far as you saw? Surely, this is not a uniform block of reproach to NKS.

Wolfram:

No. There were plenty of very positive pieces of feedback. The other thing was, I went around the country and gave talks about it. Which were very well-attended by physicists, among others. And the only thing that I was disappointed about in those talks, which were usually kind of filled to capacity, was that there was too much gray hair. Which was interesting. That is, the older crowd, who might've been the naysayers, showed up to the talks. And I was by no means heckled. They asked questions, and the questions were a bit repetitive, but it was perfectly reasonable. So I don't know. I didn't read these things where somebody was saying, "It's all nonsense." I literally just looked at some of these things for the first time in any detail a year ago, when I was writing this history of the physics project.

There were just tons of physicists who were very positive about it. The thing that would most make me remember them is if they'd quickly written papers and things based on it, and there weren't so many of those. Because as I said, I'd picked a lot of the low-hanging fruit. So it took a while for there to start being good follow-up. It'll be the 20th anniversary of NKS next year, so I'm about to initiate a project to go and do the analysis of what came from the book, so to speak. For the tenth anniversary, I did a study of what kind of response the book had gotten. And although one might focus on the negative reviews, the fact is, I made a pie chart of the sentiment of the different reviews, and it's overwhelmingly positive. So this is human nature, that people tend to focus on the negative, and not on the positive. Which is a good reason to not read the reviews, positive or negative.

But it is a shame at that time that social media didn't exist, and there wasn't really a venue to engage and have the discussion, so to speak. I remember with Steve Weinberg in particular, who wrote an eloquent and foolish review. I had lunch with him. I'd actually never met him before that time. But I had lunch with him maybe a year or two later. And he was saying, "Oh, I didn't understand that," when I explained some things in the book. And it's like, "How did you read this whole book? This is a book many people have really understood. How did you not understand that fundamental point about what the point of the book was?" Actually, now that I think about it, it's actually sort of interesting because in a sense, perhaps what I would have to say is that despite the title of the book, he didn't believe it, so to speak. As in, if you tell somebody, "This is a new kind of science. If you think about it using your existing approaches to science, you will be confused," which is pretty much what I said in the book, he probably just said, "That's obnoxious, and I'm going to try and understand it using my existing paradigms."

And with respect to those existing paradigms, it just doesn't compute. Doesn't make sense. I did say that very explicitly in the book because I had also observed it among particular people that I had interacted with about the concept of the book. I said, "The first thing you need to understand is that the book is trying to do a big thing. You don't necessarily have to believe in the big thing, but if you actually want to understand what this book is about, you have to understand that it's trying to do a big thing." And I think I even had, among my many notes at the back, a note that was mocked by some, but I think appreciated by others, that was literally a note called something like Clarity and Modesty. And it was basically a note that explains, "I can say, 'Oh, shucks, there's nothing much here,' or I can say, 'Look, I think this is a big thing.'" If you want people to understand, you have to explain that it's a big thing.

And so, it's an interesting dynamic. I think that the main result of sort of negativism from physicists was, I had intended to continue my efforts in fundamental physics after NKS was done and try and explore whether the methodology that I'd kind of invented for understanding physics might really pan out. And I did a little bit more on that in 2003, 2004. But basically, my market study indicated total negativity among the target market. Which took two forms. One was active hostility, like, "Please don't do that project," type hostility, and the other was that literally, when I would talk about the project to my friends in the physics community, I do not believe that I managed to keep anybody's attention for more than 15 minutes over the course of the 18 years between the NKS book and the physics project. Now, what was strange about that was, in the general public, and even among other academics, the interest in this direction in physics was quite high.

And so, for me, it's a sort of fascinating study in the history of ideas because I told literally millions of people about this approach to physics. And I gave a TED Talk, which one of the people working with me on the physics project re-watched when we were about to launch the physics project, and he pointed out to me that I had said in whatever it was, March of 2010 or something, "I hope that within this decade, we'll get to kind of figure out what the fundamental theory of physics is." And he was pointing out we missed by a month in the launch of our new project. But what was interesting about that was that millions of people were exposed to that, and yet, basically, nobody did anything. And that tells you that you have an idea that is not in its time, so to speak. Well, when I say nobody, there were a few students who came to our summer schools who were interested and did do some things.

And in the end, one particularly persistent student, who maybe we'll talk about later, really wanted to work on this kind of stuff. But it was interesting, I would say it's a pretty pure-play example for the history of science: there is a paradigm, seven academic generations in, and it's just going to do its thing. Now, the current physics project has been going very well, and its absorption in the physics and mathematical physics community has been very good. And I've asked people, "What's different?" I think it's a different time. Physics has somewhat lower self-esteem. Fields with high self-esteem are particularly hard fields to introduce new ideas in.

I think, also, the fact that our physics project has these close relations to a lot of other areas in mathematical physics, which I didn't expect and which has been quite wonderful, has helped. It's helped that the whole notion of computation being something that might be relevant to physics has kind of entered through the back door of quantum information theory, and I think that's been another dynamic.

Another thing that helps is the world of science is in some ways more closed, but in some ways, more open. The institutional structure, there's journals, and peer review, and this, and that, and the other. And it's as locked down as it ever was, perhaps more so. But there's livestreaming, there's posting things on the web. There are lots of ways to reach the world that don't go through any particular academic or other gatekeepers. For example, when this physics project came out, I was like, "Are we going to pitch this to newspapers? Well, we're in the middle of a pandemic. Let them write about the pandemic." It's like, "We're not even going to pitch it to newspapers. I don't really care." A number of newspapers did write about it, but I frankly just don't care. Because I know that the people we're reaching, I'm perfectly well able to reach by things I write and things that exist as livestreams, podcasts, this, that, and the other.

The world of sort of the gatekeeping of information of that type is over. And I think that that's been another reason why it's a different dynamic in what's happened with this project. I think I've also been fortunate that we've got a number of very energetic young folk who've been working on the project, who've been kind of able to be a less threatening place to go in talking about some more technical aspects of the project, which maybe I also wouldn't understand, so it's just as well. But I think our current project is going really well. And obviously, 18 years will be compressed to nothing in the history of physics in time.

But it is certainly something where people will notice that something was delayed by 18 years, which would've happened 18 years ago. I didn't know what would happen with the physics side of the NKS book, but I full well knew that the ideas in the book are inexorable. This idea that programs are going to take over from equations as foundational ways of making models, the idea that computation is going to be a key paradigm for thinking about science, these are not things where I wondered, "Is this going to happen?" No. I knew that was going to happen. It's only a matter of how long it would take.

Zierler:

So now that we're almost 20 years since publication, you've already given your views on the utility or not of a second edition, but in light of any advances in theory, observation, experimentation, computation, is there anything that you would have chosen to emphasize more or less in your initial effort?

Wolfram:

No, it was pretty good. Honestly, for the 20th anniversary, we're going to have sort of an annotated version on the web with updates to different things. There are very few substantial updates. The only technical one was, I had come up with this Turing machine that I thought would be the simplest universal Turing machine, and that's a piece of evidence for this principle of computational equivalence of mine. And in 2007, I put up a prize for somebody to prove or disprove that it was a universal system, and this young chap, Alex Smith, in England, proved that it's universal. So the book says, "I think this may be universal." But now, we know, yes, it is universal. But there are surprisingly few of those. There were surprisingly few mistakes.

It's interesting because the book has 12 chapters, and people, for better or worse, who know the book well refer to different things by chapter number. For example, chapter 10, which is about processes of perception and analysis. For some people, that's an important chapter. For me, that hasn't been an important chapter for a long time. It's about to come out and have its day in the sun, so to speak, because it turns out it's important for understanding a bunch of philosophical implications of our physics project results and so on. I guess my approach to these things tends to be, try to put in all the effort to make the thing you do be definitive, even if it takes longer at the time, and then expect that you kind of save time by not writing 17 revisions of that later. There are things that are still not properly worked out, like the use of NKS for technology, which is still only in its infancy.

And I had only a tiny section on that in the book. And I hadn't worked it out at the time. And it's still not properly worked out. The most important technological consequence of NKS came, essentially, through a point of philosophy. I had been interested in building something like Wolfram Alpha, a way of taking the world's knowledge and making it computable, since I was a kid. And I had kept on thinking, "The only way to do this is to build an AI. How am I going to build an AI?" And what came about from this principle of computational equivalence was the realization that there isn't a bright line that separates the intelligent from the merely computational, so to speak. And so, I was thinking about this. "Well, can I build something like what's now Wolfram Alpha?" And it's like, "Well, gosh, according to my own paradigm, this should be possible. So I have no excuse to not try to actually do it." And so, that was one of the important driving forces behind building Wolfram Alpha. And that was a strange use of basic science because it came through, essentially, the philosophical implications of the basic science.

What's been remarkable with the physics project now is that, if you'd asked me a little over a year ago, "Are there going to be technological applications of the physics project?" I might've said, "Give it a couple hundred years, and maybe there will be. But not any time soon." And it's really kind of interesting because we just got this better way of doing circuit optimization for quantum circuits, a better way of doing calculations in numerical general relativity, and we are in the process of understanding things about distributed computing, and one that will be quite dramatic if it plays out is a distributed version of blockchain, which would be a much more practical way of doing distributed consensus kinds of things that directly uses the physics project, and actually, ideas from quantum mechanics. And one of the things that I find kind of charming about that is, people find quantum mechanics very mysterious. If everybody is used to it from the way financial transactions work in the world, perhaps people will find quantum mechanics less bizarre.

We're still a little ways away from that. But it's been a real surprise to me that particularly the formalism from the physics project has implications in chemistry, potentially biology. It's really quite fertile. Even more fertile in the short term than NKS. The kind of idea of searching for programs that do things, which is sort of a meta idea from NKS, is an idea that we've used a lot, and other people have started using, too. It's an idea that, in a very distorted form, is kind of the deep learning story, although that history is largely separate from NKS, although I don't know whether it's completely separate. One of the most bizarre applications of NKS, which I absolutely did not see coming, is this idea of computational irreducibility, the idea that even if you have simple programs, it can be an irreducible amount of computational effort to get to a result. I'm pretty sure that idea was the thing that stimulated the bitcoin proof of work concept.

We won't officially know for sure until we officially know who the creator of bitcoin was. But I have good reasons to believe that there was a connection there. If somebody had said, "What's computational irreducibility going to be relevant to?", the idea that it's going to be people burning electricity as a way to provide proof of some kind of value would not have been one that would ever have conceivably arisen. So it's kind of a strange thing. I suppose there was another interesting thing with computational irreducibility a couple of years ago now. I got roped into doing some testimony for the US Senate about, basically, how to tell if an AI-based system is feeding you content that it shouldn't be feeding you in a news feed, search engine, or whatever else. And that was my opportunity to get the term computational irreducibility into the Congressional Record.

And I think it's interesting that, because of computational irreducibility, you can't just open up the algorithm and be able to say, "I can see the algorithm, so I know it's not going to do anything bad." The fact that that can't work is something that will be increasingly important societally and something that is kind of bizarre, that this idea that came out of a basic science direction ends up being something that's an important policy issue of how to decide whether the thing is doing what you wanted it to do. Just saying, "What does the program do?" is not enough. "How is the program built?" is not enough. We've kind of elided, and perhaps for the best, the many years of development of Mathematica, and then Wolfram Language, and then Wolfram Alpha. And that was an interesting example of a quite different phenomenon than what had happened to me in the case of NKS.

Zierler:

And on that point, how far back do the ideas go for you that would ultimately become Wolfram Alpha?

Wolfram:

Oh, 1972.

Zierler:

So these are things that you've been thinking about for that long.

Wolfram:

Yes. In detail, we started building it in 2004, 2005. But I didn't have the context to formulate it properly when I was a kid. I was sort of collecting information and trying to sort of organize it, figure out how to compute things from it, except that I didn't really have the proper notion of computation at the time. So the final, "Let's do Wolfram Alpha," I didn't have any idea if it was going to work. Because it's like, "How much knowledge is there in the world? Can you make a decent amount of it computable? Can you understand natural language? Can you just take random questions people ask and understand them?" I remember the team that I assembled to do that, we went on a little field trip to a big reference library. And I was like, "OK, let's look around. Our mission for the next couple of years is to take all this information and make it computable."

And I suppose this is part of the innovation leadership thing, to not have people just say, "You're crazy. We're running away," at that point. Which they didn't. But there has been a long history of people using AI to try and build these question answering systems, and it hadn't been a successful history. And people hadn't been able to do these kinds of things. When Wolfram Alpha was going to come out, the AI community was not that large. It was kind of the AI winter, so to speak. AI was out of favor and out of fashion, and nothing seemed to work. And so, I kind of wondered, how would that little AI community respond to Wolfram Alpha? Would they be like, "This is great"? "This is horrible"? "This isn't really AI"? What would their point of view be? And I kind of had an interesting experience with Marvin Minsky, who was a longtime friend of mine who was an AI pioneer. A few weeks before Wolfram Alpha came out, I ran into him at this event.

And I say, "Marvin, we built this thing. It's a question answering system. You should look at it. It's interesting." I get out my computer. I'm typing something in, and Marvin's not really paying attention. And he changes the subject. Kind of the same thing as the physicists with the physics project. And so, I said, "No, Marvin, this time it actually works." And so, he says, "OK, OK, let me try some things." And then, a couple minutes later, he's like, "Oh my God, it actually works." And then, we're at this event where he's pulling people in and saying, "You've got to see this." So to his credit, it took two minutes for him to decide, "It actually works," even though things like this hadn't worked for 40 years. It's more obvious in a case like that that it actually works as opposed to some paradigmatic approach to science, where it takes more investigation to know what's going on.

But back to NKS, one of the things that I found charming was the piece of philosophy, the conceptualization of science involved. The NKS book is mostly a book of experimental mathematics, in some sense. It's a computational book. It says, "You have this rule. It does this. Here's a picture of it." And that's its main kind of mode of discussion. And so, when people said, "The book is wrong," it's like, "What could that possibly mean?" The book says, "You start from this program. You run this program. It does this." What does it mean that's wrong? You can say, "It's irrelevant. Why would I care about this?" But it isn't the type of thing that has the character of being "wrong", so to speak. So to me, anybody who would say that would immediately disqualify themselves from any serious intellectual understanding of anything because it's kind of like, "Look, you run a program, it does what it does. The fact that it doesn't do what you think it will do is not, 'It's wrong.' You can run the program, too, and it will do the same thing it did for me. I'm sorry if it doesn't do what you thought it would do. But that's not the nature of science."
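
As an illustration of the kind of computational experiment being described, here is a minimal sketch in Python that runs an elementary cellular automaton, rule 30, from a single black cell and prints the pattern it produces. Rule 30 is one of the simple programs the book discusses, but the helper name and defaults here are just illustrative choices, not code from the book.

```python
def run_eca(rule: int, width: int = 63, steps: int = 30):
    """Run an elementary cellular automaton from a single black cell."""
    table = [(rule >> i) & 1 for i in range(8)]   # new value for each 3-cell neighborhood
    row = [0] * width
    row[width // 2] = 1                           # one black cell in the middle
    history = [row]
    for _ in range(steps):
        row = [table[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]
        history.append(row)
    return history

if __name__ == "__main__":
    # "You have this rule. You run it. Here's a picture of it."
    for row in run_eca(30):
        print("".join("#" if cell else "." for cell in row))
```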

The good news about these kinds of things, I suppose, is that the sort of arc of my own career has been such that I've done plenty of things where people said, "This is crazy. It couldn't possibly work." And so, when people say, "This is crazy. It couldn't possibly work," it's like, "Great. I'm onto something now." I remember when Mathematica was under development, there were plenty of people who said, "This is too big. You can't possibly make something this big work." Or things like, "People won't understand how to use this," etc., etc. All kinds of different, "Couldn't possibly work," type things. Probably because I had reasonable success pretty early in my career, I just completely ignore any such, "It couldn't possibly work," type statements.

Even in my own company, there was an interesting dynamic with Wolfram Alpha, actually. Which was, when Wolfram Alpha was under development, we had a very successful business going with Mathematica and what would become Wolfram Language. And people in the company were like, "We don't really need to do something different." It's kind of like academics. They have a paradigm, they do their thing, they're good at doing it, it's working. Why do something different? And I'd had this project that was kind of just a fun little project called Wolfram Tones, which is a music generation project that came out in 2006, 2007. Which was kind of a fun thing, and kind of a nice PR thing. But what commercial aspects there were to it were just a complete disaster. It wasn't really a commercial kind of thing, and it required people downloading ringtones through cell phone carriers, and the whole thing was a mess.

But Wolfram Alpha was under development, and there was a certain amount of, "It's going to be another Wolfram Tones story." There wasn't a high degree of belief in the whole project. And it's one of the advantages of the sort of entity and platform that I built that I was able to kind of hide 200 people working on this project. And it's like, "Well, yes, we happen to have a geo-distributed company." I think they were on a different floor of the building, as well, in our headquarters. So that probably helped. But in the end, I had a lot of people working on this project, but I didn't want to have the battle of convincing my management team that this was a good idea. Just like I don't need to have the battle of convincing the AI community that Wolfram Alpha is going to be a good idea before it exists.

And to the credit of my management team, for example, once I was able to actually demo Wolfram Alpha to them, it was like, "Oh, this is pretty cool. This is interesting. This is going to work." And within half an hour, there was a lot of discussion about strategy around other things we were doing. And it was the right decision for me to not expose that to the team early on when it was still kind of embryonic, and people would've been saying, "No, this is not going to work." And in a sense, for NKS, during its development, the ten and a half years that I was working on it, that was sort of the methodology I was adopting there. Don't tell people about it and have them tell you, "Oh, you should go chase this or that," or, "It couldn't possibly work." Just ignore them and just go do your thing.

And the thing that I have put a lot of effort into is building up a company with very talented people, so that I have a pretty good tool, effectively, for taking ideas and being able to just go and explore them and do things with them. This physics project is small compared to Wolfram Alpha, but it's one of these things where I can just do it. It involves a few people, it involves a bunch of resources, and so on. But it's like, "We'll just do it." And again, I have to say, I didn't think that project would be nearly as successful as it has been. These dominoes just kept on falling in the last few months of 2019, and it really became clear that physics is a lot easier than we expected. I really thought physics was going to be a lot harder. I didn't think I was going to see in my lifetime a bunch of the things that we've now figured out.

Zierler:

What comes to mind?

Wolfram:

The fact that we can see how quantum mechanics works, the fact that we can see the whole structure of spacetime and the structure of quantum mechanics. All these details about how all these different physical phenomena work. The fact that all of this stuff is so tied into all these different approaches in mathematical physics that people have been pursuing, which had seemed quite untethered for many years. It's interesting because what does it take to get to this point?

For example, particularly the applications of our formalism to other fields like mathematics, economics, various other fields that we're exploring. In time, those applications, people will be able to say, "Why didn't we get here directly? Why did we need to go through this crazy physics project to get here?" Because it's basically a methodological thing. It's a particular formalism and approach, so why does it need the physics project? The answer is, because to execute this project, you need several things. You need a good intuition for how the computational universe works, what simple programs typically do. That intuition is limited to a fairly small number of people at this point. And that's point number one.

Point number two, you need an actual knowledge of physics. It doesn't do any good if you don't understand general relativity, if you don't understand quantum field theory. You are not at the starting gate in terms of making a model for fundamental physics. You're ignoring everything that happened in the 20th century if you don't know about those kinds of things.

So for me, there was this sort of incidental fact that I had spent a bunch of years when I was younger doing mainstream physics of that type. Also, I can read a modern physics paper. If I was in other fields, so much has changed over 40 years, for example, that I would have a hard time reading, with knowledge only from 40 years earlier, the current papers. That's true in areas of biology, for example. Although, 40 years tends not to be as long as you think in the history of any science. But in physics, it is funny because things will come up, and the people who are working on this project will talk about this or that, and I'll say, "What are you talking about?" And they'll explain. And I'll say, "Oh, I know what that is. That was called this." The idea is still the same, it's just changed its name in the past 40 years.

So understand the computational universe, know about actual physics, have the ability to do actual computations, have both the tools and the actual wherewithal to just say, "Oh, we're going to search a billion cases. We can just do it." Or we've got something that can visualize hypergraphs. "We can just do it," type thing. You have to have those practical computational skills and tools available. Plus, you have to have the idea that you can do something big, which most people, for example in academia, do not have. The idea that you would try and figure out something about the fundamental theory of physics just seems crazy. As I say, seven academic generations ago, in the early 1900s, it wouldn't have seemed crazy at all. It would've been the obvious next step.

But much later in the field, it's like, "You can't do that. That's much too big." And I was fortunate that a couple of young folk, Max Piskunov and Jonathan Gorard, got involved in the whole thing and basically were like, "You have got to do this project." And I had all these excuses for why we wouldn't do it. Actually, the one thing I had figured out was that my problem for a long time had been, "If I do this project, its target market hates it." So I'm spoiled because most projects I've ever done, the target market loves the project. And that was even true with NKS. There were lots of people for whom NKS is an important book, and lots of people who thought all those nice pictures were really great. And the number of people who are like, "Thank you for making Mathematica, Wolfram Alpha," it's terrific. It's lovely to be in that situation. So it's easy to feel spoiled by, "Why would I do a project for the people telling me, 'Please don't do that project'"? But I thought for a long time, "How could I do the project and have it not be an unhappy story?"

So I finally figured it out. I figured out that the right way to do the project was as an education project. That is, to just say, "We're going to try and climb this mountain. We don't know if we're going to get to the top. We don't even know if it's the right mountain. But we're going to expose what we're doing to the world, and hopefully people will find the view interesting." So I had made the decision that if I was going to do the physics project, I was going to do it as a kind of public education project, so to speak. And where the customers are really the people who are getting excited about physics, and getting excited about research, and so on by seeing what's involved in the frontlines of doing that. And if the people who are currently physicists like it, great. If they don't, whatever.

So I kind of made the decision to do it that way. And then, we started actually working on it. And within a few months, it had just gone so much better than I had expected it would go. And so, we kind of changed the way that we even thought about presenting the project to the world. And it would've been out and about by March of 2020, except for the fact that there was a pandemic going on. And it was kind of weird because we realized that the things we'd figured out about causal graphs in the physics project were directly relevant to digital contact tracing. And we actually started looking at using some of the ideas from the physics project. And by the way, that's a very important feature of what's happened conceptually. These applications to whatever they are, mathematics, economics, what's happening is, because we're able to use the same formalism as the physics project, we're able to leverage all the intuition and results of physics.

And so, that means we can talk about event horizons, we can talk about light cones, things like that. I've been thinking recently about economics. For me, the causal graph is a supply chain. The light cone is those things that are consequences of some change in what happens, in the transactions that happen in the network and so on. There are ideas in economics that relate to those kinds of things, but there is also a very rich set of ideas in physics. Because physics was very successful for 100 years in building up a big stack of ideas and results, and we can make use of that stack in other fields. And the physics project has made deep use of a lot of ideas in computation. It's also made use of a bunch of ideas from metamathematics and from mathematical logic. And I tend to forget that point. But that's another thing you have to know to execute this project: a bunch of stuff from mathematical logic, which I happen to know because those ideas are foundational ideas for our Wolfram Language system. The foundational ideas of mathematical logic are things that we've made deep use of in the design of Wolfram Language and Mathematica.

And so, if you say, "Well, what do you need to have all in one place to make this project possible?" I guess that's another one I forgot is knowledge of mathematical logic. Because the kinds of things, which again, I did use them a bit in NKS–what physicists have heard of the Church-Rosser property? Basically, none. And that's something you need to kind of understand to set up a bunch of these things for this physics project. So, I actually feel it's one of these cases where this project came incredibly close to never happening.

Zierler:

Why did it?

Wolfram:

Because these two young students were enthusiastic and wanted to work on it. And I said, "Great. If you're going to work on it with me, I'm in." It happened because the company had done rather well, and I was like, "I don't really need to put as much effort into it for a little while." It happened because I had a 60th birthday party where I gave a speech, and I said, "I think this is the next project I'm going to work on." And at the beginning, like many big projects I've done, I was just pushing it a little bit, seeing what would happen. And then, the rock started rolling downhill, so to speak. Things started just working. So many things worked that I just never expected would work.

Zierler:

These students who expressed interest, what weren't they getting in more traditional educational contexts that they sought out from you?

Wolfram:

So the first student is Max Piskunov, who'd gotten a master's degree in physics at Moscow State University and came to our summer school. He can be very droll and sometimes has said things like, "I only came because I wanted to come to the US." But he had been very interested in foundational questions in physics. And that was seven or eight years ago, now. So he started working on this at summer school, and then he was going to come to graduate school in the US and thought he was going to join some group that worked on network science, which would be a place where he could continue working on the physics project. And he said to me, "Will you be my thesis advisor?" And I said, "I'm not in that business. I can't go do that."

So he ended up, poor fellow, getting dragged into doing some traditional cosmology project, which he really hated. And he got most of the way through his PhD doing that. Then, the other person who got involved is Jonathan Gorard, who came to our summer school when he was an undergraduate in England. Very bright young fellow who I didn't know quite what to make of initially, because he was interested in so many things. At our summer school, we always sort of suggest a project for people to do there. And I suggested a project for him on automated theorem proving and the generation of proofs. He said, "I can't decide between doing a project about cosmology and doing this project" that I'd suggested to him about automated theorem proving. He ended up doing the automated theorem proving project. But this turned out to be, unbeknownst to anybody, a wonderfully prescient choice of project because it caused him to learn a whole lot about mathematical logic, and metamathematics, and so on.

And that pair of things, general relativity and cosmology on the one hand, and mathematical logic on the other, was an almost perfect background for the things that ended up being what we needed for this physics project. So Jonathan came back, I think, in a subsequent year, as an instructor at our summer school. And then, Max came back. This was in 2019. Meanwhile, I had been kind of thinking about this project in the back of my mind forever, and I had a little structural breakthrough about how to think about these models and how to structure them. And so, I talked to them about it, and they were like, "You have to do this project." So I'm like, "OK." And that's how that got started.

And if it hadn't been for those two guys, I might've tried to do something, but I think it would've been slower. As actual contributors to the project, they were important. They continue to be important. What's happened now is, we have about 50 research affiliates who are involved with the project, and there are a lot of people in the outside world working on things related to it. I would say the preponderance of people working on it are at the rather abstract end of mathematical physics, which is just fine because that's a place where there can be good interweaving of existing ideas and our new ideas. And I think that's interesting. It is remarkable, the contrast between the absorption of this project and the non-absorption of NKS by physics. In other areas, NKS was absorbed well. If you go to areas like art and architecture, there was quick, good absorption. Areas like biology, social science, some areas of computer science, pretty good absorption fairly quickly.

So it's off and running. It's a funny thing for me because it's like a trip to the future, so to speak, for me. Because I was doing physics 40 years ago, and I was just hibernating for 40 years, more or less. And then, I come back, and in some ways, it's funny because there are people that I knew who were eager young students when I was last doing physics, and now they're late career professors. People who were of the older generation are either dead or retired. And I paid a little bit of attention to physics in the intervening years, and I paid enough attention that I stayed more or less current, which is probably less of a good statement about me than it is a bad statement about physics, so to speak. There are areas I didn't pay that much attention to. Quantum computing, I had worked on back in 1981 with Dick Feynman, and I didn't pay a lot of attention to what happened, although I realize some interesting things got done.

And it is kind of ironic that the issues that now seem to exist with quantum computing from our current physics project are direct echoes of the issues that I suspected were there back in 1981. In fact, I recently found a referee report I wrote for Physical Review Letters in 1981, for a paper about quantum Turing machines, in which I voiced exactly the same issues that I'm talking about now. I've been saying the same thing for 40 years. But I hadn't really paid attention to the field for a long time.

Zierler:

One question I'd like to return to, I asked about the state of computational power in 1986. So between Wolfram Alpha and Wolfram Language, by the beginning of the 21st century, what advances technologically took place that allowed you to do things that you might not have been able to do before, even if the intellectual origins were there from long before?

Wolfram:

The dirty secret of the computer industry is, computers actually haven't gotten that much faster in the last 10 or 15 years. In fact, that's been disappointing to me because there were strategic calculations that I made which assumed a more rapid rate of computers getting faster. Because there are things where a given approach is the right way to do it, but it's going to be kind of slow right now. For example, a big thing that happened for us in the 1990s was this: previously, we couldn't expect to run sophisticated code as part of the user interaction loop in a user interface. The user interface had to be like the autonomic nervous system of a human, so to speak, in the reflex arc somewhere. Not something that could go all the way through the cortex. Because it was just too slow. And in the 90s, that stopped being the case. And that allowed us to develop a lot of dynamic interfaces that weren't possible before.

But I think one of the things that did help with Wolfram Alpha was the existence of the Web and the fact that a lot more stuff had been put online in a somewhat accessible way than had been done previously. Although most of the data that winds up going into Wolfram Alpha is not stuff that you just find on the web somewhere, the fact that you have the Web, so you can at least establish that the data exists and can be found, is important. And also, things like the delivery of Wolfram Alpha as a web service obviously relied on the existence of the Web. Blockchain, for example, is a new factoring of the idea of computing. It's computing that happens with computational contracts autonomously in computers. Humans set the whole thing up, but then sometime in the future, a computer will do something, and some contract will trigger, making some other computer do something, and so on.

And both the cloud and blockchain are refactorings of the foundational ideas of computing that are fairly interesting from a conceptual point of view. I didn't recognize that at first. But it's been quite interesting to refactor one's ideas about language. The issue du jour about these things is something I've been thinking about since the 1980s: how do you think about doing computations in a distributed, parallel way? Normally, when we use natural language, it's a one-dimensional sequence of things. I have words, and they're all in a line, so to speak. What if I was able to have parallel threads all operating at the same time? How would I think through this parallel operation of different kinds of things?

That's been a really difficult thing for humans to do. I started working on that in the 1980s, trying to make a language approach to representing distributed and parallel computation. I didn't get as far as I would've liked. What I realized from my physics project is, this is what physics does. Physics is a giant story of parallel computation. All the atoms of space are doing their thing, all in parallel, all the time, around the universe. That's kind of how the whole thing works. But the very fact that we humans can make sense of all this tells us there's some way to parse distributed computation so that we humans can deal with it. And I realized only very recently, in the last couple of months, from this realization about the relationship between the foundations of consciousness and the sequentiality of events, the notion that it's all about having a definite thread of time that you can think in terms of, that we humans are not well-built for parallel computation.

Our brains don't wrap themselves around that very well. It's very unnatural. It's at odds with our perception of time and having a definite thread of experience. So that's one thing. But now, that's made me all the more eager to see what I can do in actually making a way to have humans understand distributed computation. And so, you're asking about advances in computation. Everybody would love to do distributed computation. We just don't really understand how to do it.

Zierler:

Does quantum computing square the circle at all, as you see it?

Wolfram:

Well, no. The thing is that some of the formalism of quantum computing is relevant to this. We have, now, a more foundational version that leads to that formalism. We now know that these multiway systems that we have are a sort of machine code that operates below standard quantum mechanics. In fact, a recent piece of work by Jonathan Gorard and others was one where you're literally taking quantum circuits of the kind people use for quantum computing, and you're compiling them into multiway systems as an abstract compilation step. Then, you can do computations at the level of multiway systems, which are more powerful than the computations you can do using the quantum formalism, and then you can go back to the quantum formalism, and you can see what the results should be. But that means we really do know what quantum computers do, and it's part of our multiway formalism.
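
[Editorial illustration: a minimal sketch, in Python, of the general idea of a multiway system as discussed here, a rewriting process in which every applicable rule is applied at every possible position, so the states form a branching graph rather than a single evolution thread. The toy string rules "A" -> "AB" and "B" -> "A" and the function names are hypothetical choices for this sketch; this is not the Wolfram Physics Project's hypergraph-rewriting code.]

```python
# Toy multiway system: apply every rule at every matching position of every state,
# collecting the branching graph of states. Rules here are illustrative only.
from collections import defaultdict

RULES = [("A", "AB"), ("B", "A")]   # hypothetical toy rewrite rules


def successors(state, rules):
    """All states reachable from `state` by one application of one rule at one position."""
    out = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            out.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return out


def multiway_graph(initial, rules, steps):
    """Evolve for `steps` generations; return the edges of the multiway graph."""
    edges = defaultdict(set)
    frontier = {initial}
    for _ in range(steps):
        next_frontier = set()
        for state in frontier:
            for nxt in successors(state, rules):
                edges[state].add(nxt)
                next_frontier.add(nxt)
        frontier = next_frontier
    return edges


if __name__ == "__main__":
    for src, dsts in sorted(multiway_graph("A", RULES, 4).items()):
        print(src, "->", ", ".join(sorted(dsts)))
```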

And the story of distributed computing is kind of a story of the conflict between quantum mechanics and measurement in quantum mechanics. It's a conflict between quantum processes, which have this whole collection of parallel threads of activity, and the measurement process, which is an attempt to sequentialize things in the world so that we can have a definite thread of experience about the world. And this question of, "Can we do successful distributed computing?" is, in a sense, the question, "How do we do this quantum measurement process?" We have much less control over it in the actual physical case because we're having to do it with actual physical components. In the case of distributed computing, it's similar kinds of problems, but we have much more control over how we do it. And so, it's not that a magic quantum computer will solve the problem. That won't happen that way.

It's actually a very bizarre situation because the formalism that we have with multiway systems is absolutely applicable to classical distributed computing, and yet it is a formalism that maps into quantum computing, in the sense that it is the same formalism. So you could say, "I've got this molecular computer, and it's a quantum molecular computer. But it's not quantum in the sense of quantum mechanics. It's quantum in the sense that it's doing distributed computing using a formalism that is the same as the formalism of quantum mechanics. But it is not quantum mechanics in the sense of h-bar, Planck's constant, and those kinds of things." The field of quantum computing has done better than I expected it to do, I have to say. But my expectation is that there is a very important need to explore ways of computing things that don't just use metal-oxide-semiconductor technology, that use more than just semiconductors.

And so, one part of the quantum computing initiative is this idea of, "Let's explore other physical processes that can be used for computation," which is something very much in line with all the things I've done for years about how all these different systems compute. "Well, we could use ion traps instead of using semiconductors to do computation." That's a perfectly valid thing to do. Then there's the question of whether we get to inject quantum formalism on top of that. Well, you might, but it's a separate, factored thing. And it's not that you are using quantum mechanics, the h-bar kind of thing, to do computing. You're using the formalism, and you're using new ideas that may have quantum mechanics as part of the idea. But you're not doing some kind of magic that has to do specifically with the quantum nature of the computation. The formalism is quantum mechanical. The underlying physics may be quantum mechanical.

But they're not intermingled in that way. And I think we'll see what happens to the quantum computing industry. It's kind of funny because I have many friends in the tech investor community who ask me, "Is this going to work? Should we invest in this stuff?" And I tell them, "The magic quantum brand is not going to work. The investigation of other ways to do computing with physics is a fine idea. And so is the formalism." But I don't know how long that industry is going to keep going. It's a funny thing because it's a tech direction that's had some ups and downs. That was already the story with this paper about quantum Turing machines that I refereed in 1981.

There was a lot of important work, particularly by people like David Deutsch, clarifying the relationship between logic and quantum mechanics that came in the intervening years. And many other people as well. Again, quantum formalism is its own separate valuable thing. And that's grown up over that period of years. But it's a long story. People think, "Oh, quantum computers are very new." They're not. It's 40 years and counting.

Zierler:

With the physics project, do you see this to some degree as coming home to physics? Had you never left? Had physics left you? How do you see these things in the broad sweep of your intellectual trajectory?

Wolfram:

I would never have done the physics project if I'd stayed in physics. Simple as that. Embedded inside a paradigm like that, I might've had a fine time computing Feynman diagrams and working out mathematical physics kinds of things. But to make a big shift of the kind that's needed in this physics project–well, it almost didn't happen, even given all of the advantages, in some sense, that made it possible. But from within physics, it would've been an order of magnitude more difficult.

Zierler:

So let's parse out the distinction between the discipline of physics and the sociology of being an academic in physics.

Wolfram:

What does it mean to be in physics? Well, the NKS book has 100 pages about physics. And they're 100 pretty tightly written pages about physics. So that's a bunch of physics. And that was something that I spent many years doing.

Zierler:

I mean it as in had you stayed on a much more traditional career path. PhD at Caltech, you're at the Institute, and today, you'd be a tenured professor at Harvard or something like that.

Wolfram:

I'd gotten to the point of being a tenured professor before I jumped ship, so to speak. But I would've stayed as a tenured professor. I think, to some extent, for the people who I knew at the time, who were bright people who stayed in physics, there's a certain amount of, "There but for the grace of God go I," type thing. I've been able to do more interesting things because I've had kind of a more diverse environment in which to do them. I think there are things I haven't done that I might've done. My company is comparatively small. 800 people. There are things that I might've done that would've cost $300 million that I haven't done because my company is too small to do a speculative project of that size. But I've had the ability to do a pretty diverse collection of things.

And the nature of the product that we've built, a computational language that tries to make everything in the world computational, the side effect of that is, I have to understand everything in the world computationally. And that's not a bad piece of background or education for doing things like the physics project. The fact is, over the years, all these areas in mathematics, engineering, computer science, and so on, I've had to understand fairly deeply. And I have a strategy for understanding new fields because I've done it a lot of times. You read these books, you go talk to the experts, you go try and work things out, you try and do some project. That's a wonderful education. And it's a unique education because I've had this reason to be educated about all these different areas and pieces of methodology. And there's no way I would've encountered that if I'd stayed as a physicist because in a sense, the value system of, "What do you do as a physicist?" "Well, you're writing papers." That's what you're trying to do.

Maybe you're trying to discover something you can write papers about, and it's interesting, but for me, the value system that's been pretty valuable in building Wolfram Language, for example, is it's about understanding everything computationally. And I'm kind of the only person who's ever been in this position of having gone through all these different fields and tried to understand all these different things. The reasons I've done that are, at some level, prosaic and practical. Like, "I'm building this product. I want it to be as useful to as many people as possible." It's also internal because I find it interesting to learn about all these different areas. But it's given me sort of a unique education that simply wouldn't have been a thing that I would've gotten in physics. Having said that, to start in physics was great. Physics is a wonderful launchpad. The three contenders, in my view, as launchpad fields are physics, computer science, and economics. And computer science is a new entry.

For many years, computer science was not an export field. The only people who did computer science wanted to be low-level programmers, or they were doing very specialized things. But just by the fact that computer science has been applied to so many things, even in the academic process, and has gotten input from different areas, it has become more of an export field. And I sort of realize, even though I don't necessarily have the highest opinion of some of the things that have happened in economics, it is an export field because it's a field where you get to think about things. Maybe not come to the right conclusions in all cases. But the extreme version of that is philosophy, which was, for a long time, the export field, so to speak. That was the field where you learned it, and then you went on and did other things. Maybe that will come back. I think it would be good if it did because I think philosophy is very interesting. I haven't appreciated it as much in my life as I might've done because my mother was a philosophy professor.

Zierler:

Well, we're all doctors of philosophy if we have a PhD. Perhaps, that will mean something once again.

Wolfram:

Right. What's interesting about philosophy as an export field is, it's something where everybody can have an opinion about questions in it. Whereas if you say, "Does everybody have an opinion about questions in algebraic topology?" the answer is no. You have to learn a huge stack of stuff before you can even understand the question, so to speak.

So for me, physics was a terrific starting field. When I was starting out, computer science didn't really exist as a field, and I was able to learn the canon of that field very quickly. And it subsequently expanded. Since I've been continuing to learn it, I think I've gotten a reasonable breadth of knowledge of that field. But it was very easy 40 years ago to learn that field because there wasn't much in it. But I hadn't perhaps internalized that as much as I should've done. That's why it's fun to do things like this. In a sense, I've had the unique education of computational X for all X. Literally, nobody else has had any reason to go on that kind of path. And it'd be interesting to know what I would've done in physics. By the beginning of the 1980s, the obvious stuff that I thought was interesting, I'd kind of poked my nose into already. And I wasn't initially very drawn to these very mathematical kinds of directions in physics. I was –

Zierler:

Such as string theory?

Wolfram:

Well, string theory didn't really exist as a serious thing by the time I left physics. It was still a few years before that got out of the wilderness. At the time, there were things like axiomatic field theory. Mathematical physics was not ascendant yet.

Zierler:

And you got out just as cosmology was beginning. Inflation was just starting.

Wolfram:

Well, I'd been involved in that. I wrote papers about particle physics meets cosmology before those were a thing. So yes, I was aware of all that stuff. That was going to be a long road experimentally. I knew that perfectly well. And it was a long road. It was a good road, but a long road. And I'm glad that I wasn't waiting for the discovery of the Higgs particle or something because I would've been waiting a long time. I was also perfectly well-aware of experimental particle physics and the long time scales of those kinds of activities. I have good archives. I bet that I at least exchanged letters with people about–there were certainly people who said to me, "How can you leave physics?" I suppose I left in two stages because when I started working on complexity and the computational universe, I was still publishing things in physics journals. I had not obviously departed. I'd departed particle physics, but not physics.

And the full departure came through something that was almost invisible, because the development of Mathematica wasn't really particularly visible as it was happening. It was only visible after it was finished. At that time, there was still the kind of divide between academia and industry. And I think people in academia had this view of, "How can you go into this nasty, dirty area of the commercial world?" so to speak. And maybe I've just been lucky this last third of a century, but honestly, it's been a much more pleasant, cleaner world for me than the academic world. It probably depends on what area of the commercial world one's in. Selling highly intellectual products to people who are doing very interesting, highly intellectual things is probably one of the more intellectual ends one can imagine of the commercial world.

But still, for me, in the academic world, there was a question of the value system, there was a question of what people were trying to achieve. There were very intangible kinds of things. "I want to be famous somehow." But what it turned into is, "I want to get the next grant. I want to have a bigger office than the next guy," type thing. And there was a certain sense of, "We're part of a giant procedural thing that's been going since the 1200s." I think it's a pity. I think it would be better if universities saw themselves more as stewards of knowledge that's been around since that time. I think sometimes they lose track of that, particularly in the humanities. But I think that in a sense, universities are big organizations, big bureaucracies. And the whole system is not set up for, "Let's do a strategic, important, innovative thing." That's not what it's about.

It's about, "We've got this big system. It's got hundreds of thousands of people in it. It's got large amounts of money flowing through these very well-defined pathways." There's a boat, and it's in motion. And you can't really rock the boat very much. Entrepreneurism has come up a great deal. Since the mid-1980s when I started my current company, and now, being a tech entrepreneur has gone from being, "Oh, I guess some people do that," to, "It's the coolest thing in the world." I think it's slowly cooling down from being the coolest thing in the world. But for a while, it was the coolest thing in the world. And the thing that's really interesting about entrepreneurism and entrepreneurs is, they just think anything is possible. And that's not what academics think.

Zierler:

On that point, let me ask you two last questions for our discussion. On the point of academia, with your career trajectory, it's obvious that your ambition, your intellectual appetite, your curiosity was never going to be well-contained within the constraints of an academic environment. For other people who might share your sensibilities, but might not have your business acumen, your appetite for risk, your sense of adventure, for those who might want to stay within the boundaries of a traditional academic career, what might need to change about academia so that the people who are users of your products and identify with your worldview can flourish and tackle the kinds of big problems and topics that you've tackled, not as the executive of a private company, but as a professor in academia?

Wolfram:

You're packaging many different pieces here. You talk about ambition, and risk, and so on. I will tell you that from the inside, I take no risks. Almost nothing I've done do I consider to be a risk.

Zierler:

You left a tenured position to start a company.

Wolfram:

So what? You can get another tenured position.

Zierler:

You might not call that a risk, but others absolutely have and do call that a risk.

Wolfram:

I understand. But one important point that I'm making is, internally, I don't see that as a risk. Other people would see that as a risk and say, "Oh my gosh, I shouldn't do this." So my calculation there was, "Whatever. I'll get another job if I need to." But I've been fortunate that I've done a bunch of big projects, and they don't fail. And the reason they don't fail is not necessarily because the project I originally signed up for is the one that I end up doing. But something comes out that's interesting. And you talk about ambition. It's also interesting because I'm curious, because I deal with a lot of young people, and I have a hobby of mentoring people. And there are basically two populations: tech CEOs and kids. And what's interesting about that is, I hadn't realized why I find both of these populations interesting.

And I finally realized, and I wrote a piece about it a year or two ago, that those are the two sets of people who believe that anything is possible. If you're a 15-year-old kid, it still seems that anything is possible. To some kids, at least. And if you're a tech entrepreneur, everything is possible, even if it isn't. Again, from the outside, I might look ambitious. I don't from the inside. From the inside, it's just like, "That project seems interesting. I can do that project." Am I aware that that project will have an effect on the world? Absolutely. Pragmatically, I know, "If I do this project, it will have an impact on the world, which will then be useful for other projects." I'm certainly pragmatically aware of those things. But it's been important to me in my career, such as it is, to notice that there are people who do a big thing when they're young, and then they're completely trapped, because anything else that they do has to be bigger than the thing they already did.

Otherwise, it doesn't fulfill their ambition. But for me, it's just like, "I'm going to do another project, and it's interesting. Is it going to be bigger than all the projects I've done before? I have no idea. And I don't really care." If it's a project that I think is important and a match for my skills–my basic meta skill is, you take a big area, you grind it down, you understand what the foundational issues are, and then you build up a big structure around it. That's basically what I do.

Zierler:

Hans Bethe made this exact joke about Nobel Prize winners. It's very difficult to take on smaller projects after you've won a Nobel Prize. I don't know how much of that was a joke and how much was the truth, but it's certainly a prevalent idea.

Wolfram:

Yeah, well, Nobel Prizes are so slow that by the time most people have them, they're long past the years when they could put energy into research. But if you ignore that point, it's a big cross to bear, so to speak. I think that's why, perhaps, the fact that from the inside, I don't see myself as ambitious is important. Because it means that I don't say, "Gosh, I need to do this because it achieves some ambition." It's like, "I can do this. I think it's interesting. I think it'll have an effect on the world." These are all motivations for me. But what would I be ambitious for? What's the point? You can be rich and famous. Great. I've done this survey among kids that I've interacted with. It's a, "What do you want to do when you're grown up?" survey. And one of the things I had in it was a list of different accomplishments that people might have. Rank these accomplishments from, "I really want this," to, "I couldn't care less." They range from things like, "Make a billion dollars. Write a bestselling book. Be in charge of hundreds of thousands of people. Go to Mars."

All these kinds of things. And the thing you find if you do those surveys is, people, by their early teens, have pretty much decided which of those checkboxes they really care about. And I don't think it changes much after that. I could've said, "I have the ambition to find the fundamental theory of physics," since I was 12 years old. Yeah, it'd be interesting to do that. It's not an ambition that one has because an ambition is, in some sense, "Why do I want to do that? Do I want to do that to feel better about myself?" I'm feeling fine about myself, thank you very much. I don't need it as a piece of kind of self-worth or something. I see people where it's kind of the financial ambition of, "I'm going to work really hard, I'm going to make all this money, and then I'm going to retire and do what I really want to do." This, essentially, never ends well. Very rarely. So for me, it's important that I be doing things that I actually like doing on an ongoing basis. And you're kind of asking a couple of things.

This thing about doing large-scale innovative stuff, I don't know how you build a system for that. It's like people saying, "Let's build a system for minting entrepreneurs." It's like, "Let's send everybody to creativity school." It's a necessarily self-defeating activity because doing big things, they'll be different every time, and they'll be hard. What do you need to enable that? I don't know. For example, I might give some credit to the MacArthur Foundation because it was nice to get some extra little feather in one's cap, so to speak, that wasn't tied to one particular area of theoretical physics. And so, that was one of the rare kinds of things, a feather that is nonspecific. Most of the other nonspecific feathers in the world end up being things that are much more pragmatic, like you made a bunch of money, or something like this.

So there are things that will absolutely kill innovation. It's a very complicated thing because, for example, one thing you could say is, bet on people earlier in their lives, and give them the freedom to do whatever they want. I was mentioning earlier what happened at the Institute for Advanced Study, where Robert Oppenheimer, with the best of intentions, did that a bunch of times. And for some people, it worked out well. For others, it really didn't do. And in England, Oxford and Cambridge colleges would do that. "Oh, you've got a fellowship for life," at some young age. And that sometimes worked well, and sometimes it didn't. It's complicated. For society in general, those kinds of things, in the aggregate, probably would work well. But they look unfair because it's like, "Oh, this person is age 25. You just set them up for life."

There are some countries that have their funding system basically do that. This person age 25 was set up for life to just hang out and think. That's not fair, look at all these other hardworking people. But societally, the alternative is that you have only these very incremental kinds of progress that you have set up a system for. That's going to achieve something societally. The bet on individual people early is going to achieve something societally. For the people themselves, it's a, "Be careful what you wish for," type of situation. If you're set up at age 25, and somebody says, "OK, for the rest of your life, you're just thinking," that's not a great personal situation for many people. Some people will come out of that, and they'll do something amazing. Some small fraction of people will come out of that and do something amazing.

The majority, it'll kind of destroy their lives. I see that with people who make money in the tech industry, but they were not the people doing the heavy lifting to make the money. They were in the right place at the right time, and it's a difficult thing. It's a little easier for the people who were doing the heavy lifting themselves. So in terms of making a machine for making innovation happen, I don't know how to do that. In terms of the kind of science that I've done and perpetuating that, it's painfully slow getting that set up. I have the misfortune of the fact that science is big. And so, injecting something new is pretty hard. And I remember when NKS came out, a friend of mine in neuroscience said, "You should stage a leveraged buyout of physics." That was the way he put it. What did he mean by that? What he meant was, physics has done what it's done, but there are a lot of issues in physics where it's kind of run its course. The really interesting stuff has been done.

This NKS stuff is a different kind of science, but you could, in principle, take over the institutions of physics and recast them in terms of that kind of science. In other words, the American Institute of Physics could start saying, "Let's study the dynamics of cellular automata, and we'll just call that physics." And that's happened before. It's a perfectly valid kind of thing to imagine happening to a field. You change the definition and have it be something different. Now, I looked at that. I thought, "Is it worth trying to do that? Is it worth trying to sort of take the institutions of an existing field and turn them in a different direction?" And I decided it's just not worth the trouble. It's too difficult. It's easier to build something new. But on the other hand, the new thing hasn't yet been built because, basically, the thing one realizes pretty quickly is that all of this takes leadership.

And I can build a company, I can build some areas of science, whatever. Building an institutional structure to do science is a different activity from building a company, or from building the actual science itself. And I'm not a particularly good person to do it. It's not my skill. And so, it's been strange for me to see the field of complexity that I kind of pushed in the early to mid-80s, and now there are hundreds of complexity institutes around the world run by a huge diversity of people. Do I think they're all going in the right direction? Absolutely not. Does it even matter to me that they exist? Honestly, not much, other than as an interesting episode in the history of science. It's cool, but if there's an intellectual direction that I created, it's been so diluted. It's not been concentrated in those particular places. The whole thing about computational X for all X and how that gets injected into universities and academia is an interesting problem which has not been solved.

And in fact, before I was working on the physics project, I was about to do this kind of effort to write a definition of what I thought universities should do about that. Because a bunch of university presidents and provosts who I know reached out to me and said, "What should we do about this? Should we let the computer science department take over our university?" Carnegie-Mellon, Waterloo, that's what happened. My definition is, by the time the computer science department is teaching creative writing classes, it's taken over the university. And that's the computational paradigm kind of taking over. But that's probably not right for every university. And then, the question is, what do you do? Because there's this different paradigm. And then, it's the question, where do you teach the math courses? Do you teach them in the math department or in the department that's going to use the math? And for physics, it's had that a little bit, but honestly, I think the engineering departments teach the physics for engineering themselves. It isn't mostly the physics departments that get to do it, at least in the US.

And it's bigger even than the things that I've tried to do. It's this whole computational X story. NKS is to computational X as pure mathematics is to the applications of mathematics in science. NKS is the kind of pure, abstract end of computational X for all X. And in a sense, once computational X for all X is really up and running, NKS will inexorably be brought along because it's the pure thing you teach. Just like you teach pure mathematics. If mathematics hadn't had the applications it has, it would've gone the way of logic, for example. In the Middle Ages, math and logic were neck and neck in terms of their educational impact. But math got all these applications, in engineering and the mathematical sciences, and logic did not.

And so, there was a whole pure mathematics that developed as an educational enterprise, and logic didn't have that. And computational X is here, and the pure version of computational X will be an important piece of the intellectual foundations of what people learn when they try to understand the world computationally, which is where, basically, every field is going. But I think it's a challenge for universities. What do you do with computational X? And if I had more time, I would finish writing the piece that I started, which has been requested by a bunch of people who run universities because they really want to know what to do. And I did figure out a little bit, but again, it's not really my skill. I was asking them, "So how does this play into the politics of your university?"

And they're like, "Oh my gosh, it's a complicated story." And I was suggesting things like, "Maybe you offer departments that from the president's office, you'll pay for the development of some computational X courses for your X in department X." Because that's something that the president's office can do, so to speak. Whereas if you tell the department, "Go hire these different kinds of people," they'll never do it. They'll say, "Well, we run our little fiefdom, and leave us alone." But if you offer them the money to develop computational X courses and to buy people out of teaching to do that, that's at least something you can do. And then, in these different fields, you have a computational X couple of courses for students to take, and then those get taught, and that drives the faculty hiring, and so on. But I'm not the person to figure that stuff out. I'm at best a distant observer of the ways of academia.

And one question you might ask is, "Where should basic research be done in the world?" In the US, it's done in academia. It's done to some extent in national labs in the US. It's done to some extent in successful companies. Basic research is typically associated with monopolies. If you have a monopoly, it makes sense to do basic research. Otherwise, your competitors are going to take whatever you do. So it tends to be done in these small pockets in the world. And it's an interesting question, how should basic research be done? And one of the things that I've found interesting, and I don't know how it's going to come out, I've been doing this physics project, in part, as a public education, public entertainment project. And there's an interesting question. Like we have some membership program. Get swag, get other cool things.

Partly because people actually want to be involved. And it's like, "Why not? Why are we going to be snooty and say, 'Just because you didn't do all those physics PhD courses, that means you can't be part of the story.'" Why say that? It doesn't seem right. But there's an interesting question of, does the public want to see basic research done? Will the public pay for it? How should that work? And I think finding the fundamental theory of physics is one of the cooler things that you could try to do. And it'll be interesting to see. We have a sort of office for large-scale philanthropic funding for the project, though so far, frankly, my first question is, "How would we use the money?" I'm not going to take somebody's money if I don't know how we'd use it. I have some ideas now. There's a certain value to us having fellowship programs and so on that are independent of universities, where we can basically pay for somebody and then plant them at a university. That has some value.

And I can see in the current cohort of people who have been involved in our research affiliate program–like I was happy to see one young woman, fantastic person, who was having a difficult time getting into physics graduate school this year of all years. She finally got in somewhere. I've been paying for her to work on this project and other things. But now, she'll be part of the system, so to speak. But I think that's an interesting dynamic, and there is a certain amount of philanthropic interest in physics. To diss a little bit on the physics community, I remember when the supercollider was happening. That was a poor moment in the history of the physics community. And I remember I was mostly a hermit at that time, but I went to a dinner at Fermilab. I was living in the Chicago area. And Leon Lederman was there, and I was sitting across the table from him. And this was right before some critical vote associated with the supercollider.

And I was saying, "You really should think about what to say if this vote goes the wrong way. Because you have one moment. If you have a really good quote, it will be remembered." And I spent the whole of this dinner, couple of hours probably, going over, and over, and over it because I knew this was going to be an important issue. I think Robert Wilson had had the quote about Fermilab, actually. "Fermilab doesn't contribute to the defense of the United States, but it's part of what makes the United States worth defending." Just kind of a nice, somewhat memorable quote. But to just say, "We're so sorry that the public won't pay the $10 billion for our latest toy," isn't a very interesting and useful quote. People were reaching out to me from the physics community at that time because they knew I was sort of a pseudo ex-physicist.

I was talking to physics community people about full-page ads in the New York Times and what it would say. I wasn't the worst person in the world to ask a question like that, but the problem was how dismal their thinking about what kind of a thing it might say would be. "You're going to take out an ad, you're going to spend the $20,000, $30,000, whatever it is, take out the ad, this is your moment to make a statement about physics. What are you going to say?" And that was not extractable. It was a lot of, "Oh, this condensed matter physics, it's as important as particle physics. Why do they get the big toys?" and so on, and so on, and so on. And for me, it's a really interesting question. How should basic research be done in the world? Because I saw the Institute for Advanced Study, which in the time I was there, was not an outstandingly great example. It had a $100 million endowment or something like that at the time. It's like, "OK, you're going to spend $100 million. If you want to make basic research happen, what do you do?"

Not obvious. I would say that in some areas, like in mathematics, the Institute had been quite successful. I think the reason it's different, perhaps, is that in pure mathematics, there's less of a fiefdom concept than there is in fields like physics. And so, if you're a successful pure mathematician, you can be just a solo guy or gal just doing your thing. And you're age 45 or so, and you're like, "Oh, gosh, I have to teach all these classes. If only I could just work on number theory." And then, you get hired by some place where it says, "OK, great, you can just work on number theory. You don't have to do anything else." And that's a win because you've set your life to be doing those kinds of things. I think in physics, that wouldn't work as well to get later-career people because my impression is that in physics, there's a certain amount of fiefdom building.

There were certainly counterexamples to this. Dick Feynman was a good counterexample of somebody who just kept on puttering away at physics their whole life. But I have this slight impression that that's less common in physics than it is in the higher end of mathematics. And so, I don't know. I always used to find it amusing that, particularly among the computing-oriented commercial research labs, there was this crowd of samurai or something, where one company was very wealthy, they'd work at that company for a certain amount of time when it had a basic research lab, and then the fortunes of that company would go down. So they would migrate to another one and so on. They weren't necessarily the greatest achievers in many cases there, but that was sort of a dynamic. And it's very difficult for a company to do basic research. It's very difficult to justify it over a long period of time. People come to us and say, "Will you fund me to do some piece of basic research?" No. Not what we do. There's a small amount of stuff that is kind of the hobby of the CEO, so to speak. But other than that, we're just a company that makes tools for people.

And in the making of those tools, we do a certain amount of basic research. But it's not just do-anything type stuff. So I consider it an interesting and very unsolved problem. It's something where I try and do what I can. I've been fortunate enough to be kind of a mentor of sorts for a bunch of people who have been able to be pretty successful. And I try and do my part to help people navigate. It's a challenging thing because every unique success is unique. And if you say, "I'm going to take this pattern, and you should be doing it this way. You should go and get a PhD in this, and do that," it won't work, because any pattern that's gone before is probably not the one that's going to be the big success this time around. And one of the things you see in the entrepreneurial world, unlike the physics world, is the tremendous diversity, at least along some axes, of people who end up being successful as tech entrepreneurs.

There are people who are very intellectual, who've got the fanciest education, who've been professors and so on, and there are people who've dropped out of high school, who are very unintellectual. It's a huge range. And I actually really like that. I find that very invigorating because it's not the case that everybody's more or less thinking the same. And a field where that's been particularly evident is blockchain, which has a very diverse collection of people involved in it, from the extremely intellectual to the make-money-fast crowd. I would say that is one of the things that's probably a negative of academia. Academia, as it is today, which is different from 40 years ago, is, in my impression, extremely constraining. Very much of a groupthink, herd mentality.

And that happens in the subject matter, the value system, even the real-world politics of academia. It's all in one direction. And I see that in people who come to work at our company who got fed up with that. And maybe they agreed, maybe they disagreed with the particular details, but they got fed up with the one-dimensionality of the academic community. The tech industry, particularly in Silicon Valley, is also going somewhat one-track. But the commercial world more generally isn't like that. And if I were to wish something on academia, so to speak, it would be more genuine diversity, though that's very hard to achieve. People talk about diversity along certain kinds of axes, but there's a lot of non-diversity in academia on all the axes.

But you find, in terms of socioeconomic background, the vast majority of academics come from a thin slice. The vast majority have a thin slice of beliefs about the real world, which have nothing to do with the content of what they do as academics. In the humanities, it has more to do with the content, and that's its own independent disaster. But in the sciences, that isn't so much of the issue. So I don't know. It's an interesting problem. And I think that universities, to some extent, their original purpose for hundreds of years was as kind of stewards of knowledge from one generation to the next. And that's a perfectly fine purpose. Whether you bundle in with that doing frontline research is not obvious. There are plenty of universities that are in the business of passing down knowledge from one generation to the next, and it's a great thing for society. Then, do you get better professors because you glue on the research? I don't even know about that.

Zierler:

On that very question, let's end by returning to physics. So if we were able to strip away the politics, the infrastructure, the bureaucracy, and just focus on the ideas, and connect them to your intellectual trajectory, everything from NKS to the physics project now. Of course, today in physics, there remain intractable problems that are just scientific problems, not just bureaucratic, academic problems. We don't have to look at your contributions as a hypothetical, "What if you had always stayed in the field?" Let's just look at now going forward. Best case scenario, if your approach to computation does become widely–40 years is a short amount of time in the grand scheme of things, right? From NKS to the physics project that you're working on now, how might we get to a greater understanding of so many of the problems that remain elusive and mysterious in science? From condensed matter to cosmology, what might that look like from your vantage point?

Wolfram:

It's interesting because if you take the really difficult problems, they fall into a certain number of buckets. You've got fluid turbulence. You've got the structure of quantum field theories. Basically, you know the rules, but you don't know how the system behaves. That's precisely what the NKS story was about addressing. So now, what we've also learned is this phenomenon of computational irreducibility means it's genuinely hard. In other words, there are things where you might say, "Science will eventually solve every problem." Well, no, it won't. And what we've achieved in engineering is, even though we can't solve every problem about how the world works, we've found ways to build engineering structures that are useful for us. We can build airplanes, even though we can't solve turbulence. Because we build things that operate in regimes where we're not having our nose rubbed in the things that we can't solve.
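
[Editorial illustration: a minimal sketch, in Python, of the point that you can know the rules completely and still not know how the system behaves, using rule 30, the elementary cellular automaton Wolfram often cites. The function names and the fixed-width grid are choices made for this sketch, not anything from the interview; the rule itself (new cell = left XOR (center OR right)) is the standard definition of rule 30.]

```python
# Rule 30: a one-line update rule whose overall behavior is, in practice,
# only discoverable by running it step by step.

def rule30_step(cells):
    """Apply one step of rule 30 to a list of 0/1 cells (boundaries treated as 0)."""
    padded = [0] + cells + [0]
    new = []
    for i in range(1, len(padded) - 1):
        left, center, right = padded[i - 1], padded[i], padded[i + 1]
        new.append(left ^ (center | right))  # rule 30: left XOR (center OR right)
    return new


def run_rule30(width=63, steps=30):
    """Start from a single black cell and print the evolution as text."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)


if __name__ == "__main__":
    run_rule30()
```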

And so, I think what will happen is, there are things we just genuinely can't solve. There will be other, as I call them, pockets of computational reducibility that we find and manage to build both science and engineering out of. And there will be some that will be attributed to physics, some that will be attributed to other things. That's where there's going to be a lot of progress, I think. Where you ask, "Can you fundamentally solve the three-body problem?" well, eventually, somebody will prove that the three-body problem is computation-universal, and then the game is over. There will be no final solution to it. This is what I think we'll see. Why do you want to do physics? Well, there are things that are just interesting. But in the end, what usually drives these things is some technological application. Will we understand how to make high temperature superconductors?

We probably will. I think Rutherford was famous for making the comment, "There are two branches of science. Physics and stamp collecting." It's kind of an anti-chemistry story. Well, I finally understood at some point why that was true. In fact, the reason it's true is because physics carved out for itself a region of computational reducibility where the things that it was looking at could be solved with mathematics. Chemistry did not have that freedom. Chemistry just was dealing with the materials and compounds that were presented in the world. And so, it has its nose rubbed in computational irreducibility. And so, the fact that there are all these random facts of chemistry is a fact of computational irreducibility.

So in a sense, what Rutherford was saying was that, in the end, physics made things too easy for itself. It never even addressed the things which, in physics, would've been the hard problems, so to speak. Chemistry had to. And I realized recently that Newton was very lucky that he did rigid body mechanics instead of fluid mechanics. If he had tried to do fluid mechanics, he wouldn't have succeeded. He would not have been able to find the kind of pocket of computational reducibility that he found with Newton's laws. And so, I think the future of physics and technology is going to be a story of finding other pockets of computational reducibility, and realizing and coming to terms with the fact that there are things that science won't be able to tell us. And that's been a 500-year story, basically, from Copernicus on, of, "We're going to figure out everything with science." And it's been striking in the pandemic, so to speak, the immense belief that everything can be figured out from science.

And while I'm as much of an enthusiast for science as anybody, what I have discovered in science is that science, from within itself, shows its own limitations. That is, science itself proves that science has fundamental limitations. And that's something people have not come to terms with. It's beginning. More people understand this computational irreducibility idea. And I think it may come in, actually, through things like understanding AI, even regulation associated with AI, things like that, where you really have to understand computational irreducibility, or you just say things that are crazy. And it may come in through those routes before it comes in so directly in basic science. What other kinds of pockets of reducibility will be found? I always used to tell Benoit Mandelbrot–he had the belief that the science I had done would kind of eventually crush fractals. He thought of fractals as an example of simple rules, elaborate behavior.

But obviously, the stuff that I was doing had similarly simple rules, but a much wider variety of complicated behavior. And I kept on telling Benoit, "No. What you've found is essentially one of these slices of computational reducibility, and that's what's important. That's what drives science forward." Much of what I've found is the wall that science will never get through. And we're finding more of these sort of little doors through that wall, and that's what a lot of it is about. And predicting what those doors will be is tough. One of the things that I've realized in our physics project is that our view of the universe, the way we think about space and time, is not the only possible view. But it's kind of like, "What will the extraterrestrials think?" To imagine another completely different view is really hard. I'm kind of hoping that some of my science fiction writer friends will tackle this.
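
[Editorial aside: the "simple rules, elaborate behavior" point can be made concrete with an elementary cellular automaton such as Rule 30, the kind of system Wolfram studied. The short Python sketch below is an illustrative example only, not code discussed in the interview; the rule number, grid width, and step count are arbitrary choices. It shows how little machinery is needed before irregular-looking behavior appears.]

    # Illustrative sketch: an elementary cellular automaton ("simple rules,
    # elaborate behavior"). Each cell is updated from the 3-cell neighborhood
    # above it; the 8-entry rule table is encoded in the bits of RULE.
    RULE = 30  # Rule 30: tiny rule table, highly irregular output

    def step(row):
        """Apply one update of the elementary CA rule to a row of 0/1 cells."""
        n = len(row)
        new = []
        for i in range(n):
            left, center, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
            index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
            new.append((RULE >> index) & 1)              # look up the new cell value
        return new

    def run(width=79, steps=40):
        row = [0] * width
        row[width // 2] = 1                              # single black cell as the seed
        for _ in range(steps):
            print("".join("#" if c else " " for c in row))
            row = step(row)

    if __name__ == "__main__":
        run()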

What is it really like to be an intelligence that's the size of a planet? What would it really be like to be an intelligence the size of an atom? You'd have a very different view of physics, and a view of physics that will seem incomprehensible to us, I think. And I think the future of physics is understanding those kinds of things and figuring out, "Where do you find these sorts of pockets of reducibility on which you can build, essentially, a science?" But first, the thing is to recognize that there are limitations to the kinds of things you can do and to understand the shape of those limitations. And I think eventually people will understand this. It's happening slowly. You asked about the arc of my life. As an on-the-ground operative in the paradigm-shifting business, it's just as well that I've had a bunch of things to do, because I would've gone crazy from impatience waiting for stuff to happen.

Take the complexity area. It launched in the mid-1980s, and nothing happened until the late 1990s. Fortunately, I was doing a whole bunch of other stuff, and I was happy as a clam doing what I was doing. And I think, for me personally, it was a mistake that I didn't pay more attention to what was happening there, because I just completely disengaged. And physics is kind of funny, and this is another thing I might criticize the academic world for, to some extent. For 40 years, I was kind of orbiting physics, let's say. I hired tons of physics PhDs at the company. I had lots of friends who were physicists. I would go to sort of social events that were physics-y. But there wasn't a particularly good way for me to engage with the physics community. And that's probably a mistake, now that I think about it. If I were the physics community in the time of the supercollider, I would have, for example, put together an advisory board for physics of people who are out and about and care about physics. I think they've tried to start doing this. In fact, I think I offered to help them do this.

Even that would've helped. Because people would've said, like I did, "You should run a full-page ad in the New York Times telling the world why the supercollider shouldn't be cancelled." It's kind of an obvious idea. It's a way to get an idea out if you have one to get out. I found it interesting to see physics go from the golden age of the late 1970s to its later stages. And if I manage to do something with our physics project, if it manages to breathe new life into a bunch of things in physics, it'll be fantastic. I'll be thrilled. Does it matter to my self-esteem? No. Would I be very happy to see that happen? Absolutely. I remember visiting Fermilab in the late 1970s, I guess. Very vibrant place. I was there maybe ten years ago. It's like, the trees in the lobby had all died. The cars in the parking lot were all beaten up. It's a place with its glory behind it, so to speak. Which is a shame.

And I get to see this from the flow of people. So I'll tell you something about physics there. There was a time when the best and brightest would go into physics. That was the coolest thing to do. 1970s and 1980s. In the US, that's not true anymore. It could be true again. Maybe we'll make some contribution to making it true again. It isn't true today. The best and the brightest go into computer X. And they migrate from field to field. They go into machine learning. And in other countries, it's still true that the best and brightest go into physics. Some other countries. And it's kind of sad to see because they go into physics, they get PhDs, and then there's nothing for them to do in physics. And so, we get to hire a lot of terrific people from those sources. And their physics training is highly useful, although they're not doing conformal field theory for us. At least mostly not.

So yeah, I shouldn't end on such a negative note for physics, but fields go through these cycles, and they last many human generations. Chemistry, for example, seemed like such a boring field for a long time. And between nanotechnology, materials science, and lots of other things, it's sort of come to life again. There are certainly areas of physics that have been quite lively. And it certainly doesn't help when your average experiment costs a billion dollars. That definitely puts a certain damper on the rate at which things can be done. But there are wonderful people in physics, even if the societal structure isn't as perfect as it could be. If we can contribute something methodologically to defining a new direction for physics, that's great. Honestly, the early signs are extremely good, which is really nice to see. And I see that as independent of what we're doing, which is what it is; it might be undiscovered for 50 years, and then people say, "Oh, that was really cool. Pity those guys aren't around anymore," type thing.

Independent of that, I think it's sort of almost a good sign for physics that we're seeing decent absorption of these kinds of things. It's a sign that there's sort of an opportunity for innovation there that maybe wasn't present for a while.

Zierler:

Well, on that note, Stephen, I'd like to thank you so much for spending this time with me. I really appreciate it.