This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Sheperd S. Doeleman by David Zierler on May 21, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/46822
For multiple citations, "AIP" is the preferred abbreviation for the location.
Interview with Sheperd Doeleman, an astronomer at the Harvard Smithsonian Center for Astrophysics, founding member of the Black Hole Initiative, and founding director of the Event Horizon Telescope. He surveys his global initiatives and his interest in fostering black hole research in Africa and he describes how the pandemic has slowed down his work. Doeleman affirms that he is of the generation for which black holes were always “real” and not theoretical abstractions, and he provides a history of the discovery that supermassive black holes were at the centers of galaxies. He reflects on the applied science that was achieved in the course of creating EHT, and he describes the unique value that land- and space-based telescopes offer. Doeleman recounts his childhood in Oregon and his admission to Reed College when he was fifteen. He explains his motivations in completing a solo research mission in Antarctica and he describes the opportunities that led to his graduate research at MIT, where he worked with Alan Rogers at the Haystack Observatory on 3mm VLBI. Doeleman narrates the technical advances that allowed his team to achieve an eight-fold increase in bandwidth, and he describes the EHT’s administrative origins and the events leading to the measurement of the Sagittarius A* black hole. He describes what it meant to image the black hole, and he conveys the deep care and caution that went into the analysis before EHT was ready to publicize its findings. Doeleman discusses winning the Breakthrough Prize as the public face of a large collaboration, and at the end of the interview, he considers the ways that EHT’s achievement can serve as a launchpad to future discovery.
Okay. This is David Zierler, Oral Historian for the American Institute of Physics. It is May 21st, 2021. I'm so happy to be here with Dr. Sheperd S. Doeleman. Shep, it's great to see you. Thank you for joining me today.
Thanks, great to be here.
Shep, to start, would you please tell me your titles and institutional affiliations? You'll note that I pluralize everything because I know you have a lot going on these days.
Right. So, I'm an astronomer at the Harvard Smithsonian Center for Astrophysics. I'm also a founding member of the Black Hole Initiative, which is an interdisciplinary center at Harvard that studies black holes across multiple fields. And I'm the founding director of the Event Horizon Telescope.
Do you have teaching appointments at Harvard? Do you have responsibilities to teach? Do you supervise graduate students? Do you have opportunity to serve in a mentor role at all?
Well, I don't teach at this point in my career. I did a little bit earlier, but at this point, I'm primarily a research astronomer, and I do have a lot of opportunities for mentoring. So, at any given time, I'll be working with one or two graduate students, and these days primarily postdoctoral associates. So, I think at the moment, I have something like five postdoctoral associates who work with me on various aspects of the research.
Shep, just a snapshot in time, what are you working on these days? What's compelling to you? What's been taking up your weeks these past few months?
Well, I think it's fair to say that the Event Horizon Telescope is a hard act to follow. We imaged a black hole, and that was really spectacular, not just for the science that came out of it, but also there was a human dimension to it. We're coming to grips with its impact across multiple fields, not just mathematics, astronomy, physics, but even in the areas of philosophy, for example.
But it turns out that with all great discoveries, you want to ask, what's next? And in this case, what's next is something that we call the Next Generation Event Horizon Telescope, which is going to take us from still images of black holes to movies of black holes. The driving motivation there is that you can say a lot from the still image, which shows you the strong signature of light-bending around this ultra-massive, compact object, but you can say even more if you can understand the dynamics. This will give us some sense of how gas and particles with mass move around the black hole, as opposed to photons that are lensed around the black hole. This is hugely exciting. This is going to take up my attention and the attention of many people over the next decade, I think. We hope to have a fully built-out, next-generation array by 2030.
Shep, your day has already been full. What was your presentation this morning for Africa at 5 a.m.? What were you talking about?
I was invited to give a talk at an East African astronomy conference, and it was an absolute pleasure to do so, primarily because with this technique of building a virtual, Earth-size telescope that we use to image black holes, it's all about location. If we can put telescopes in Africa, we would be able to increase the imaging fidelity, the crispness of the images that we view, and also be able to start our black hole movies early, because of course, the sites in Africa are the first ones to acquire sources as they rise over the global array. Whether we’re observing Sagittarius A*, the supermassive black hole in the center of our galaxy, or M87, the 6.5 billion solar mass behemoth in a distant galaxy, African sites will see them first. So, if we can put a telescope, let's say on Mount Kilimanjaro, or in Namibia, or in South Africa, that would significantly enhance the array.
Shep, as a worldwide endeavor, of course, you hear about interesting physics projects in Europe, in Asia, in Latin America, but unfortunately, you don't hear much about what's going on in Africa. To what extent is black hole research bringing Africa, and more importantly, Africans, into the fold?
The possibilities for black hole research in Africa are exciting, and there are a few reasons why there's global interest in this project. The first is that black holes are the most mysterious objects in the universe, bar none. Perhaps the brain and consciousness are the only things up there that are as mysterious. Black holes represent a part of the universe that's just closed off from our view, as described by the cosmic censorship conjecture. We understand that you really can't observe what happens inside of an event horizon, and we also know that the central singularity is the point where gravity and nuclear forces and quantum mechanics all start to play on an equal level. It's the only place we know of in the universe where gravity is as strong as the other forces. All that is shrouded by the event horizon, so we all want to get as close as we can to the event horizon, and really understand the basic fundamental physics and astronomy questions that can only be answered at that level. This is a deep scientific goal, so it draws everyone together.
The second thing is, if you're going to build a telescope to observe this, you need to involve the whole world. This truly is a global problem. You can't put telescopes in one part of the world and hope to tackle this problem. To image a black hole, we need to link radio dishes from across the globe together, and that includes drawing on expertise wherever it is. So, in that sense, there's a leveling effect. If we need to go to New Zealand to put a telescope there, then we need the local experts in New Zealand to help us build that telescope. If we want to go to Africa, then we need the local expertise in Africa to help us build that telescope. And the last part is the wonderful public outreach potential of this project. It's a great chance for us to bring people in, but it's also a chance for us to, I would say, help take regions that are underrepresented in astronomy and bring them into a cutting-edge project, so they can in short order start to contribute to something that is really at the discovery level. This includes several regions in Africa. It is a special opportunity, and where we can we absolutely want to do it.
Right. Shep, what are the overall sources of funding that are most important to this project? Where is the money coming from, both government and private?
At the end of the day, no matter how visionary the science, funding is always a factor. There's a lot of perspiration; there's a huge amount of elbow grease that goes into developing interest among financial supporters. Primarily, we get our funding in the United States from the National Science Foundation. They were early contributors, and when the project was still in a high-risk phase, they came through with some key grants that allowed us to make the first discoveries of horizon scale structure. That was huge. To partner with the National Science Foundation on that made the entire project move forward very quickly. And then there were some key funding opportunities from private foundations. The John Templeton Foundation and the Gordon and Betty Moore Foundation both contributed to the project before we even had the image. The private sources of funding were especially flexible; they told us, "We are investing in people. We are investing in the group here." That was very important, because it allowed us to nimbly reallocate resources exactly where they were needed, and the combination of the National Science Foundation funding and this private foundation funding really gave us the resources we needed to complete the project.
The other thing I would say is that later on we also had considerable funding from Europe. The European Research Council provided funding. Also, we had very important resources coming from Taiwan, and eventually from China. We used telescopes in Chile which were resourced by large groups from around the world. The ALMA Telescope at the San Pedro de Atacama site is funded through Japan, Europe, and the United States. So, all of this came together to give us the resources we needed. I would add one important point: that the EHT was the right telescope to be built at the right time. It couldn't have been done earlier. The reason is that right around this time in the 2000s, that's when we finally had a critical number of existing telescopes, which were originally built for other purposes, that could be equipped with specialized electronic instrumentation to make this measurement. Up until that time, we didn’t have the right number of telescopes in the right locations to do it. It was a combination of having those telescopes available, having a huge leap in the instrumental capabilities for the instrumentation that we needed, and the ability to get everybody behind a coherent science vision, that ultimately allowed us to build out this array.
Shep, has the pandemic slowed you down at all, or were you so remote and collaborative over videoconference before that this wasn't so much of a big transition for you?
So, the pandemic certainly has slowed us down because we're still in a very creative phase, and it's just difficult to be as creative as you can be on a Zoom call.
There's no spontaneity, you're saying.
There's no spontaneity. What we find is it's very difficult to have silence in a Zoom call. People want to fill that space with chatter. People want to keep talking. And if you watch people who are being truly collaborative in front of a blackboard or a whiteboard, there are long stretches where they're just sitting there looking, and thinking, and marinating, and then somebody will finally say, "Hey, I've got an idea." And somebody will say, "That's a good idea." And then they will go and start to make progress on it. With Zoom, it just seems as though there's no space for people to be quiet. It's a small thing, but I think it's an important point for any global collaboration. And there’s the fact that you can't read people’s expressions well, and you can't see everybody in a meeting at a single glance. For all these reasons, Zoom has been difficult, and the telepresence has been difficult. That said, we are a disaggregated project. We exist only because we can work remotely at many different sites simultaneously around the globe. So, in that sense, of all the projects that I know of, we were probably one of the best prepared to go into this extended period of remote work. But it's still taken a toll, and we're anxious to get back to the telescopes and get back to meeting in person.
Shep, before we go back and develop your personal narrative, I'd like to ask some broadly conceived questions that I think will help narrate our subsequent discussion: about nomenclature, about sociology of science, and about the science itself. First, this is probably as much a generational question as it is a scientific question, but for example, somebody like Bill Unruh tells me that as a graduate student at Princeton, black holes were very real in the early 1970s, but at a place like Harvard at the same time, they might not have been, which is just a funny thing to think about in terms of who accepted black holes as a physical reality and when. So, for you, in the course of your career, in your educational trajectory, in your intellectual development, have black holes always been real, or are you part of a generation that saw the transition from some people thinking they were real to the entire field accepting that they were?
What a wonderful question. I'm of the generation where everybody accepted black holes, and not just stellar mass black holes, but supermassive black holes. And not just supermassive black holes in some galaxies, but by the time I was a graduate student at MIT in the late 1980s and early 1990s, we had come to understand that black holes were likely at the center of most galaxies. That was just the lore. That was what we were taught, and the reason was because there were many lines of evidence for this: stellar dispersion velocities, gas dynamics around the centers of galaxies. It was becoming increasingly clear looking at quasars and the increasingly high angular resolution radio results that showed unresolved radio sources in the centers of these galaxies. Everything was pointing to the fact that there were supermassive black holes in the centers of galaxies.
The term black hole itself -- Fred Hoyle coined Big Bang, but he did so derisively. John Wheeler coined black hole, but he did so not derisively. What do you like about the term black hole and what do you not like about the term black hole?
They say that people grow into their names. I'm a firm believer in that. You can get too dogmatic about naming something perfectly at the very beginning, and this is really a case where perfection is the enemy of pretty good. And I think no matter what the name was going to be, these things are so mysterious, and we read so much into them, that whatever it was called would have been fine. “Black hole” winds up being a perfectly good moniker for what these things are.
To refine that a little bit, for our broader audience that might not know the distinction between supermassive and super-duper, right? Can you have a black hole that's super and not massive, and can you have a black hole that's massive and not super, and when you put all of these together, what does that tell us exactly?
Yeah, it turns out that it's pretty clear how stellar mass black holes form. These are black holes that have a mass of a few times, maybe five or six times what our sun weighs, up to maybe 100 times what our sun weighs. The mass of those black holes is determined by the size of the star that exploded to give rise to the black hole in the end stage of the star's life. So, the star explodes, the whole core collapses, you wind up getting a black hole. And then there are supermassive black holes that weigh millions or billions of times what our sun does. And the big question is, how do those form? And the answer is, we don't really know perfectly how they form. It's likely that you wind up with mergers of these stellar mass black holes, and we've seen these events with the LIGO interferometer, so we know for a fact that black holes that weigh many tens of times what our sun does merge together to make even larger black holes. That part of the merging process is pretty clear. And then we know that there are supermassive black holes that are eating voraciously at higher redshift.
So, farther away from us or earlier in time in the history of the universe, there are periods when a lot of the black holes we see at the centers of galaxies are accreting almost maximally. As fast as they can accrete matter, they're accreting matter. So, they're growing rapidly. And somewhere in between, you wind up with this hierarchical merging that likely happens where kind of big black holes, like hundreds of solar masses, start to merge. And then, supermassive black holes in the centers of galaxies merge when the galaxies collide, and ultimately you wind up with this range of masses from the small, few solar masses, up to the billion solar masses. There's been a search for the mid-range. Where are the black holes that weigh 1,000 solar masses? Where are the black holes that weigh 10,000 solar masses? The problem is that they're likely not accreting as fast as other black holes. They may not be as luminous, and they're difficult to detect. There have been some detections recently that I think are pretty credible, which begin to open a window on this mid-range of black holes. But predominantly, the ones that we can see clearly are the ultra-large supermassive black holes, and the stellar mass black holes.
Shep, as you well know, all fundamental discovery requires an interplay of theory and observation or experimentation. Sometimes the theorists provide guidance to the experimenters, and sometimes vice versa. So, both in historical perspective and where you are right now, who is leading whom? Where is the guidance coming from for the questions that are most important to you now, and how has that developed over the course of your career?
That's a great question. At the current time with the Event Horizon Telescope effort and the Next Generation EHT that's forming, I would say there is a real 50/50 contribution here, especially with the Next Generation instrument. I mean, this is a wonderful opportunity. Here we're asking ourselves, where do we put telescopes on the surface of the planet in an optimal way to make movies of black holes? When we ask what is optimal, it's not just where we put these telescopes. Sure, we could put them in a number of different places, and it would probably give us pretty good images, allow us to make pretty good movies.
But we'd like to tailor it. We'd like to ask ourselves, where do simulations point us? What are the open questions in the plasma physics around the black hole? What are the open questions in jet launching from the north and south poles of these spinning black holes? Can we fine-tune the instrument to be directly responsive to these kinds of questions? And we also don't want to leave anything on the table. God forbid, we wind up building this instrument and someone says, "You know, if you had just expanded your bandwidth a little bit, we could do this." Or, "If you had just been able to be responsive to this polarization of light, we could do that." So, we're at the point now where we want to make sure that we have the most traction on the most pressing problems. That happens, truly, when you have a mixture of the theory and experiment.
The other part of your question that you asked is also very interesting. How do things get started? What's the spark to get things going? And in the Event Horizon Telescope, really it was for a long time instrumental. The real developments came from an instrumental point of view. So, in the 1960s it started to become apparent that there were supermassive black holes in the centers of galaxies. In 1974, we discovered the radio emission from Sagittarius A*, the compact radio source in the center of our Milky Way Galaxy. And all of a sudden, we had our own private active galactic nucleus. We had our own private supermassive black hole in our backyard. And there was a race at that time to resolve it. Was it really a point source, or did it have some dimension to it?
So, all the way dating back to the 1970s, people were focused on this one source, and building ever-better interferometers in the radio to try to resolve it, to see if it had extent. And this happened in two ways: one, people started to move the telescopes farther apart which gives you higher angular resolution. But you quickly run into a problem. The problem you run into is the interstellar medium has a lot of free electrons in it, and these free electrons scatter radiation. So, even if you had a point source at the center of the Milky Way Galaxy, and you looked at it with an interferometer that had infinite resolution, and you could see with crystal clarity, it would still look fuzzy because the light that comes from that point source is scattered as though you were looking through shower glass, through this haze of electrons between us and the galactic center.
So, the first thing you realize is you have to go higher in frequency because only at higher frequencies does that haze begin to become transparent, and you can see through that haze all the way through to the center of the galaxy. And then, another challenge arises. Sagittarius A* is accreting all this gas. Magnetized plasma is madly scrambling to get to the event horizon. You can imagine the chaos that ensues because a huge amount of gas is trying to get into an impossibly small volume. And of course, it heats up just as when you rub your hands together and they get hot. And in this case, the plasma heats to 100 billion degrees. Now, plasma that's that hot is opaque to its own radiation at longer wavelengths.
For example, if you try to observe that very, very hot gas at long wavelengths, you will only see the outer boundary of the accreting gas. You have to go higher in observing frequency (to shorter wavelengths) to see all the way through to the event horizon. So, you've got to get through the interstellar medium, you've got to get through this hot gas, and then once you've got the right wavelength of radiation, which is around 1mm wavelength in the radio, or 230 gigahertz if you like frequency, then you have to build a telescope that has the native angular resolution to resolve what you think you're going to see at the core of this black hole. And there, you need to build something that has an angular resolution of just a few tens of microarcseconds, equivalent to resolving an orange on the moon, or something like that. Or if you like money, reading the date on a quarter in Los Angeles if you're standing in New York. So, the technical challenge was to observe at high frequencies, and to create an interferometer that could really see what was going on where the action was.
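[Editor's note: as a rough check on those figures, added here and not part of the interview, the diffraction limit of an instrument of aperture D observing at wavelength λ is θ ≈ λ/D. Assuming the 1.3 mm observing wavelength discussed above and an Earth-diameter baseline of about 12,700 km:

```latex
\theta \approx \frac{\lambda}{D}
       = \frac{1.3\times10^{-3}\ \mathrm{m}}{1.27\times10^{7}\ \mathrm{m}}
       \approx 1.0\times10^{-10}\ \mathrm{rad}
       \approx 21\ \mu\mathrm{as}
```

which is consistent with the "few tens of microarcseconds" quoted here.]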
Now, where does the theory come in? Well, the theory was driving us because there were a lot of models that told us how the emission would be layered as we got closer and closer to the black hole. So, we had some predictions about that. But mostly, we were just driving forward instrumentally, and we knew that we would see something interesting if we could keep pushing higher in frequency and higher in angular resolution. When I was a graduate student, it was all about just building the best interferometer that you could. It was understood that this would be the key development that would push us past this barrier of not knowing what was happening at the black hole.
Now, there were a few other things going on at the same time. In the late 1970s, a true visionary by the name of Jean-Pierre Luminet created the first visualization of what a black hole would look like if it were surrounded by an accretion disc. This was phenomenal. I've talked with Jean-Pierre before. I visited with him here in Cambridge, Massachusetts, and I recently sent him an image, like the one that's hanging behind me here, of the black hole, and I know it was very well received. He was the first person who ray-traced a real simulation, and he also made a dynamical simulation of it. But I think that work for a long time was underappreciated and kind of lost. It wasn't until the early 2000s, when there were some more simulations done, that people like Heino Falcke, Avery Broderick, Rohta Takahashi, and others started to ask the question anew, kind of rediscovering Luminet's work - what would we see if we could resolve this black hole?
But at that point, the underpinnings of the Event Horizon Telescope were underway. So, we'd already started to push to 1mm. We'd already made successful observations at 3mm. Those were heroic days. Everybody's driven by something different, and for me, it's getting new equipment and going to a telescope to make it work. The exhilaration of working with cherished colleagues in those early days, going to tops of mountains, making reel-to-reel tape recorders work in low humidity conditions, it was really all about the instrumentation. Until we made the first detections of compact structure in Sagittarius A*, it was all academic.
On the question of instrument building, if you can reflect on the divide between basic science and applied research. In other words, black hole research as a global initiative obviously needs to be understood in the framework of basic science. There's nothing more fundamental in terms of discovering how nature works than learning about black holes. But -- and I'll share with you a really important point that Nergis Mavalvala made to me about LIGO, which is in the course of building all of this instrumentation -- so much stuff was created in terms of detectors, in terms of dampening, that have value and applications that have nothing to do with LIGO. So, I wonder if you might reflect on the extent that's true for the research that you're involved with.
What a wonderful question. What I would say is that the instrumental component of the Event Horizon Telescope really drove things, but it was not in the area of fundamental device physics. We weren't creating new kinds of radio receivers. We weren't creating new optics as LIGO does. We weren't pushing the boundaries of the very receptors that turn the photons into 1s and 0s. The secret sauce, the kernel of the EHT, was linking to an explosion in bandwidth. Just as our lives have been completely changed by the fact that you and I can have a Zoom conversation that requires bandwidth that we couldn't have dreamed of 20 years ago, the Event Horizon Telescope took advantage of that burst of capability, of high-speed commodity electronics and computers and protocols, and harnessed it in the service of fundamental physics. I think it's a wonderful story. It's really a model for how you can change completely the paradigm of how instrumentation is built.
So, the example is that the interferometric systems that we used to use took decades to design, test, build, and commission. It would take a decade for one of these systems to come into being. And they had names like the Mark IV, or the Mark III. It was as though we were building tanks, like we were going to war. That's the effort that was required to make these systems. Teams of engineers would get together and build the Mark IV. And it was a monumental and heroic effort. Something really changed. There was a transition in the early 2000s. I got a call from someone at Berkeley, a collaborator of mine, Geoff Bower, who at the time was junior faculty at Berkeley. And I had been talking with Geoff, and I'd been telling him we have to break through this sensitivity barrier. The only way to make the Event Horizon Telescope work, the only way to detect Sagittarius A* on long baselines, where you're only receiving part of the energy of the black hole -- the longer the interferometer baseline, the less energy from the black hole it can detect -- was to increase the bandwidth of our instruments. And he said, "You should talk to some people that are here that are specializing in using field-programmable gate arrays." These are small chips that can be nimbly reconfigured to do what floor-to-ceiling racks filled with analog electronics were being used to do in VLBI.
So, I got in touch with someone named Dan Werthimer, who was running the CASPER collaboration. CASPER stands for the Collaboration for Astronomy Signal Processing and Electronics Research. And I began to talk with Dan, and we came up with this idea of building a new kind of instrument. Over the course of about 18 months, we designed a new instrument that had about 10 times the bandwidth. We could record 10 times the bandwidth in something that was about 10 times smaller in volume. So, we were using industry-driven trends in order to build an entire new generation of VLBI instrumentation. At the same time, we were transitioning from reel-to-reel tape recorders, which I had cut my teeth on as a graduate student, that required vacuum tension in order to speed the tape through at thousands of feet per minute, to hard disk drives. One of these tapes was 3 miles long, 18,000 feet, and if you weren't careful, it would take your fingers off. There were real injuries associated with the fact that we were working with machinery. You almost felt as though you were putting your hands into heavy machinery to do astronomical work!
And we graduated to hard-disk drives. Everything now started to be put on hard-disk drives, and that wasn't possible earlier. So, a couple things happened. We linked our wagon to Moore's Law. We said, we're only going to design and build with things that we know will be carried along by industry so that we don't have to worry about upgrades. Industry will do that for us. If we want to build the next generation of EHT instrumentation, we'll just go to the shelf and buy the next generation of commodity electronics. That was huge. So, the cost came down by a factor of 10, the capacity went up by a factor of 10, and most importantly, the time to science -- the design time -- went down by a factor of 10. So, all of these effects magnified the impact of this new instrumentation. And this proved to be the key.
In 2006, we took our first new systems to Arizona and to Mauna Kea, and we set up an experiment to try to detect Sagittarius A* on that long baseline, and we failed miserably. We absolutely crashed and burned, but not because the instrumentation wasn't working. The receiver at one of our telescopes had malfunctioned, and that had destroyed the phase response of our interferometer. In other words, the receiver at one telescope was receiving the total energy from the black hole, but we were not able to accurately record the troughs and crests of the radio waves coming from the black hole. It was a very insidious failure mode because you could see all the energy, but it had been scrambled in a way that would not let the VLBI technique work. And it took us a while to recover from that, but then we dusted ourselves off, we picked ourselves up, and we went back the next year, adding a new site in California. And that time, we succeeded. That was the moment in 2007 that launched the Event Horizon Telescope, because that was the discovery of horizon scale structure. Up until that point, we just didn't know if the event horizon would be shrouded from our view, but after that point, it was abundantly clear to everybody. Funding agencies, all of our other collaborators, everybody realized at that moment that this is now possible. And that is what launched the final phase of the project.
On the question of bandwidth, do you have enough brainpower in the collaboration and sufficient computer power to make sense of all the data you're collecting?
Yeah, so you can rapidly get into what I call data poisoning. It is possible to get too much data and not enough people to start to work on it. I'm going to point out that the amount of data we record in this technique of very long baseline interferometry is measured in petabytes. The amount of data that we recorded in 2017 was about 3.5 petabytes, approximately 5,000 years of mp3 recording. So, a huge amount of data. Now, as we process the observations, the data volume is rapidly decimated. When we combine that data together in a correlator, and we compare, for example, the data that were recorded in Chile with the data that were recorded in Hawaii, we reduce the amount of data by factors of 100 million. That’s because we're comparing signals with fluctuations on nanosecond timescales. Once the recordings are perfectly aligned in time, we can average them for seconds, or for tens of seconds. So, you go from the nanosecond scale to the 1 or 10 second scale, and you immediately decrease your data volume by factors of 100 million or a billion. And then we also average over frequency, which further decimates things. So, this is how you go from 3.5 petabytes of data, where you're carting hard disks around with forklifts, to the point where you can look at the final image on your cell phone, and it's only maybe a megabyte of data. So, that's the raw data.
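[Editor's note: the arithmetic behind those figures, added as a rough check with an assumed mp3 bit rate, since none is given in the interview. Averaging samples that fluctuate on nanosecond timescales over a one-second interval collapses about

```latex
\frac{1\ \mathrm{s}}{10^{-9}\ \mathrm{s}} = 10^{9}
```

samples into one, which is the "factors of 100 million or a billion" quoted here. And 3.5 petabytes played back at a typical mp3 rate of roughly 192 kb/s lasts about

```latex
\frac{3.5\times10^{15}\ \mathrm{bytes}\times 8\ \mathrm{bits/byte}}{1.92\times10^{5}\ \mathrm{bits/s}}
\approx 1.5\times10^{11}\ \mathrm{s} \approx 4{,}600\ \mathrm{years}
```

in line with the "approximately 5,000 years" comparison.]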
But still, after we correlate the data, you need to analyze it. And we were lucky with M87. I mean, we were really lucky. When we first correlated the data in 2017, we started to see some early signs that we were seeing something interesting. And by the time we were a year into the analysis of the data, we were making plots which showed clear signatures of ring structure. I mean, just to the naked eye, you could look at plots of data, and for those of us who are trained in recognizing Fourier transforms, we could see immediately that something interesting was happening. The key development here, because you've asked about brainpower, was -- I also want to go back to something you said before which is that -- you asked about the developments that led to it. It was really the advancement of sparse imaging techniques that was a key. Because, of course, we don't get all the data that we need to form this image. We only have telescopes in a few places on the Earth. Even when the Earth rotates and fills in some of this Earth-size virtual lens, it's still a sparsely sampled problem.
We don't, for example, play in the realm of the optical astronomers, where their entire mirror reflects everything from the sky to a focal point where they put their camera. We have essentially a mirror with most of it blocked out, and we're trying to make an image with a lot of the data missing. And starting with some fundamental theories from maximum entropy techniques used to form images, our group, in a series of papers, tailored these techniques into regularized maximum likelihood imaging procedures for the EHT case. These new methods addressed the question of how to optimize the image recovery when we don't have enough data? And that was really important. We drew on expertise from the computer vision world, where they face these same kinds of problems. We thought innovatively and creatively about how to make these algorithms. I would say that over a period of about four years of sustained development, we defined the state of the art. The algorithms that we've developed as part of the EHT are the most advanced for making sense of sparsely sampled data. And really, the key, and what surprised many people, is that a lot of this was driven by the early career astronomers.
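[Editor's note: the sketch below is added for illustration and is not EHT code. It shows, in miniature, what a regularized maximum likelihood fit looks like: a small non-negative image is fit to sparse, noisy Fourier samples by minimizing a chi-squared data term plus a smoothness penalty. Every number in it (image size, (u, v) coverage, noise level, regularizer weight) is an arbitrary choice for the demo; the collaboration's pipelines use far more elaborate data models and regularizers such as entropy, sparsity, and total variation.]

```python
# Toy regularized maximum likelihood (RML) imaging from sparse Fourier samples.
# Illustrative only; all parameters below are arbitrary demo choices.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
npix = 16                                   # tiny image so the demo runs quickly
coords = np.arange(npix) - npix / 2
xx, yy = np.meshgrid(coords, coords)

# "Truth" image: a thin ring, standing in for a lensed emission ring.
truth = np.exp(-0.5 * (np.hypot(xx, yy) - 5.0) ** 2)
truth /= truth.sum()

# Sparse Fourier sampling: a few hundred random (u, v) points (cycles per pixel).
nvis = 200
u = rng.uniform(-0.4, 0.4, nvis)
v = rng.uniform(-0.4, 0.4, nvis)

# Forward model: direct Fourier transform evaluated only at the sampled points.
A = np.exp(-2j * np.pi * (u[:, None] * xx.ravel()[None, :]
                          + v[:, None] * yy.ravel()[None, :]))
sigma = 0.005
vis_obs = A @ truth.ravel() + sigma * (rng.normal(size=nvis)
                                       + 1j * rng.normal(size=nvis))

def objective(img_flat):
    """Chi-squared fit to the visibilities plus a quadratic smoothness penalty."""
    chi2 = np.sum(np.abs(A @ img_flat - vis_obs) ** 2) / sigma ** 2
    img = img_flat.reshape(npix, npix)
    smooth = np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2)
    return chi2 + 1e4 * smooth              # regularizer weight chosen by hand

x0 = np.full(npix * npix, truth.sum() / npix ** 2)       # flat starting guess
result = minimize(objective, x0, method="L-BFGS-B",
                  bounds=[(0.0, None)] * (npix * npix))   # non-negative pixels
recovered = result.x.reshape(npix, npix)
print("fit quality (final objective):", result.fun)
```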
So, this was really a case where it wasn't the old gray-beards, it wasn't the absolute experts in radio interferometry that did this. There was a real palpable sense that it was the earlier career people, the students and the postdocs, that took on this challenge and used experience from different fields to launch this effort. There were some theorists in the collaboration, I think it's fair to say, that were surprised by it. There were some senior theorists who thought that they were going to figure everything out. And then all of a sudden, this group of upstarts, this group of young astronomers and physicists started coming with these imaging techniques that were astounding. It's been a real joy to watch, and a privilege to work with this group. To give you an idea, this group started what they called the imaging challenges, and they would take an image of a black hole, they would sample it with the Event Horizon Telescope coverage, they'd create fake datasets, and they would corrupt those fake datasets to mimic actual observations. They would say, what if we observe this through the turbulent atmosphere of the Earth? What if the telescopes have gain fluctuations? What if the telescopes are not pointed always optimally on the source, and the black hole signal comes and goes? And they would create these datasets and they would give them to all the people who were developing algorithms. And there'd be a contest. Who could best reproduce the truth image?
It was that sense of friendly competition -- I call it creative tension, which we can talk about a little bit later -- that drove everything, and it was really the fact that these young astronomers, I will say, had not become jaded. These young astronomers just loved working with each other, and there was not that sense of competition that sometimes gets baked in with the more senior astronomers, and the more senior researchers. That created a real potency, and they just loved working together. Even when they were specialized in different algorithms, there was a great sense of camaraderie, and I think that's going to stay with that group forever. I know that when I was young, it was the people that I was working with in hugely challenging circumstances at the tops of these mountains, those are the people I strongly identify with to this day. And I think that for this generation of astronomers in the EHT, that's what they'll remember. They'll remember sitting with Oreo cookies and huge flasks of coffee around conference room tables, and making algorithms, and testing each other. I saw that in real time, and I know that they'll remember that for the rest of their lives.
Shep, a question about categorizing observational platforms. You have land-based telescopes, you have space-based telescopes, and you have -- and this is something, for example, Joe Silk is really excited about -- possibly moon-based telescopes. You can categorize that -- I'm not sure if that would count as space-based or land-based, or something in between. But currently and projecting into the future, which of those categories are most important?
So, I'm going to say that it's a sequence. They're all kind of important. I'll tell you what I mean by that. We have the EHT right now, and the next question to ask is, what are the first improvements we can make? What is the best bang for the buck that we can achieve? Of course, an associated question is, what is the ultimate that we want to go for? You never want to limit yourself just by what you can do now, but you do want to aspire to the future, and they do build on each other. So, the EHT was extreme value for the money. I would say that in astronomy, it was the best result for the minimum amount of money in the history of astronomy. And that's because we leveraged billions of dollars of existing infrastructure. Telescopes that already existed. We brought modest-cost new electronics to all those sites, allowed them to interoperate in a way that was novel, and that no one telescope could do by itself. We aggregated the data from all those telescopes to create a virtual Earth-sized array that could image a black hole, and we did it for tens of millions of dollars. Compare that to something like LIGO, which also came with a phenomenal result, but with a price tag of $1 billion.
Or, even worse, compare it to the SSC, which would have cost $12 billion and was never built.
The mind races with analogies, right? You're absolutely right. I'm not sure if we can do that again, but I'm not sure the EHT has to do that again. We've already proven what we can do. So, now it's fair to say, okay, where do we put the next ten telescopes? And we think the sweet spot is adding ten new telescopes at optimal locations around the globe, which can work with one or two large telescopes, like the ALMA Array or the Large Millimeter Telescope, so that even a number of modest-sized telescopes can fill in the gaps in the Earth-size virtual lens. That will take us to making the first movies of black holes, to studying the dynamics in real time around Sagittarius A*, for example. That will open whole new areas of inquiry, including how energy is extracted from black holes, the dynamical interplay of magnetic fields that launch jets from the north and south pole of these black holes, and testing Einstein's theory in new ways. We're still limited though, and one dares to dream. The next step would be to go into space, because once you've built an Earth-size virtual telescope, there's nowhere to go but beyond the Earth.
There are two ways to do this. You could put something in low Earth orbit, where the orbital period is an hour. And then, instead of waiting for the Earth to rotate over 24 hours to fill in your virtual lens, you just wait one hour, and you get near perfect coverage. So, if you had four telescopes in low Earth orbits, for example, you would get near uniform coverage in just a couple of hours, which would give you phenomenal imaging fidelity. But then, if you want higher angular resolution, you have to go to geosynchronous orbit. That will give you an increase of about a factor of 3 or 4 in angular resolution if you link that orbiting platform with telescopes on the Earth. But it's becoming clear through some theoretical work now, led for example by Michael Johnson at the Center for Astrophysics, that the black hole image really consists of nested rings of finer and finer resolved emission around the black hole.
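[Editor's note: a rough consistency check of that factor, added here under the assumption that the maximal baseline runs from a ground station to a geosynchronous-altitude platform (orbit radius plus Earth radius), compared with an Earth-diameter baseline:

```latex
\frac{b_{\mathrm{GEO}}}{b_{\mathrm{Earth}}}
\approx \frac{42{,}164\ \mathrm{km} + 6{,}371\ \mathrm{km}}{12{,}742\ \mathrm{km}}
\approx 3.8
```

which matches the quoted factor of 3 or 4 improvement in angular resolution.]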
What I mean by that is that most of the light we receive from the black hole is lightly lensed around the singularity. So, you wind up seeing a tufty ring that reflects the diffuse emission around the black hole. But some of the photons make a U-turn around the black hole. These are lensed onto a ring that is much sharper, much more defined, very much more closely related to the geometry of spacetime. And there are other photons that make a full orbit around the black hole. Those are constrained to lie even closer to the photon orbit and are even finer. And you wind up with an infinite series of nested, concentric rings. What I find phenomenal about this is that apparently the history of the universe is stored in concentric rings around black holes. So, there are photons that date back to the time of the dinosaurs, still circulating around these black holes. So, the question is, can you resolve those? And because an interferometer is essentially a spatial filter - in other words, the farther apart you put your telescopes, the more you just average out all the large-scale structure - at some point, you can have your telescopes far enough apart that you're only sensitive to the photons that make a round trip, or that make a U-turn.
And then, you can sensitively look for a signature that is very closely bound to GR. It gives you a much more precise test of Einstein's theory. Now, for that, you want to go to much larger baselines. You could go to the moon. You could go to the second Lagrange point, which is 1.5 million kilometers from Earth. And when you do that, not only can you study the rings around M87 or SgrA*, but you can start to resolve black holes that are at very large distances from the Earth. Because your angular resolution is so fantastic you can resolve the rings around supermassive black holes at high redshift, seeing large numbers of black holes as they existed far back in cosmic time. And that is probably the ultimate goal, being able to build an interferometer that's much, much larger than the dimensions of our planet.
Shep, those ultimate goals lead to the ultimate questions, the most fundamental questions that are important to you. Those fundamental questions, they are what they are, but for better or worse, they necessarily slot into academic disciplines. So, there's cosmology, there's astrophysics, there's astronomy. We can subdivide that. Particle astrophysics, and so on. For you, for your work, for your collaborators, what is your home department? What are the ultimate questions that are of most relevance as we divide these fields, and have those distinctions between cosmology and astronomy and astrophysics, have they shifted over time in your career?
That's a very good question. I personally feel most at home in astrophysics, in the study of the emission around the black hole, how it lights up. And then, related to that, how we can test theories of gravity by looking at the strong lensing around the black hole. And I think that instrumentally the EHT really has a home in radio astronomy, in all the techniques that have been built up over the many years to push the project forward. And really, there is a symbiosis there, because in instrumental astronomy, we do Balkanize into the different wavebands. And once you decide, well, I'm going to make my measurements in the radio, then you want to tailor your efforts and your instrumentation to be optimal for that radio wavelength. And then, that also affects the theory you do. So, you start to look at the theory and say, how can I predict what kind of instrumentation I will need in the radio to make the optimal measurement? So, there is a real interplay between the theory and the instrumentation. But we find our home really in radio astronomy and in the theoretical astrophysics that governs the plasmas around the black hole as well as general relativistic questions.
Now, these also affect cosmology. As I think we discussed before, black holes, especially supermassive black holes, through their feedback mechanisms on large scales affect the way the night sky looks to us. And that's cosmology. So, if you're interested in how galaxies evolve over cosmic time, you need to understand the fundamental processes by which black holes redistribute matter and energy on galactic scales. So, how do they launch these jets? What are the physical processes? When do these jets light up? When in the course of a galaxy's lifetime does a jet start to emerge to potentially disrupt star formation in that galaxy, changing the fundamental fabric and nature of that galaxy itself? These are all processes that trace back to the boundary of the black hole. So, it's a touchstone in that sense. I think it's fair to say that cosmologists are eagerly anticipating a lot of the work that we're doing with EHT just so they can refine models of how black hole feedback works. And the extension of the EHT with ultra-long baselines to look at black holes across cosmic time is of extreme interest to cosmologists, for example.
But stepping back even further, at Harvard, we started an institute called the Black Hole Initiative. And the basic premise of this institute, launched five years ago, was that black holes were a touchstone for a number of fields: astronomy, physics, mathematics, but also history and philosophy. You'd feel right at home at the Black Hole Initiative! I encourage you to come visit. And what we're finding is that there is real interdisciplinary traction with black holes. The Event Horizon Telescope has energized a lot of collaborations. Now I find myself working with people like Andy Strominger, who is a well-known theoretical physicist who works on information theory, and the information paradox with black holes. He's looking for soft hair. How can you tell one black hole from another black hole? Is there some indication that what went into the black hole is somehow encoded in the surface of the black hole? Could you possibly observe that? And Andy is now working on some of these infinitely nested rings with Michael Johnson and others. It's been a real joy to see this cross-fertilization. And you really see this interdisciplinary effect when you're making discoveries at this level, and when you're looking at an object that has its home in multiple areas. And the mathematicians, by the way, are consumed with understanding the stability of spacetime and whether or not new observations can inform that. So, there are ways in which black holes cut across a lot of disciplines.
Well, Shep, let's do some oral history now. Let's take it all the way back to the beginning. Let's start first with your parents. Tell me a little bit about them and where they're from.
Oh, wow. I'm somewhat unprepared for that. So, my parents were -- well, I have a kind of colorful history. My parents were New Yorkers. My father came from the Bronx, and my mother came from Brooklyn, and they were part of the diaspora in that time and in that place. My mom specifically retained her Brooklyn accent throughout her entire life, even though we relocated to Oregon. People would often comment on her accent, and I just never noticed because that's just what I grew up with. So, I grew up mostly in Oregon. I was born in Belgium, actually.
What took your parents from New York to Oregon?
I think just adventure. They had grown up in the city and they wanted to explore the west.
What was their work? What were their careers?
My father at the time was a journalist. Ultimately, he worked for The Oregonian newspaper out there. My mom did various jobs. She was at times a policewoman. I have vivid memories of my father and mother target practicing with her .38 Special in the backyard of our Oregon house. They split up when I was around 7, and we moved closer to the city, to Portland, Oregon. I went to a Hebrew parochial school, of all things, in a synagogue. We were not an especially religious family, but my mom felt that the best education could be received at that parochial school. That was interesting. Then, my mom remarried to the person I consider my dad, Nels Doeleman. That's where I got my last name. And then, we moved to Belgium again. In the late 1970s, we went back to Belgium. My dad, who is a high school teacher in physics, took a sabbatical to study in Belgium. We spent a year there.
Is that different than the initial connection in Belgium when you were born?
Yeah. So, the initial connection happened because my father was a medical student.
Your birth father?
Yeah, my biological father was a medical student at a university in Belgium. Ultimately, he did not take that path, but that's where I was born. About 10 years later, after my mom remarried, our whole family went back to Belgium for a year because my dad was working and studying there. After we returned to the US, I attended public schools through high school. I wound up graduating high school a little bit early. I was about 15 when I graduated high school, which is itself a story. I don't recommend that for everybody. For some reason I was advanced a couple years because people thought that was the best thing for me. I think ultimately it may not have been the best thing for me, but it is what it is, and I graduated early from high school.
Did you stay in parochial school, or did you move to public school?
No, I was only in parochial school for a few years, from grade 2 to grade 5, and then after that I was in public schools. I am a proud product of the public schools of the great state of Oregon. And then I went to Reed College. One of the things that got me interested in astronomy was the fact that in 1979 the path of totality of a total solar eclipse went through Oregon. My parents took me to go see that, and that was really an extraordinary and special event, especially because it was cloudy, and then at the last minute, the clouds parted and allowed us to see totality. That, in some bizarre way, comes full circle with my career. You know? Because the EHT is observing light bending around the black hole, which is very similar to the 1919 eclipse that started the whole test of Einstein's theory of general relativity using the blockage of the sun's light to look at the deflection of starlight around the limb of the sun. So, there are some similarities there.
So, you started college when you were 15?
Yeah.
Did you specifically, or in consultation with your parents, teachers, did you want to go to a small school nearby? Since you were so young, was that part of the calculation?
I don't think so. I just applied to very few places. I thought that Reed would be kind of an interesting fit. It's an eclectic place. There's a good vibe there. I might have gone to a lot of different places but that seemed like a pretty natural place. I didn't go home too often, so maybe it was just a good time for me to get away anyway. But I think it was just mostly that it was an interesting place to be. I didn't give it too much thought, actually.
Did you go in with the intention that it would be physics and astronomy that you would focus on?
I was interested in physics, and there were wonderful teachers at Reed College. David Griffiths was there, and he's the author of many seminal textbooks, both in electrodynamics and particle physics. I was fortunate to study with him, and to take his classes. He was my thesis advisor, and it was wonderful to work with him. It was a really interesting time. But when I graduated from Reed, I was somewhat burned out. I felt like I'd been accelerated through my whole education. I graduated when I was 19, and I wanted to do something different. I didn't want to go to grad school right away.
I remember seeing a flyer. We didn't really have the internet in those days. This was 1986. Nowadays, you do a search: “interesting things to do if you're an astronomer who's burnt out.” And you get a big list of things and you apply to them automatically. These were the old days when you'd go to a ring binder in your physics department, and you'd flip through all the posters of potential jobs. And one of them caught my eye. It was a job working in the Antarctic, to go do experiments down there. And I thought, well that's for me. I applied for it, and in a crazy coincidence, got it. I spent that summer after I graduated hopscotching around the United States going to all the laboratories that had experiments down in McMurdo Station, Antarctica, and also in the South Pole. And once I'd learned about all those experiments and learned how to repair them and how to monitor them and how to run them, we were sent down to Antarctica, and I spent a year in McMurdo Sound.
When you were a kid, were you a tinkerer? Was that sort of part of your world?
Kind of. I was pretty good at taking things apart. Not so good at putting them back together. I did some of the things you would associate with kids who were curious cats. I used to build rockets and launch them. I have distinct memories of doing this with my brother and running out of the materials to build a real rocket. So, we just started gluing wings onto the engines, which is horribly dangerous. There's no stabilization involved whatsoever. Normally, you build a big tube, and there's a housing for the motor, and there's a nose cone made of balsa wood, you glue on fins, and you let it dry. We were just so impatient. We didn’t have the time to go buy another kit, and we had all these engines. I remember thinking, I'm not sure this is a good idea, but gamely just gluing fins directly onto this little, tiny high-powered engine filled with explosive material. And we lit them off, and they'd just go in crazy directions. No forward propulsion at all, just random directions. And one of them embedded itself in the tiles of our neighbor's house. The thing about these engines is that they have an explosive charge at the very end, which is normally used to blow a parachute out the top of the rocket. So, we watched this thing burrow and burrow and burrow, slowly getting into the tiles of this roof. And then there was this pause, which is typical, because the charge is designed not to detonate until the rocket reaches the apex of its flight, and then after about a second, there was this big bang, and the whole rocket flew out from this tile and it started the roof on fire. Thankfully, we had a hose, and we were able to douse any potential fire there. I assume the statute of limitations has expired on things like this. But we were curious. We had a lot of fun. But I wasn't a boy astronomer, grinding lenses and building telescopes and things like that.
As an undergraduate, to the extent that you appreciated the binary in physics between experimentation/observation and theory, were you specifically looking for experimentation type work? Did you dabble in the idea that you would pursue a career in theory at all?
You know, I was pretty undecided at that point. I was still looking for direction. I think what decided me was the experience in Antarctica. We're all shaped by some experience. How does passion flower in people's minds? It's a very personal journey, and a lot of it is due to happenstance: where you happen to be, who your mentor is. I think two identical people with different mentors will take different directions.
And for me, I was thinking about theory, but then when I got to Antarctica, and I had spent a lot of time looking at these experiments, I fell in love with doing hard work in challenging circumstances. When you're in Antarctica, there's no Radio Shack down the road. If something breaks, it's all you. You have to fix it. You become a jack of all trades. It wasn't just that I was working on boron trifluoride tubes to detect neutrons from the sun, or from cascades through the Earth's atmosphere from the sun; I also had to get that data back to the continental United States. So, I needed to know something about the satellite transmission system, which was hooked up to a Yagi antenna, pointed at the horizon, so we could bounce signals occasionally off of a geosynchronous satellite. It was understanding all of this that pushed me into this career that I'm in now. When something broke, we had to try to fix it on the spot with whatever we had, and it wasn't enough to say, "Well, I don't have this part, so I can't fix it." You had to fix it, otherwise a whole year's worth of data would be lost.
Who were some of the senior people in Antarctica who served in mentor capacities to you?
Oh, I was alone.
Just you?
Yeah.
You were the mentor.
Yeah, here's what happened. I did train at various sites around the United States. We went to Stanford and worked in the labs there. I went to University of Delaware. Went to all these sites and learned about auroral photometry, magnetometry, neutron capture for solar emission. We were trained on all the experiments, and then we were flown to Antarctica where we had a week or two crossover period to learn from the person before us. That was it. There was really nobody else there for the whole year. I was in sporadic contact with people back in the United States via very, very slow internet, maybe 300 baud. You'd see the letters slowly form in these messages as they'd come across. So, we had to be very self-reliant. But yeah, I was the only person there. It was a crazy, crazy thing when you think about it, but it was just wonderful. I really loved it.
How long were you there for?
About 13 months. I got there in October of 1986 and left in November of 1987.
How did you do psychologically?
It could be difficult. There were long periods of darkness. So, for about four months, it was dark all the time. And then, in the summer, of course, it's light all the time. The sun just circles in the sky. I was cut off. It's difficult, I think, for people to appreciate now, but in those days there was very limited communication to Antarctica. It's not as though you could just get on a phone or send an email back then. It was often difficult to connect.
As an example, we used the military Ham radio network, called the MARS bands, to talk with folks back home. We did this by using the radio receivers and transmitters to talk to a station in Florida. And they would then link up to a phone, and they would call someone in the US for us. So, you would make the radio connection to somebody in Florida, and you would say, "Here's the number to call." And they would call, and if you wanted to have a conversation, you'd have to click the radio receiver and say, "Over," when you were done, and then the person you were talking to would have to say, "Over," when they were finished, and then the guy in Florida would click his handset whenever someone said, "Over." So, conversations went like this, "I love you, Mom. Over." And she would say, "I love you, too. Over." "How's the dog? Over." It was very difficult to have deeply meaningful conversations, not just because of the repeated "overs," but also because somebody else is listening. Someone else had to listen to your conversation in order to pass it on. Psychologically, it was difficult, but it was also a good time, too.
Did you have well-formed ideas of graduate school at the end of those 13 months?
Yeah, I knew I wanted to apply to MIT, and I did. I took some vacation time in New Zealand, and sent my application in from New Zealand, and learned that I had gotten in.
Was MIT one of the places that you hopscotched around to prior to going to Antarctica?
No, I really didn't know much about MIT at all other than the fact that I thought it was a good school and I wanted to go there. I believe that was the only school I applied to.
Huh. Not a professor, not a program, just reputation.
Just reputation, and I applied there. That was it.
To what program?
Physics. I wanted to do plasma physics. I had in mind that I would work on plasma fusion. That was something that I had in my mind as the next step for me. I couldn't have been more wrong, but then again, happily so. But you asked about the background, and I think it was working in those circumstances in Antarctica that got me hooked. To this day, that's what I like to do. I like to go to telescopes. I like to make things work. Being able to solve problems up at altitude, installing cutting edge instrumentation, seeing it work for the first time, and then ultimately being able to see results that you know you're the first person to really appreciate and understand. Those are really the coin of the realm for true scientists.
When you got to MIT, did you not feel young relative to your cohort, given your experience?
I felt I had had a lot of experience. There were a lot of people who came to MIT directly from undergraduate. I felt I had been working in Antarctica, which is primarily populated by construction workers and sailors. I was the youngest person, I believe, on the ice that year. So, I had to deal with a lot of interesting personalities. It was all great. We had a lot of fun. But it was not an academic environment. It was not college, let's put it that way. I remember I got there and the first thing they said was, "Here's your alcohol ration card." And I said, "Oh, what is that?" And they said, "Well, you can get a case of beer and two bottles of hard liquor a week." That's a fair amount of alcohol! You'd go to this warehouse once a week, they’d punch your card, and you could get two bottles of hard liquor and a case of beer. They would just give it to you, and you'd drive off in your truck, and that was it. So, that gives you some idea of the pastimes that were involved there.
Did you stay with plasma physics initially at MIT?
I tried. It's an interesting story. I was staying with cousins in upstate New York, who are artists. Beloved cousins. And I needed to get to MIT, so I had a steamer trunk filled with all of my worldly possessions. Somebody from that neck of the woods was driving to Boston. So, I hitched a ride, and I really had no idea what I was doing. I had no place to stay. So, I called up the professor, who had been identified as my advisor. And I said, "Hey, I've just been dropped off here on the side of the road with a steamer trunk. What do I do?" I was somewhat clueless about how to go about organizing myself then. And he said, "Well, I'm sorry, but I'm going out to dinner now. I have no idea what you're going to do, but I'm going out to dinner." So, I relied on my network, and I called a few people, and somebody's cousin was living in Beacon Hill. So, they said, "Okay, you can spend the night there." I wound up spending three weeks with these cousins of my friend. Then, I finally found a place to stay, and got situated, and went to work. Ultimately, I did not stay with that particular professor or that particular group and found my way into astronomy pretty quickly.
Departmentally, what's the distinction at MIT between physics and astronomy?
There is none. It turns out that at MIT there's only a physics department, and they have different divisions within the physics department. They have an astronomy division, particle physics, condensed matter, things like that. So, I got a degree from MIT in physics, but worked in astronomy.
Who was your advisor, ultimately?
My real advisor and mentor was Alan Rogers, who worked at the MIT Haystack Observatory. Alan is a very special individual. He's one of the smartest people I've ever met, and is truly a gifted instrumentalist, but also a problem solver. He's an electrical engineer by trade, or by training, but has applied his supreme intellect to a lot of different problems over the years. He was one of the icons of radio interferometry in the early days, coming up with new methods and new analysis algorithms to do the science. He's also developed instrumentation to detect deuterium in the universe, and now is looking at the epoch of reionization. He continually reinvents himself. Studying with him and working with him was one of the most important mentor relationships that I had.
What was Rogers working on at the time you connected with him?
He was working on some of the short wavelength interferometry. I remember I visited Haystack Observatory to see whether there were projects there that I would like to work on, and he described this idea of very long baseline interferometry, which I had known about before, but he was pushing it with his colleagues to the highest frequencies. As we've already discussed a little bit, you really need to go to these high frequencies to see all the way to an event horizon of a black hole. He was already thinking about how to pierce the hot gas around these supermassive black holes, and how to get better angular resolution, but also how to see more deeply into the target sources. And I was hooked, because he said things like, "We're going to go to these remote locations, we're going to install this instrumentation, we're going to make these observations." And it sounded a lot like Antarctica. So, I said, "I'm in." And thankfully, he took a chance on me, and it was really the best thing that could have happened.
Was this NSF funded work as well?
Yes, in those days it was National Science Foundation funded, and those are the days when we were still using reel-to-reel tape recorders, and banks of analog filters, and things like that. It was after working there for about five or six years that we launched this new effort to build out a digital backend.
And this is new technology at this point, building out a digital backend.
Yeah, people had not really done that before. This is the idea that had come from the conversation I had with my colleague at Berkeley. I worked with Dan Werthimer, and Alan Rogers was heavily involved, and engineers at MIT Haystack were involved. And we ultimately developed new instrumentation.
In terms of siting, where were you located? Where were the most important research sites?
The MIT Haystack Observatory is in Westford, Massachusetts. It's about 30 or 40 miles north of MIT itself. That was where I worked for many years. As we started to work on 3mm wavelength very long baseline interferometry, there were sites across the U.S., also in Europe, that had started operating together at this wavelength. You should understand that there were national facilities which were coming online in the mid '90s that were capable of doing work up to 7mm wavelength. There was something called the Very Long Baseline Array, built by the National Radio Astronomy Observatory, which was routinely doing observations at 7mm wavelengths. We were always in the forefront, though. We were always asking, what can we do that is not being covered by these national facilities? How can we push the technique? We had an ad hoc array of international telescopes at 3mm that was making observations.
One of the first things I did when I joined the group at MIT Haystack Observatory with Alan was to carry out observations in 1992 at 3mm wavelength. We published some papers on active galactic nuclei at what was, at the time, a frontier frequency. But we were always pushing to higher frequencies, and it was in the late '90s that we started to make our first observations at 1mm. After I graduated from MIT in '95, I got a postdoc staying at Haystack Observatory from 1995 to 1998, and in 1998, I was offered a staff position at Haystack Observatory. And I said, I'll do it on one condition: I want to be able to do VLBI experiments at 1mm wavelength, and I want to run these experiments. The director of MIT Haystack Observatory at the time, Joe Salah, said, "Okay, you can do that." And we had the resources to allow me to pursue that path.
That was an administrative requirement, or a scientific requirement that you were making?
Scientific requirement. I said, "That's what I want to do. I want that to be my job description." And he said, "Fine." So, then I began to work with some key sites in Arizona, and in Europe, and also in Hawaii, to try to push the technique to 1mm.
Shep, before we get too far afield, for your thesis research specifically, how did you slot in and carve out for yourself a thesis as it related to what Rogers and what Haystack was doing more broadly?
So, my thesis was on 3mm VLBI. I had joined some observations in 1992, as I described, and I analyzed the data from a couple of the sources we observed, which was the subject of my thesis. A big part of my thesis was some work I did jointly with Alan on Sagittarius A*. We had some of the first observations of Sagittarius A* at 3mm, and Alan and I wrote a paper on that. The bandwidth of our receiving system was pretty narrow, about 112 MHz as I recall, so the signals were faint, and part of my thesis was to work on new detection algorithms.
I had also designed and built a new polarizer that allowed the Haystack Observatory to receive circularly polarized light, which is an important part of the VLBI system. In other words, if you're using widely spaced telescopes as an interferometer to observe the same source, and you have them linearly polarized, their feeds will be oriented differently with respect to the source. So, you might be recording one sense of polarization at one site, and a different sense of polarization at another site, whereas you want to be looking at identical polarizations. One way you get around that is by looking at circularly polarized light: light whose polarization vector is rotating continuously. Then there's no preferred orientation, and you are assured of receiving the same polarization at each antenna.
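To make the polarization point concrete, here is a minimal numerical sketch (my own illustration, not Haystack or EHT code, with made-up angles): a linear feed's response to a linearly polarized wave depends on how the feed is oriented relative to the wave, while a circular feed's amplitude response does not; feed rotation shows up only as a phase that standard VLBI calibration can absorb.

```python
import numpy as np

# Minimal illustration (not EHT code): compare a linear and a circular feed
# receiving the same linearly polarized wave as the feed orientation
# (e.g. the parallactic angle at each antenna) changes.

def linear_feed_response(psi_wave, phi_feed):
    """Voltage from a linear feed at angle phi_feed for a unit linearly
    polarized wave at angle psi_wave (projection of E onto the feed)."""
    E = np.array([np.cos(psi_wave), np.sin(psi_wave)])
    feed = np.array([np.cos(phi_feed), np.sin(phi_feed)])
    return np.dot(feed, E)

def circular_feed_response(psi_wave):
    """Voltage from a right-circular feed for the same wave. The amplitude
    is orientation-independent; rotation appears only as a phase."""
    E = np.array([np.cos(psi_wave), np.sin(psi_wave)], dtype=complex)
    e_R = np.array([1.0, -1.0j]) / np.sqrt(2.0)   # one common convention
    return np.vdot(e_R, E)

psi = np.deg2rad(30.0)                 # polarization angle of incoming wave
for phi in np.deg2rad([0.0, 45.0, 90.0]):
    lin = linear_feed_response(psi, phi)
    circ = circular_feed_response(psi)
    print(f"feed angle {np.rad2deg(phi):5.1f} deg: "
          f"linear amp = {abs(lin):.3f}, circular amp = {abs(circ):.3f}")
# The linear feed's amplitude depends on the feed angle; the circular
# feed's amplitude stays at 1/sqrt(2) for any orientation.
```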
So, I'd built some instrumentation that allowed us to do that at the Haystack 37-meter, which was used during these observations. So, I had both instrumental and analytical components to my thesis. Now, the size that we found at that time for Sagittarius A* was much larger than the size of the shadow because at 3mm we were limited by the interstellar scattering. This even predated thinking about the shadow, really. And we realized that in order to make progress we had to go higher in frequency.
Obviously, in your world, 1mm, 3mm, 7mm, these are very big distinctions. I wonder how you might explain to your broader audience, what's the big difference? This is only a few millimeters.
Yeah. It sounds like a small difference. So, going from 7mm, which is 43 GHz, to 3mm, which is 86 GHz, ultimately to 1.3mm or 230 GHz, a few things happen which make observations more difficult. First, the electronics gets noisier. As you go higher in frequency, the noise level of your electronics goes up because you're asking the electronics to operate at much higher speeds. You're looking at much smaller devices, and the amount of thermal noise increases. That means the sensitivity of your system goes down because you want to look at the same source, but the instrumentation noise is going up as you increase the observing frequency. So, that's one thing you're battling. The biggest problem, though, is the atmosphere of the Earth. One of the things that absorbs radio waves is the precipitable water vapor in the Earth's atmosphere.
Specifically, that water vapor in the troposphere is turbulent, so you can wind up with path length fluctuations. The delay of light coming through the Earth's atmosphere is variable based on the roiling and changing water vapor in our own atmosphere. Since an interferometer measures phase - it measures the difference in time between when a wave hits one geographical location, and it hits another geographical location - the jitter in the phase caused by the turbulence in the Earth's atmosphere can destroy your measurements. And then, going to higher frequencies imposes a further penalty because the dishes themselves generally become smaller. The reason for that is that you need a very smooth dish. The dish has to be smooth to a certain fraction of wavelength of light in order to be efficient, to reflect all of that light to the central focus. So, going from 43 GHz to 230 GHz, your dish has to be about 5 or 6 times smoother than it does at the lower frequency, and it's harder to make a very, very large dish that's very smooth. So, our dishes become smaller and smaller so that we can control the surface better.
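The "five or six times smoother" figure follows from the standard Ruze relation for aperture efficiency; the sketch below is a back-of-the-envelope check that uses the common λ/16 rule of thumb and a hypothetical 100-micron surface, not any particular telescope's specification.

```python
import numpy as np

c = 2.998e8  # speed of light, m/s

def ruze_efficiency(sigma_m, freq_hz):
    """Ruze formula (standard textbook relation): aperture efficiency
    remaining for a surface rms error sigma_m at frequency freq_hz."""
    lam = c / freq_hz
    return np.exp(-(4.0 * np.pi * sigma_m / lam) ** 2)

sigma = 100e-6   # a hypothetical 100-micron rms surface, for illustration
for f_ghz in (43.0, 86.0, 230.0):
    lam_mm = c / (f_ghz * 1e9) * 1e3
    eff = ruze_efficiency(sigma, f_ghz * 1e9)
    print(f"{f_ghz:6.1f} GHz: lambda = {lam_mm:4.2f} mm, "
          f"lambda/16 = {lam_mm / 16 * 1e3:3.0f} um, "
          f"efficiency of a 100-um surface = {eff:.0%}")
# The surface tolerance scales with wavelength, so going from 43 GHz to
# 230 GHz tightens it by 230/43 ~ 5.3 -- the "five or six times smoother"
# figure -- and a surface that is fine at 7 mm is badly degraded at 1.3 mm.
```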
All of these things are combining to make your measurements harder. This is what created a crisis of sensitivity. And it was a crisis because in order to keep making progress on Sagittarius A*, this supermassive black hole in the Milky Way, we needed more sensitivity. That is what propelled us, that's what motivated us to make new instrumentation. We had to find some way to get around this sensitivity issue. The turbulence in the atmosphere affects you because you have to detect the source within the time that the atmosphere is stable. So, the atmosphere will be stable for about 1 or 2 seconds, and then it will start to change, and if you integrate for longer than 1 or 2 seconds, the phase of your interferometer changes, and it's like trying to add up a vector that's changing direction. If the phase changes are too large, all the different points of the vector add destructively, and you wind up with nothing. If you wait for too long, your signal is destroyed. So, we had to integrate for shorter and shorter periods of time with a noisier system, with smaller dishes. That was the crisis.
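Here is a toy simulation of that "vector changing direction" picture (the phase-wander rate is a made-up number, not real atmospheric statistics): a unit-amplitude fringe whose phase random-walks is coherently averaged for different lengths of time, and the averaged amplitude collapses once the integration exceeds the coherence time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a unit-amplitude fringe whose phase random-walks because of
# tropospheric turbulence. Averaging the complex signal ("adding up the
# vector") for longer than the coherence time shrinks the sum toward zero.

dt = 0.01                      # seconds per sample
phase_wander = 1.5             # rad per sqrt(second) of phase drift (made up)
n = int(20.0 / dt)             # simulate 20 seconds of data

phase = np.cumsum(rng.normal(0.0, phase_wander * np.sqrt(dt), n))
signal = np.exp(1j * phase)    # unit-amplitude fringe with drifting phase

for t_int in (0.5, 1.0, 2.0, 5.0, 10.0):
    m = int(t_int / dt)
    amp = abs(signal[:m].mean())   # coherent amplitude after integrating t_int
    print(f"integrate {t_int:4.1f} s -> coherent amplitude {amp:.2f}")
# Short integrations keep most of the amplitude; long ones wash it out,
# which is why the source has to be detected within a second or two.
```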
Besides Rogers, who else was on your thesis committee?
Oh, my god. So, Bernie Burke was actually my formal advisor at MIT. He was a professor at MIT. Alan was a research scientist.
Is that because Alan was at Haystack? Could he not be your primary because of his affiliation?
I think he was my primary advisor, but I still needed a faculty member to sign off on my thesis. That was Bernie Burke. He was not very involved in the specific research I was doing, but he was my tether to MIT. I believe Ed Bertschinger, a cosmologist, was also on my thesis committee, as well as Paul Schechter, who's a well-known observational astronomer. I had some very good conversations with all of them.
Now, to stay on for the postdoc, was the idea that there was just unfinished work to finish?
Well, a couple of things happened there. I got married, so I had a two-body problem. My wife was still working at Harvard to finish up her ScD, the equivalent of a PhD in epidemiology. So, I wasn't mobile. I was offered a postdoc in Japan to go work on space VLBI, as I recall -- that also had some sense of adventure to it, and I was thinking about doing that -- but I was unable to take it. But ultimately, I felt there was still a lot to do at Haystack Observatory. I was living in Cambridge, Massachusetts, at the time, with my wife. So, we decided to stay. It often happens, you think something's not permanent, and then you wind up waking up 20 years later, and you realize that's the way it's going to be. So, we both wound up staying in the Cambridge area. But really, there was a lot to do. I really felt that there were some opportunities there and getting the go-ahead to work on 1mm VLBI was really what I wanted to do.
What was some of the feedback that you were getting that the scientific requirement to work at 1mm was worth the ask?
There was a lot of faith. There was some feedback in that we got some new results at 3mm. The VLBA then extended its operation to that new frequency. This is what always happens. You're in the vanguard, and you do some first measurements at a frontier frequency, and then a national facility will move into that space. So, in the mid '90s, the VLBA started to become available at 3mm wavelength, but still working with a very narrow bandwidth. There was a limit to what even the VLBA could do. There was a limit to the number of targets it could look at, and especially for Sagittarius A*, it was difficult even for the VLBA, to make much progress there because it was limited in bandwidth. So, then, moving to 1mm in the late 1990s, I began some experiments in Arizona, linking to telescopes in Europe. We had a very good and strong relationship with the Max Planck Institute in Bonn, and we were working with their team. They had some success detecting Sagittarius A* at 1mm. In 1998, they published a paper that was really also pioneering, in which they detected Sagittarius A* on a baseline between France and Spain.
Why was this interesting? This was interesting because all of a sudden there was this proof of concept that you could detect Sagittarius A*, a pretty faint supermassive black hole, at that wavelength. I believe it was 1.4mm. The problem with that detection is that the baseline was not long enough to get the angular resolution to clearly show what the size was. They derived a size limit and indeed the size that they felt was the best fit was much larger than the lensed photon ring of Sagittarius A*. If you believed that result, you would have naively thought there's no point in trying to image a black hole, because we're only going to see the surface of last scattering. The hot gas surrounding the event horizon would prevent us from seeing the shadow. So, this was a case where the first result was interesting, but it kind of led us in the wrong direction. And I think it actually served to diminish people's interest in SgrA* for a while, because they said, look, there's no point. We've reached the end of the road. We can't do any better than this size. But I always felt that the calibration of this result was uncertain. I thought, maybe this result, as interesting a demonstration as it is, is not the final word. I really felt as though the final word had not been written on Sagittarius A*, and the real key was to go from a shorter, inter-European baseline, to a transatlantic, or a transpacific baseline from Hawaii to the continental United States.
So, we launched a number of experiments between Europe and the U.S., and they all failed because we just didn't have the bandwidth. You have to understand that with those long baselines, the angular resolution is so fine that you're only going to be looking at a small fraction of the energy emitted from Sagittarius A*. If you had a short baseline, your effective resolution would be so coarse that you'd see all the energy from the black hole. That's what they were seeing with the baseline in Europe. When you go to much longer baselines, you start to receive exponentially less flux density, or exponentially less energy from the black hole, and you need to be very sensitive to do that. So, we retrenched. We said, let's back up, and let's start doing 2mm, or 150 GHz VLBI, and then we were finally able to make an intercontinental detection between Europe and Arizona at that wavelength.
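A minimal sketch of why the long baselines see so much less correlated flux, assuming a circular Gaussian source model and purely illustrative numbers (a hypothetical 40-microarcsecond size and round-number baseline lengths, not the measured values):

```python
import numpy as np

# Sketch (all numbers illustrative): visibility amplitude of a circular
# Gaussian source versus projected baseline length. Short baselines recover
# nearly all of the flux density; Earth-scale baselines at 1.3 mm see only
# a small fraction if the source is resolved.

lam = 1.3e-3                              # observing wavelength, m
theta_fwhm = 40e-6 / 206265.0             # hypothetical 40 microarcsec FWHM, rad

def gaussian_visibility(baseline_m):
    """Normalized visibility amplitude of a circular Gaussian source."""
    u = baseline_m / lam                  # spatial frequency, cycles per radian
    return np.exp(-(np.pi * theta_fwhm * u) ** 2 / (4.0 * np.log(2.0)))

for b_km in (1000, 3000, 5000, 9000):
    v = gaussian_visibility(b_km * 1e3)
    print(f"baseline {b_km:5d} km: {v:6.1%} of the total flux density")
```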
I had some amazing times working with the engineers in Arizona. There was a gentleman named Bob Freund who was working there. Bob Hayward, Ferdinand Patt, these were all engineers in Arizona, and Lucy Ziurys and Peter Strittmatter were at Arizona, and we worked together to make these first detections at 2mm wavelength using the Submillimeter Telescope on Mount Graham. Those were great times. We drove a maser -- an atomic frequency standard -- cross-country to time tag the radio waves as they came into the Arizona site. We brought a whole backend there, a reel-to-reel tape recorder, all the analog electronics in the late '90s. We had some success at 2mm, and that then encouraged us to go for 1mm, but we needed to tackle this problem of bandwidth in order to make these detections. That's what consumed me personally, and our group at Haystack from about 2002 to about 2006. So, over the course of about 4 years, we identified the problem, we tackled it, and ultimately, we succeeded in building new instrumentation to overcome it.
In terms of the conferences that you were attending, the journals that you were publishing in, in what ways were you connected with the larger field, in terms of communicating what you were finding up until this point?
That's a great question. Sagittarius A*, because of its panchromatic appeal, was interesting to infrared astronomers and to optical astronomers, so there were always conferences in which we would report our latest results. There was a very interesting conference that I recall in 1998 in Arizona, where people were reporting primarily some of the first infrared observations. For example, Reinhard Genzel was there, Andrea Ghez was there, Andreas Eckart was there. And they were talking about looking at the stars orbiting the supermassive black hole in the center of the galaxy.
I gave a talk there on some of the 3mm results that we were seeing, and the possibility for extending it even further. And there were talks that we gave at the American Astronomical Society meetings. We would come together for those. There was a conference in Green Bank Observatory, I think around 2002 or 2003, where we were talking about what the next steps would be. So, we were connected with the larger group, but I think it's fair to say that radio was not making the biggest splash around that time. I mean, we were pushing on it, and we were setting some limits which were important, but people were really focused on the optical, the infrared, and even the X-ray. That was a time when they were observing the first X-ray flares from Sagittarius A*. Fred Baganoff and his collaborators, in the very early 2000s, had seen the first X-ray flare from Sagittarius A*, confirming that there was something extremely compact that was energizing very high energy photons from the galactic center. That took center stage for a long time, and there was a time when radio observations of Sagittarius A* and supermassive black holes were taking a bit of a back seat to the other wavebands. But we would attend these conferences, and we were publishing our results.
In 1995, we published the detection methods we would use to overcome the short coherence times imposed by the turbulence in the Earth's atmosphere. We also published in 2001 our first paper -- our second paper, really -- on the size of Sagittarius A* at 3mm wavelength, which became a touchstone for people who were starting to model the emission around Sagittarius A*. For the first time, we had a really good measurement of the photosphere, the size of the hot plasma around Sagittarius A* at 3mm. And then came some 2mm detections, but it was that period from 2002 to 2006 where I like to say we went back to the woodshed, and we said, how are we going to solve this problem? We said to ourselves, we're not going anywhere until we solve this problem. It's not going to happen until we crack this nut. And that was the key to it.
Shep, what about the string theorists who love black holes? Were they following this work?
You know, I had zero connection with the string theorists at that time. I know they were working on the information paradox. I know they were working on firewalls. Certainly, we were thinking about some of these, but we were somewhat separated from that.
What about the quantum information folks? Black hole information, people who are interested in this?
At the time, I personally had very little connection with them. Now I do. Now, especially with the Black Hole Initiative, we speak every week with people like Andy Strominger, and Alex Lupsasca, and Shing-Tung Yau, and people who are thinking about the mathematics and the quantum information of black holes. But at the time, we were really focused on just the instrumentation to propel the radio forward.
Now, what's the bookend in 2006, as you establish this narrative? What happens at that point?
So, what happens in 2006 is we finalized our design for this new instrumentation that allowed us to realize a factor of 8 increase in bandwidth. We had tested it at low frequencies locally at Haystack Observatory, and we took it to Mauna Kea. We were working with the Caltech Submillimeter Observatory, and I worked with the people there, and applied for time and got time to install this instrumentation there. It was very complicated. I mean, it was really a Rube Goldberg kind of device. We had a hydrogen maser atomic frequency standard at the Submillimeter Array, which was a different facility on Mauna Kea. We used an existing optical fiber to transfer this frequency reference from the SMA to the Caltech Submillimeter Observatory, where it was used to stabilize the receiver. The radio waves we received at the CSO were sent back up to the Submillimeter Array, where we digitized and recorded them using our new backend and high-speed recorder.
So, we had this crazy system where the photons were being received at one facility, the fundamental tone that stabilized everything was at a different facility, and then the radio waves were sent back over optical fiber to the first facility where they were digitized and recorded. There was a lot of time spent running back and forth at high altitude between these two sites which were separated by about 300 meters, and it's not easy to run at that altitude between those two sites. And at the same time, we had a team at the Arizona site that was working on this. I think that was run by Bob Freund and our collaborators there. We recorded radio waves from Sagittarius A* and other active galactic nuclei at both those sites. And everything seemed like it was working well, but as I explained earlier, a malfunction at the Caltech Submillimeter Observatory doomed the whole experiment to failure. We spent months fruitlessly searching for these detections.
The other thing to understand, for people who really want to know what's going on with the very long baseline interferometric technique, is that we don't know where the detections are. It turns out that you need to know the positions of your telescopes to within a millimeter to really find these detections, because you're looking at the wave crests from the black hole radiation at both of these sites. You need to be able to align them perfectly. So, you're looking at a fraction of a millimeter precision in the location of your radio telescopes. And it's the very nature of the surface of the Earth that things do change by maybe a millimeter over the course of a year. So, from year to year, you have to reacquire the exact geometry of your Earth-size virtual interferometer in order to make these detections. At the same time, one telescope is going towards the black hole, and one is going away from the black hole due to the rotation of the Earth. So, you also have to search in frequency because the radiation that one telescope receives is slightly Doppler shifted from the radiation that the other telescope receives. You have to correct for that perfectly if you're going to find the detection. We make a 2-dimensional search in frequency and delay, and ask, where is this detection? So, we searched and searched and searched for months, and we found nothing. Ultimately, we learned about 2 or 3 months after the observations that there was this known failure in the receiver of the Caltech Observatory.
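For readers who want to see the shape of that search, here is a toy two-dimensional fringe search (my own sketch with made-up parameters, not the Haystack correlator software): two noisy recordings share a weak common signal, and only the correct trial delay and fringe rate make the coherent cross-power peak stand out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fringe search: scan trial delays and fringe rates, coherently sum the
# cross-power of two noisy station recordings, and pick the peak.

fs = 8192.0                      # sample rate of this toy baseband, Hz
n = 8192                         # one second of data
t = np.arange(n) / fs
true_delay = 37                  # residual delay, in samples (made up)
true_rate = 7.0                  # residual fringe rate, in Hz (made up)

common = rng.normal(size=n) + 1j * rng.normal(size=n)     # the source signal
noise = lambda: 2.0 * (rng.normal(size=n) + 1j * rng.normal(size=n))
x = common + noise()
y = np.roll(common, true_delay) * np.exp(-2j * np.pi * true_rate * t) + noise()

best = (0.0, 0, 0.0)
for d in range(80):                                  # trial delays (samples)
    y_aligned = np.roll(y, -d)
    for rate in np.linspace(0.0, 15.0, 61):          # trial fringe rates (Hz)
        power = abs(np.vdot(x, y_aligned * np.exp(2j * np.pi * rate * t)))
        if power > best[0]:
            best = (power, d, rate)

print(f"peak at delay {best[1]} samples, rate {best[2]:.2f} Hz "
      f"(true: {true_delay} samples, {true_rate:.2f} Hz)")
# The source is invisible in either recording alone; only the correct
# delay/rate alignment makes the coherent sum stand out above the noise.
```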
How did you learn that?
Because they had tried to do some local interferometry between the Submillimeter Array and the Caltech Submillimeter Observatory, and they couldn't find even local fringes. They couldn't make an interferometer even over a 300-meter baseline. That's when they knew there was something funny going on, and they opened up the receiver, and they found that a small chip of metal -- a small, little particle -- had fallen into the innermost workings of that telescope and was vibrating and was ruining the phase stability of the whole receiver. We simply could not have known that going in when we made our observations. As a result of that experience, we now test for phase stability to ensure things are working properly.
What are the administrative origins of EHT? How did that get started?
That's a great question. So, as you can imagine, in the early days, we had to apply for time at all of these sites. There was a huge political and administrative overhead to what we did. We were busy making these new systems to take the data, but at the same time, we had to get telescope time. So, I would have to apply for time at the California array, CARMA, which was one of the sites we used early on. I would separately apply to the James Clerk Maxwell Telescope, or the Caltech Submillimeter Observatory for time. And I would separately have to get time on the Arizona telescope. I would have to convince all of these sites that this was a worthy endeavor, and that they should all commit time to this joint interferometric configuration looking at Sagittarius A*. There was a huge overhead there, but ultimately, we were able to convince people to do it. As a side note, one interesting story, in the 2007 observations after the failure in 2006, I applied for time at the CARMA array in California, and we got time in Hawaii, we got time in Arizona, but the CARMA array declined our proposal. This was horrible. I mean, this was really bad news. I mean, we had now been working for years leading up to this important experiment, and the CARMA array said, "Look, we're not ready yet."
Why is it CARMA or bust, though?
So, the first year, we had tried with telescopes in Arizona and Hawaii, and we'd failed. The problem there was that we didn't know whether the problem was in Arizona or Hawaii. By adding a third site, you multiply by 3 the number of baselines you have. You triple the opportunity that you'll be able to make a measurement, and you now have many more paths to victory. So, if Arizona is the problem, maybe you can still make the detection between California and Hawaii. We really felt that in order to understand how the instrument was performing, and to maximize our chance of getting a good result, we needed to have this third site. I remember calling one of my colleagues at Berkeley, and I said, "Look, we've been declined." And they said, "Well, that's how the cookie crumbles. That's life." And I said, "No. We are just not going to take this lying down." And I called up other faculty at Berkeley, I called up the director of the CARMA array, and I said, "Look, this is absolutely critical." And they said, "Okay, well, I'll try to get this reversed. Send me some convincing material." So, I sent them everything I had, and thankfully, they were able to secure us a little bit of observing time at CARMA. That was a near thing.
What do you think the winning argument was at the end of the day? What case did you make that you didn't make before?
It's hard to know. I think at this level it's all personalities. It was all convincing the director of CARMA, who I think at the time was Leo Blitz. I told him we had all this other telescope time lined up, and it's really important that CARMA is part of it. And I think that the topic, the enthusiasm, the fact that he saw that people were willing to go to the mat for it, convinced him that, you know, I think we should give some time to this project. I think they're glad they did, and I was certainly immensely grateful that they did. But it's moments like that that drive home the fact that you need champions for this kind of work. You need people that are just going to go and get it done and convince people that really it should happen.
Getting back to your original question, the EHT formed to bring these telescopes and institutes together to formalize the 1mm VLBI process for getting time and focusing our efforts. It started in 2012 and then there was a key meeting at the Perimeter Institute in 2014 where we formed an interim ‘board’ that wrote the charter for the EHT. In 2016 I was elected and became the founding director of the EHT.
Now, in terms of seniority, are you relying on people at this stage in your career, who are older than you, who are more well-established, or is your position at Haystack where you feel comfortable -- "I can make this case on my own. I'm going to stand with my own authority on this."?
That's a good question. At that time, I was still pretty junior.
Yeah. I mean, you're still the 15-year-old that started at college, and you just keep going in this narrative.
Yeah, in some sense that's true. I felt very secure operating on my own at that point. I was very independent, mostly because – even though I was not faculty at that point – I was making my way in my career by getting independent grants from the National Science Foundation. So, at that point, I had a great deal of confidence and the backing of the National Science Foundation, to go to these sites and say, look, we've been funded to do this. This is really something you should do. We're on the cusp of doing something really tremendous and you should be part of it. I was able to make that argument clearly, with confidence, and ultimately convincingly.
What in the end proved to be so important at CARMA that you had advocated so strongly for? How well were you vindicated by this?
It was important because the baseline between CARMA in California and the Arizona telescope is short. It's only about 900 km. That gave us one firm data point, and it showed us that the amount of energy we were receiving from Sagittarius A* on the short baseline was roughly the total energy that even a single dish would receive. The next point was the fact that we finally detected Sagittarius A* on the long baseline from Arizona to Hawaii. That was the clincher because on that baseline, the received energy was so reduced that we could see clearly that there was a finite size to Sagittarius A*. That was the only thing that explained the fact that the short baseline observed much more energy than the long baseline. If we didn't have CARMA, we would not have had that first point. It was the combination of the short and long baselines that allowed us to clearly measure the size, for the first time, of the Sagittarius A* black hole. And that was only about 37 microarcseconds across. That was very much in keeping with the size of the expected shadow, and that is what launched the resulting race to image the black hole.
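As a worked illustration of how two baselines pin down a size (the flux densities and baseline lengths below are hypothetical, not the published measurements), a circular Gaussian visibility model can be inverted from the ratio of the short- and long-baseline amplitudes:

```python
import numpy as np

# Illustration only (numbers are made up): with a circular Gaussian source
# model, the short baseline anchors the total flux density, and the drop in
# correlated flux on the long baseline determines the angular size.

lam = 1.3e-3                           # wavelength, m
b_short, b_long = 9.0e5, 4.5e6         # baseline lengths, m (illustrative)
v_short, v_long = 2.4, 0.35            # correlated flux densities, Jy (made up)

# Gaussian visibility: V(b) = V0 * exp(-(pi*theta*b/lam)^2 / (4 ln 2)).
# Solving the ratio of the two amplitudes for the FWHM size theta:
ratio = v_long / v_short
theta = (lam / np.pi) * np.sqrt(-4.0 * np.log(2.0) * np.log(ratio)
                                / (b_long**2 - b_short**2))
print(f"implied FWHM size: {theta * 206265e6:.0f} microarcseconds")
```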
So, at this formative moment, to go back to my earlier question about the interplay of theory and observation, at this point where you make this measurement, where is theory in this? What guidance are you getting from theorists?
So, at that point, we had a pretty good idea of what the shadow would look like because it had been simulated with general relativistic magnetohydrodynamic simulations. Also, with schematic models. Many theory groups were making these models. It was pretty clear what we were looking for at 1mm wavelength, and what the size was. But none of these models could say for sure that we would be able to see all the way to the event horizon. Yes, in the ideal case theory predicted what you might see, but there was a lot of material between us and the event horizon that might prevent us from making that image. That's what I was referring to earlier when I said that the first measurement made in Europe estimated the size to be much larger than it ultimately proved to be. And that result from Europe put the brakes on some of the theory, I think, for a while, because people thought, oh, there's nothing to see here. We'll never see all the way to the event horizon. After the success of the Hawaii to the continental United States observations, that's when we realized, yes, now we can do it.
So, as you say, at this point, the race is on. Is it a race propelled by excitement? Is it a race because, to come back to this idea of competition in the field, you might get scooped? What's the sense of urgency at this particular moment that leads us to 2019?
It was not so much a race between groups, because we were already a somewhat international group. I was already working with people at Max Planck Institute. Just after this detection, I started to work with a group led by Mareki Honma in Japan. People in Taiwan started to get interested. The race really was, how fast could we do this? It was pure excitement, because it's as though you're on this winding road, and you're running a race, and you really don't know where the end is. Then, all of a sudden, it opens up into a straightaway, and you can see the finish line. Then, you just put the afterburners on. You just begin to run like mad, because you can really see what needs to be done. Up until the point of the SgrA* detection, we were on that winding road. We really didn't know where we were going to go, or if we were ever going to get to the end. But seeing that straightaway, that allowed us to do a couple of things. First, we were able to get millions of dollars from the National Science Foundation right away. I got a grant immediately after that discovery of about $3 million, which allowed us to start building out the new machinery and adding new telescopes.
Shep, if you're comfortable naming names, is there a project officer at the NSF who's a hero visionary who supported all of this, or do you see it more as an institutional support, an institutional response to help you do what you want to do?
That's a good question. I'm not sure if there was any one person at the National Science Foundation who championed this. It's not as though there was a person like there was for LIGO, who championed that particular project. I think, even though we saw the finish line, I was always seen as high risk, and it was not identified as a key project. I'll give you an example. After the SgrA* detection, a well-received result which we published in the journal Nature, I wrote a white paper outlining the whole way forward to the US Astronomy Decadal Review. Every ten years, all of U.S. astronomy gazes at its navel and says, what are the big projects to do? And I wrote a paper with colleagues describing the clear path to imaging a black hole, and I went and gave in-person testimony to the panel. Then, it turned out that the Event Horizon Telescope as I described it was not mentioned in the entire decadal report. It just didn't appear. Even though some people realized this was a possibility, it was just not even worth mentioning, according to the gurus of astronomy in the U.S. And so, that's perhaps why we didn't have a champion at the National Science Foundation. There was nobody who really was going to take up the torch.
If the cognoscenti of astronomy did not see fit to put it into the decadal review, who are we to step into this? It didn't diminish our enthusiasm at all, and we just kept writing grant proposals that were successful, and ultimately, I was able to raise millions for this project, but we didn't have a champion. In fact, I do remember talking to one National Science Foundation official, and I told them, "Look, this is really a fantastic project, and we really need to get more funding for it." And they said, "Look, the most I can really see this getting is maybe half a million dollars. It looks like it's all ‘blobology’ to me. You're going to look at a blob. Maybe you'll see something, maybe you won't." Now, that was one interaction I had with the NSF, but generally, I have to say that they really came through, because we submitted our proposals through the peer review process, and they were highly rated. So, the NSF had a system in which, even though we were not mentioned in the decadal review, there was still room for reviewers to say, "This is good stuff. We need to invest in this." And all my proposals got consistently very high marks, and we were able to push this forward.
There was another key development which really needs to be discussed if you're going to talk about why the EHT succeeded. We had already solved the bandwidth problem. So, now we were able to multiply by orders of magnitude the bandwidth we were recording, but even that was likely not enough. The last key development that we thought was going to make the EHT a success was phasing up the ALMA Array. Now, ALMA was this site that was coming online in the late 2000s and the early 2010s, with roughly 60 dishes, each 12 meters in diameter, in one of the best observing locations for millimeter and submillimeter astronomy on the planet. But in order to make it work for the Event Horizon Telescope, we had to phase up all of those dishes to make them look like one large dish. The hooks to do that had been built into the system, but nobody had ever completed it.
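A minimal sketch of what "phasing up" means (not the ALMA Phasing Project firmware; the antenna count, noise levels, and phases are made up): each dish records the same wavefront with its own phase offset, a naive sum of the voltages stays incoherent, and applying per-antenna phase corrections before summing makes the array behave like a single large aperture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each antenna sees the common signal with its own geometric/instrumental
# phase, plus independent noise. Correcting those phases before summing is
# what turns many dishes into one effective large dish.

n_ant, n_samp = 60, 4096
signal = rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp)   # the source
ant_phase = rng.uniform(0.0, 2.0 * np.pi, n_ant)                  # per-dish phase

volts = (np.exp(1j * ant_phase)[:, None] * signal +
         2.0 * (rng.normal(size=(n_ant, n_samp)) +
                1j * rng.normal(size=(n_ant, n_samp))))

unphased = volts.sum(axis=0)                                      # naive sum
phased = (np.exp(-1j * ant_phase)[:, None] * volts).sum(axis=0)   # corrected sum

def gain(v):
    """Amplitude of the common signal recovered from a summed voltage stream."""
    return abs(np.vdot(signal, v)) / np.vdot(signal, signal).real

print(f"effective gain, naive sum : {gain(unphased):5.1f}  "
      f"(expected to be of order sqrt(N) ~ 8)")
print(f"effective gain, phased sum: {gain(phased):5.1f}  "
      f"(close to N = 60: one big dish)")
```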
So, recognizing that that was the key, that was the next big hurdle to jump over, I started working with some engineers at the National Radio Astronomy Observatory. Rich Lacasse, Joe Greenberg, Ray Escoffier -- these were the people who had built the electronics for the ALMA Array. Also, I was working with Jonathan Weintroub at the Center for Astrophysics, and Rurik Primiani on this, and over the course of a couple years, starting in 2009, we came up with a really interesting design that would allow all the dishes in ALMA to phase up together, to act as essentially a 75-meter effective dish. Now, bandwidth, as you can imagine, increases our sensitivity, but only as the square root of bandwidth. The diameter of the dish, though, that goes in linearly. So, you wind up getting a huge increase in sensitivity by adding a very, very large dish to your array. ALMA was going to increase the sensitivity of the Event Horizon Telescope by a factor of 10. So, we really wanted to add that to the array.
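The bandwidth-versus-collecting-area trade-off he describes follows from the standard VLBI sensitivity relation, quoted here in its textbook form (an illustration, not an EHT-specific formula):

```latex
% Textbook VLBI baseline sensitivity (illustrative form, not EHT-specific):
% SEFD_i is the system-equivalent flux density of station i, \Delta\nu the
% recorded bandwidth, and \tau the coherent integration time.
\[
  \sigma_{12} \;\propto\; \sqrt{\frac{\mathrm{SEFD}_1\,\mathrm{SEFD}_2}{2\,\Delta\nu\,\tau}},
  \qquad
  \mathrm{SEFD} \;=\; \frac{2 k_B T_{\mathrm{sys}}}{A_{\mathrm{eff}}} .
\]
% Bandwidth enters only under the square root, while SEFD falls linearly
% with collecting area, which is why phasing dozens of ALMA dishes into one
% large effective aperture bought so much more than extra bandwidth alone.
```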
We went to the ALMA board, who was very reluctant to do anything with us, because they were busy just getting their facility online, and they didn't want to see us coming. We were an unnecessary complication. And I wrote to them, and I said, "Here's a way we can phase up ALMA for VLBI." They wrote back and said, "This is very interesting, but it's too complicated." And it was complicated because we were going to use electronics at each individual telescope to adjust the phase to bring everything into alignment. And they said, "We don't want you to do that. You have to come up with a scheme and a system that will only use the central electronics in the central building. We don't want you changing anything at the telescopes themselves." So, we went back to the drawing board, and we found a way, using some of the electronics in the central location, to make those minor adjustments for each incoming stream of data from the individual telescopes. And then we went again to the board.
Meanwhile, I wrote a proposal, one of the biggest proposals I had ever written, to the National Science Foundation saying we want to phase up ALMA. But I couldn't submit that proposal until we had a letter from the board of ALMA that said, "We grant you permission to do this." That was really a race against time. So, we wrote to ALMA, and they were deliberating and deliberating, and that's when again I called in favors. I called in French collaborators, and I said, "Look, you've got to call the chair of the ALMA board and tell him to write this letter." The chair of the board at that time was Laurent Vigroux who was a French astronomer. We called in all the favors we possibly could, and ultimately, with about 2 or 3 days to spare, I got the letter from the ALMA board saying, "We agree to go forward with this plan." And I was able to put that into the proposal, and then I sent that to the National Science Foundation. That proposal received the highest marks I've ever gotten from any proposal ever submitted, because I think all the reviewers could see this is the key, the EHT was possible, and this could be potentially transformative.
You're also building a sequence of near-misses here, between CARMA and then ALMA, where the whole thing could have fallen apart quite easily.
You know, you're exactly right. Imagine that we had not gotten that letter. That might have delayed the phasing up of ALMA for years. Right? It's just little things like this. If you don't push it, if you don't devote yourself heart and soul to something like this, there are many different points where it can go off the rails. That's the human dimension to some of these efforts.
Shep, to go back to this idea that you've made the measurement, and then the race is on, the excitement is on. To go back further to where you said you fully embraced Moore's law as a matter of faith that you'll keep pace with the instrumentation, and the instrumentation will keep pace with you. Was there ever any concern that you were moving so fast that the things you wanted to do would not be available with off-the-shelf technology from 2009 to 2019?
Yeah, that's a good question. It was a little bit the opposite. We had to make sure that we were using with optimal efficiency what was available to us. So, we went through a number of generations of this equipment. The first piece of equipment we built used one generation of field-programmable gate arrays, which allowed us to collect a lot of bandwidth. That was, let's say, 4 gigabits per second. That, in and of itself, was a factor of 8 faster than any other VLBI network on the planet. And then we built another version called the RDBE. That was about the same bandwidth, but it was more flexible. And then, after we got an influx of funding from the Gordon and Betty Moore Foundation, which was absolutely critical, we developed in very short order a device that allowed us to go to 32 gigabits per second. And then, ultimately, to 64 gigabits per second. The final generation, the third generation of these backends, we started building in 2012. A wonderful postdoc, Laura Vertatschitsch, was the driving force behind those, along with Jonathan Weintroub and a lot of engineers and scientists at the Smithsonian, as well as collaborators at Haystack Observatory, using commodity, off-the-shelf equipment to get us to 32 gigabits per second. So, we were keeping pace with Moore's law. We were not out in front of it at all. Partly, the reason for that is you don't want to be on the bleeding edge.
You want to be on the cutting edge, but not on the bleeding edge, because if you're too far, then you can build equipment that may not work right off the bat, and we couldn't afford that. We wanted to use tried and true commodity electronics so there was no doubt that it would work. Then we could build robust systems that would operate at high altitude, which means they couldn't be running at the limits of their cooling capacity. We couldn't be running these components super hot, because it’s difficult to cool electronics at these rarified altitudes to keep them in their normal operating ranges. We needed to be just shy of bleeding edge, and ultimately, we were able to get to 32 gigabits per second in 2017, which was about a factor of 16 faster than any other network on the planet at that time. We were already an order of magnitude wider bandwidth than other networks, and with ALMA phased we achieved breakthrough sensitivity. All this came together in 2017.
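As a rough sense of scale for those recording rates (the 10-hour track length below is my assumption for illustration, not a figure from the interview):

```python
# Back-of-the-envelope data volume per station for the recording rates
# quoted above; the 10-hour track length is an assumed, illustrative value.
hours = 10
for gbps in (4, 32, 64):
    terabytes = gbps * 1e9 / 8 * hours * 3600 / 1e12
    print(f"{gbps:2d} Gb/s for {hours} h -> roughly {terabytes:,.0f} TB per station")
```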
Shep, on the administrative side, what were some of the considerations leading to you being founding director of EHT? In other words, do you have people specifically that report to you within that context, or is this sort of extracurricular from Haystack?
So, you have to remember that I've moved to the Center for Astrophysics. Starting in 2012, I began to work half-time at the Center for Astrophysics. Partly, that was because we did get this large enabling grant from the Gordon and Betty Moore Foundation. In order for me to receive that, I had to move to the Center for Astrophysics. It was one of the only ways I'd be able to take that money.
But you're still with Haystack through 2016.
Yeah, so from 2012 to 2016, I was half-time CfA, half-time Haystack Observatory. And at this time, we began to think about organizing the whole collaboration. Now, in retrospect, probably some mistakes were made. I think that we went the route of organizing it around institutions, when I think it probably would have been better to organize it around principal investigators who were driving the science, because as soon as you involve institutions, they don't always have the same devotion to the science as the principal investigators do. Institutions worry about where the money is coming from and what role they will play as an institution in the project, whereas a scientist just wants to get things done. Ultimately, we ran into some difficulties there, but for better or worse, we organized it around institutions, and I was elected to be the director, a natural consequence because I'd been leading the effort for over a decade. And then, I didn't have necessarily a staff working under me. We didn't have the funding for that, but I was directing the entire global effort with a number of people serving as leaders of working groups and helping to organize. We have a whole organizational tree within the Event Horizon Telescope. It was a very interesting time, and there were a lot of politics involved.
Ultimately, we were able to overcome differences within the collaboration. When you're that close to something that transformative, everybody wants to make sure they get the credit they feel they deserve, and there were clear tensions along these lines. There were simultaneous announcements of the result all across the globe. We had announcements in DC, in Brussels, in Tokyo, in Taiwan, in Taipei, in Chile. Everyone got a chance to make a big splash in their region, and we've even been able to honor the early career astronomers with special awards to make sure their contributions are noted. That was one of the great things, I think, about the collaboration.
As a matter of drama, Shep, can you set the stage for the moment where there's this transition from not seeing it to seeing it? How does that happen?
We went from moments of successive excitement to even more excitement. So, the first bit of excitement was when we realized that the experiment had worked, that all the baselines linking the eight telescopes we had employed were seeing detections of our sources. That was the first step. And then we breathed a sigh of relief, because then we realized we had something to work with. Next, we had to calibrate the data, and that was complicated because we had to take into account all the vagaries of the electronics across the array, the atmosphere, the polarization purity -- everything that could affect our data had to be taken into account. And then, we had a moment in May of 2018. I remember where I was: at a conference held by the Black Hole Initiative. We were at a dinner, and some of the postdocs came and showed me the first calibrated dataset, and it blew me away because you could see clearly that there were the signatures of a ring in the data. I mean, you could just do the Fourier transform by eye, and you could see this bounce in the Fourier space of the data as plotted. I remember looking at that, thinking, “Oh my god.” And everybody around us was just poring over this plot at this table. There are some pictures of all of us looking for the first time at these data. I left that dinner, went right back to my office, and I ran a model to see just what the size would be for the signature that we had seen. And for M87, it was pretty much exactly what we expected for the black hole shadow.
What were you specifically looking to confirm? How did you know?
Well, I wanted to look at a ring model. So, I put a ring model into the computer, and I traced where that would give us this curve, and it overlaid perfectly onto the data. That's when I began to really believe that we had something special. And then, we split up into four teams, so that we would not succumb to group think. And we said, every group is going to independently analyze the data with no cross-fertilization. We did not want to see shadows where there were no shadows just because everybody felt they should see a shadow. And more than that, we asked everybody to also look at calibrator sources. These are quasars for which we would never expect to see a ring. And we all came together at the Black Hole Initiative in July, on July 24th of 2018 -- I'll always remember that date -- and we flashed up on the screen all of the different images from the four teams, and they all showed a ring. That was the moment when we realized, not only had we detected these sources, not only had the instrumentation worked, not only had we calibrated the data well, but we were seeing for the first time the image of a black hole. And that was amazing. I mean, that was just a wonderful moment to share with the whole team.
And then, we spent a number of months trying to make it go away. We did everything that we could think of to try to model the data with filled discs, or with Gaussians, or with anything but a ring, and no matter what we did, it was always clear that a ring was the best fit. And then, we also looked at 3C 279, a very bright quasar, where you would expect to see a core and jet morphology, the core being the unresolved region around the black hole, with this light-speed jet issuing from that black hole. And that's exactly what we saw with all of our imaging algorithms. So, all the imaging algorithms that had been developed over the years specifically for the EHT case showed us a ring where we expected to see one for M87, and a core and jet morphology for 3C 279 where we expected that. At that point, we determined that the result was robust, and we began writing up our results. We came out with six papers bound into a handsome volume, which you see here. And I think this will stand the test of time. This will be one of the great observational results in astronomy for our generation. It worked out really well.
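As a rough illustration of the ring signature described above (a minimal sketch, not the EHT analysis code), the visibility amplitude of an infinitesimally thin ring follows a Bessel function of the projected baseline length, and the nulls and rebounds of that function are the "bounce" in Fourier space that Doeleman describes. The 42-microarcsecond diameter used here is an assumed, illustrative value, roughly the published M87* ring size, not a number taken from this interview.

```python
# Minimal sketch (not the EHT pipeline): the visibility amplitude of an
# infinitesimally thin ring of angular diameter d falls off as |J0(pi*u*d)|,
# where u is the projected baseline length in wavelengths.
import numpy as np
from scipy.special import j0

UAS_TO_RAD = np.pi / (180 * 3600 * 1e6)   # microarcseconds -> radians

def ring_visibility_amplitude(u_lambda, diameter_uas):
    """|V| for a uniform, infinitesimally thin ring."""
    d_rad = diameter_uas * UAS_TO_RAD
    return np.abs(j0(np.pi * u_lambda * d_rad))

d = 42.0                                  # assumed ring diameter, microarcseconds
for u in [1e9, 2e9, 4e9, 6e9, 8e9]:       # baselines in wavelengths
    print(f"|V| at {u/1e9:.0f} gigalambda: {ring_visibility_amplitude(u, d):.2f}")

# The first zero of J0 is at 2.4048, so the first null sits near u ~ 0.77/d.
print(f"First null near {2.4048 / (np.pi * d * UAS_TO_RAD) / 1e9:.1f} gigalambda")
```

For that assumed diameter, the first null falls near 3.8 gigalambda, which is the kind of feature one could plausibly pick out by eye in calibrated amplitude data plotted against baseline length.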
Shep, as I've heard the story told to me by everyone from Fabiola Gianotti after the discovery of the Higgs to Rai Weiss after the detection of gravitational waves, there's such a strong level of concern to make sure that you have it right before the announcement, which requires, to some degree, secrecy and trust among the team that this won't be leaked until you're ready. What considerations were there for the EHT team that everything needed to be signed, sealed, and delivered internally before the announcement was made?
This is exactly right. With every great discovery, there's a real responsibility to tell the story purely and scientifically, with no leaks, because if there are leaks, the story winds up being told by others who don't understand and appreciate the full dimension of the problem, and who may not take the care that we would to make sure the results are faithfully recounted. So, it was paramount for us that there be no leaks, and I think it's a testament to the coherent scientific vision of the more than 200 members of the collaboration at that time that nothing was leaked. We kept everything secret, and just kept our heads down to work on the data. This idea that you have to “red-team” your results, and go through the analysis over and over again, was really evident in the work that we did. All of the papers that we published were jointly written with a broad authorship list, and each one was separately reviewed by two reviewers internal to the collaboration who had not been involved in the writing. There were many iterations for clarity, and we wanted all six papers to tell a cohesive story. So, the first paper is a summary paper; the second is all about the instrumentation; the third is about calibration; the fourth is about imaging; the fifth is about theory; and the sixth is about model fitting. They trace a narrative arc that describes the ultimate results and how we determined them.
Leading up to the announcement, we also needed to have exquisite control over the results, because everybody needed to be able to tell the story in different regions. So, if one region leaked it, that would be horrible for the other regions. We determined that the image would be shown at exactly seven minutes past 9 o'clock, Eastern Time, in the United States. We synchronized everything down to the second. I remember that at 11 at night, the night before, on April 9th, I got a frantic call from someone saying the image was available on a web server in Germany. Oh my god. This was horrible. And then I was on the phone with the publications lead at the National Science Foundation, and they said, "If Europe is going to do this, we are going to release the image to US reporters." I felt like I was the intermediary between nuclear powers. I had to get on the phone with the scientists in Europe, and I said, "Look, you have to fix this." But of course, everybody was asleep.
It turned out, what had happened is that the PR people at the Max Planck Institute had used an old internet link for their new announcement. So, the old address was still floating around out there, and somebody had accidentally typed it in and seen the full black hole image press release. It was a journalist, but thankfully, that journalist was someone of great integrity, and they reported it to us first. They said, "Are you sure you want this to be public yet? This seems really irregular, and I want to check before I write a story about this." And we said, "Thank you. Please do not write the story." The head of the American Astronomical Society (AAS) press corps was informed about this, and he got in touch with me. But we were able to defuse this, and we were able to get people in the very early morning hours in Europe to close off this site. I was able to convince the National Science Foundation not to release anything, and we wound up making a very nice announcement that morning.
Obviously, this was going to shake up physics and astronomy, but did you have a sense, and were you prepared with the communications team, that this was going to be all over the newspapers and cable news?
We were not prepared. I mean, we knew it was going to be big. We were calibrating ourselves with LIGO, for example. I mean, LIGO was a big story. But somehow, the visualization of this black hole wound up being much larger.
And for whatever reason, the public has some grasp of a black hole much more than gravitational waves. Like, the man on the street has heard of a black hole. Probably not gravitational waves.
Well, you know, it's more than that. It's that the people on the street have some idea of what a black hole might look like because it's in literature, because it's in the movies, because it's part of science fiction. It's part of our culture, and it represents the deepest mysteries. If you don't know something, you throw it into a black hole. If you want to time travel, you go into a black hole. I mean, Interstellar showed us that you can put anything into a black hole, even bookcases, and you can communicate with the power of love from inside a black hole. There appears to be no limit to what can happen in a black hole. It's the tabula rasa. It's this empty notepad you can write your heart and dreams into. Really capturing the essence of the black hole, showing people what it looked like, resonated deeply across cultures, across borders. Everyone was able to appreciate the significance of it. And going even further, it resonated because this was seen as being really impossible.
Maybe hearing a black hole with gravitational waves, you can kind of think, maybe that can happen, but the very nature of a black hole is that it forms an event horizon that light can't escape from. And just simply stated like that, the average person, even if they're well-informed, thinks that imaging one is an impossible problem. At the very least, they say, that's a really hard problem. And then when you say, we had to build an Earth-size telescope to do it, and we had to draw on expertise from around the globe to do it, and we had to use the determination of early career astronomers and the experience of veteran astronomers, and we had to organize this politically across the globe, people, I think, see it as a human endeavor in which we were able to tackle a hard problem and solve it. With the pandemic that we're experiencing now, with climate change that we're experiencing now, even healthcare, food distribution, these are all seen as being impossible problems. Almost intractable problems. The only answer will be a global response. The only answer will be a solution motivated through coherent vision and drawing on the world's resources. The Event Horizon Telescope showed everyone that we could rise to these types of challenges. I think that was the uplifting part of it. There was a science part, there was a wonder part, there was an against-all-odds success part. But then there was the uplifting part, that we had done it as an endeavor of humankind, not just as science. And I think that's what spoke to people.
Shep, the media narrative is of course that the achievement is that EHT imaged a black hole, that we can see a black hole. What is both true and not true about that statement?
Well, of course, one can get into a discussion of what an image is. In this case, we are reconstructing an image from imperfectly sampled data. So, in some sense, what we're seeing is the most likely visualization of what the black hole is, but it's pretty clear that we're seeing a ring structure. It's pretty clear that we're sensing a dark void in the center of this brightness distribution. And we're also seeing it in radio waves, and we're visualizing this with a false-color image that we can see with our eyes. But still, I believe this is the best representation of what is actually happening around the black hole. You are seeing light bend around the black hole. Even though they're radio waves, you're seeing them made visible through this process of false color. Because this image has some element of uncertainty to it, because we're looking at it with sparse sampling, we've also set a goal for future observations. We're going to look at M87 with ever better arrays. We're going to look at M87 with ever more capable instrumentation. I'm supremely confident that we will see corroboration of this image, and that we'll learn more about it. This will stand the test of time. Sometimes the permanency and the validity of something really are not totally understood until we have sat with it for a while. What the image allows us to do is to sit with it, and to understand it, and to make predictions from it, and to see those predictions realized. Over the course of the next few years, and even the next decade, we'll come to see this image as being a true image, a true representation of what the spacetime around the black hole really is.
Shep, do you see the Breakthrough Prize as an appropriate response to the fact that this is big science, that there was not just one person, but hundreds of people that made this possible, and it can recognize a large group of people in the way that, for better or worse, the Nobel Prize does not?
That's an interesting question. I think certainly, recognizing the full collaboration in the way the Breakthrough Prize did is appropriate. It truly is. I think this is not a project that any one group or any one person could do. I wonder, though, if the anonymization of efforts within the collaboration will not serve to reduce people's appetite for taking risks. When everything becomes anonymized, then some of the efforts that are required, sometimes, to launch a project like this might not be seen as worthwhile, or might not be seen as being valued by the community. So, one has to strike a balance. One has to allow for people to take risks, and to understand that it will be worth it, but one also has to understand the supreme effort by a large collaboration. The Breakthrough Prize chose one path. It was a path that was different from other prizes they have given, other awards they've given. They've given an award to LIGO, for example. LIGO, arguably, is a huge collaboration that could never have succeeded without the efforts of many, many people. And the Breakthrough Prize chose to acknowledge certain leaders within that project, as well as the collaboration itself. In the EHT, they took a different path. I think this illustrates the arbitrary nature of the award process, but certainly, I think that the whole collaboration needs to get the recognition. It needs to be recognized that this was a group effort. That was a complicated answer.
Yeah, well it's a complicated issue.
Because frankly, you're hitting on an issue that speaks to the personalities involved.
Absolutely. And of course, you could make the argument, it is your picture -- yours -- that's on the Breakthrough Prize website. So, in many ways, you are the public face of that recognition, and there might be value in that, in the sense that for the public and the funding agencies, you still need a face. You still need a personality, because you can't make that connection with 347 people. There needs to be that level of connection. That might be one way of squaring that circle.
Yeah. Squaring the circle. That's interesting. I will say -- I mean, you're the historian. I think you know and recognize maybe better than anybody that these kinds of projects are launched by individuals. And then they're finished by large collaborations. That's the narrative, and you have to be able to honor both ends of that because if you don't, I think it will diminish people's appetites for taking risks. That's the first thing. But also, it can give the public the wrong idea, and it can give funding agencies the wrong idea. Then somebody will come to them next with a great idea, and say, "Give me some money to do this." What are they going to say? "Well, first, get 200 people together, and we'll give it individually to all of you." So, there has to be a happy medium where you absolutely acknowledge the full team, but you also recognize that that's not the way science is really started. I'm not sure where that breakpoint is. The Breakthrough Prize chose one version of it. In other cases, they have chosen a more nuanced version of it.
Right, right. Shep, to what extent is the achievement of imaging one black hole sufficient for extrapolating, and thereby perhaps not necessitating, similar endeavors to image other black holes? Or is that worthwhile, because different black holes might teach us different things?
Yeah, it's a great question. So, it turns out that in the family of supermassive black holes, there are different varieties, and they can tell us very different things about fundamental black hole processes. M87, for example, launches relativistic jets. So, we believe it's a spinning black hole with dynamically important magnetic fields, and understanding how the jets are launched will help us answer the deep question of how energy is extracted from a black hole, and how these engines power the redistribution of matter and energy on galactic scales. That's an important link to how black holes affect galactic evolution, with impact on cosmology. Whereas Sagittarius A* is a different kind of animal. Sagittarius A*, in our own galaxy, is much fainter, and it may not have a jetted outflow at all. This may be an object where we can look in real time to see the motions of matter around a black hole, and we can also look at it in real time in the infrared and X-rays. So, we can study in a panchromatic way how our own black hole converts gravitational potential energy into luminosity. And they're going to be very, very different creatures. In the same way, if we could look at other objects, we'd start to study the demographics of black holes and look at different spins, different orientations and inclinations and jet powers. LIGO does its work almost exclusively by carrying out demographic studies. And it has to, because once two black holes merge, they're gone. LIGO cannot detect that black hole pair again. They get one shot at it.
The primary way they aim to study general relativity and the progenitors of black holes is to analyze many, many events. Now, imagine if the EHT could do the same thing. Right now, we're complementary to LIGO because we have only a couple of sources, but we can observe them over decades. We can observe the dynamical changes. We can observe the turbulence in the accretion flow as a function of time. We can observe the changes in magnetic field topology as a function of time. When M87 and Sgr A* exhibit bursts in the radio spectrum, we can localize those radio bursts and see exactly what drives them. So, it's a completely different view of black holes. If we could further extend that by launching a satellite into space, for example, and creating a telescope larger than the surface of our planet, then we would have access to tens or hundreds of black holes. Then we can start to look at the spin and the mass of these black holes over cosmic time, using radio interferometers that link the Earth to a space platform. That's something to dream about. So, there's a whole new dimension that we can explore. This is just the beginning. It's just the beginning.
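To give a rough sense of why a longer, Earth-to-space baseline opens up more sources, here is a back-of-the-envelope diffraction-limit estimate. This is my own illustration, not a figure from the interview: the 1.3 mm wavelength is the EHT's observing band, while the space baseline length is a hypothetical value chosen only for illustration.

```python
# Back-of-the-envelope diffraction-limit estimate: angular resolution ~ wavelength / baseline.
import math

RAD_TO_UAS = math.degrees(1) * 3600 * 1e6   # radians -> microarcseconds

wavelength_m = 1.3e-3        # 1.3 mm observing wavelength
baselines_m = {
    "Earth-diameter baseline": 1.27e7,   # roughly the diameter of the Earth
    "Earth-to-space baseline": 5.0e7,    # hypothetical Earth-to-satellite link
}

for name, b in baselines_m.items():
    print(f"{name}: ~{wavelength_m / b * RAD_TO_UAS:.0f} microarcseconds")
```

An Earth-sized array at 1.3 mm resolves roughly 20 microarcseconds; a baseline several times longer would sharpen that to a few microarcseconds, bringing many more, smaller-looking black hole shadows within reach.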
Shep, last question. For you personally, and for EHT generally, to what extent is the imaging of a black hole the launchpad for everything else to do, and to what extent do you not want to be specifically bound to one discovery at a particular place and time, because the universe is big and there's lots to learn?
Yeah, that's a great question. So, with all great discoveries -- the same thing goes for LIGO -- this is the start of something. It's easy to look at this image and say, okay, let's put a bow on that and it's done. But really, this is just the start. We'll get ever better observations, ever better resolution on the event horizon. We can start to ask daring questions about whether the internal quantum states of the black hole might be manifest on horizon scales. We can only do that with much better precision than we have now. So, there's a whole new avenue of questions one can ask as we refine these measurements that bear on the information paradox, for example.
Similarly, understanding the true nature of jets and how black holes launch them is still a very open question. We can make the most detailed observations of that process in the future. So, there are multiple directions that we could go in, even on tests of Einstein's relativity. Right now, we're looking at light bending around the black hole and making some statement about how well it agrees with general relativity. But imagine that we could time the orbital periods of matter as it circularizes around the black hole. That will give us an entirely new way to test GR, and also to measure the spin of the black hole.
One of the things I like to point out is that the size of the shadow is relatively insensitive to spin. If you look at a non-spinning black hole, what we call a Schwarzschild black hole, then the size is about 5.2 times the Schwarzschild radius. If you spin the black hole maximally, so it's the most rapidly spinning black hole you can have, it shrinks only to about 4.5 times the Schwarzschild radius. So, a very slight change, but the spacetime, of course, is much different around a spinning black hole than it is around a non-spinning black hole. That's because even though the photon orbit is shrinking, it gets magnified more strongly by the spacetime of the spinning black hole. So, these effects cancel each other out, and you wind up seeing approximately the same shadow size. But the orbital period of matter around the black hole, at the innermost stable circular orbit, changes by an order of magnitude. For Sagittarius A*, it ranges from 4 minutes if the matter is co-rotating with the black hole to an hour if it's counter-rotating. That's a huge difference. So, if we see a signature that's 4 minutes in length due to the orbital period of matter around the black hole, that will nail the spin. And if it's an hour, then we'll know that it's spinning in the opposite direction, and we'll be able to make a very good measurement of the spin of the black hole. So, these are all new ways of using time-domain radio astronomy and high-precision imaging that will lead us in entirely new and unexpected directions. So, I'm as excited as ever by what's to come.
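As a quick back-of-the-envelope check of those orbital periods, here is a minimal sketch using the standard Kerr circular-orbit formulas. It is an illustration, not EHT analysis code, and the assumed Sgr A* mass of roughly 4.1 million solar masses is an illustrative value, not a number taken from this interview.

```python
# Sketch: ISCO orbital period (as seen by a distant observer) for circular
# equatorial orbits around a Kerr black hole, using the standard formulas.
# Positive spin = matter co-rotating with the black hole, negative = counter-rotating.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
M = 4.1e6 * M_SUN      # assumed Sgr A* mass (illustrative value)

t_g = G * M / c**3     # gravitational time scale, ~20 seconds for Sgr A*

def isco_radius(a):
    """ISCO radius in units of GM/c^2 for dimensionless spin a (signed)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sign(a) * np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def orbital_period_minutes(a):
    """Coordinate-time orbital period at the ISCO, in minutes."""
    r = isco_radius(a)
    return 2 * np.pi * (r**1.5 + a) * t_g / 60

for a, label in [(1.0, "maximal spin, co-rotating"),
                 (0.0, "non-spinning"),
                 (-1.0, "maximal spin, counter-rotating")]:
    print(f"{label:>30s}: ISCO orbital period ~ {orbital_period_minutes(a):.0f} min")
```

For this assumed mass, the sketch gives roughly 4 minutes for co-rotating matter around a maximally spinning black hole, about half an hour for the non-spinning case, and close to an hour for counter-rotating matter, consistent with the range quoted above.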
Shep, this has been a fantastic conversation. I want to thank you so much for spending this time with me. I really appreciate it.