This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.
This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.
Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.
In footnotes or endnotes please cite AIP interviews like this:
Interview of Harry Deckman by David Zierler on 2020 September 25,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
For multiple citations, "AIP" is the preferred abbreviation for the location.
In this interview Harry Deckman, recently retired as Senior Scientific Advisor at ExxonMobil Corporate Strategic Research, explains the many research and consulting facets of this work, and the collaborations he has participated in over a career that has tracked with technical and geopolitical developments. Deckman emphasizes ExxonMobil’s commitment to research in non-petroleum energy sources in parallel with finding new oil and gas reserves. He discusses: his childhood outside of Cleveland; early interests in science and the excellent public school offerings he received; undergraduate education at Case Western, where he focused on solid state physics; his decision to go to Iowa State for his PhD, where Constantine Stassis supervised his thesis research on magnetic neutron scattering; his initial appointment in Exxon’s Corporate Research Lab to work on laser fusion, the impetus of this research in light of the energy crisis of the 1970s, and the many experiments and collaborations in the Lab that mixed basic and applied science, including the demonstration that laser plasma could be a source for X-ray lithography; the rise of soft matter physics as a discrete subfield, and the impact of the information revolution in the 1980s on his research agenda and his focused interest in adsorbed molecules and the study of isotherms and transport; his work with novel reactors and coupled waves in geoscience, and why his work in carbon dioxide separations marked a major turning point in his career, both as a fascinating area of research and because of the imperative at Exxon to address carbon emissions; his work on hydrocarbons and their value as an affordable energy source; and his intention, in retirement, to continue searching for greenhouse gas mitigation strategies.
Okay, this is David Zierler, Oral Historian for the American Institute of Physics. It is September 25th, 2020. I'm so happy to be here with Harry W. Deckman. Harry, thank you so much for joining me today.
My pleasure. I kind of had a way I wanted to go about doing this, if you wouldn't mind.
What I thought I might do — of course, we can change this. I wanted to start with a 100,000-foot background of where I've been, and then maybe go back and follow my career in a timeline. Would that be acceptable? It'll be in some detail.
Okay, that would be fine. Let's just start, though, just so we know where we are from the outset, tell me your title and institutional affiliation.
Okay. I am a Senior Scientific Advisor at ExxonMobil Corporate Strategic Research (CSR). In practice, what that means is that I advise about — I'm going to say 30% of the projects in CSR, whether they're science build, business affiliated, or corporate driven. To be fair, I will not talk about the advising dimension of my life, because it goes into other people's work and other areas. The other dimension which we'll come to, which has probably been a third of my entire career, has been acting as a consultant to our businesses. I think it's not germane for this discussion. So, at the end you're going to carve out half my life away from the discussion, but I think there will be enough to talk about.
Certainly. So, did you want to start with that 100,000-foot overview, and we'll go from there?
Yeah. My career has been one of a lot of collaborations, to be fair. I've interacted with a huge number of people. For this interview, I started to write the list down, and I gave up when the list exceeded 2,000. I just threw in the towel. I suddenly realized that if I talked, I was going to have to apologize to all the people I wouldn't be able to mention. I'll barely be able to say one word about anybody. So, I've kind of tried to carve this up, with some exceptions, not mentioning very many names. I apologize to the many people I can't mention as I start this. It is through those collaborations that I've gotten the pleasure of interacting with some of the brightest people in the world. Fabulous colleagues to learn from. They're the ones, to be honest with you (full disclosure, up front), who actually made me what I have become. It's been the ability to work with interdisciplinary teams, which has been the hallmark of the ExxonMobil Lab. You can work with so many bright people and learn so much from them. This has been and remains a great environment to be in, even though the business environment throughout the oil and gas industry has been very challenged in 2020. With that, I want to start with a big apology to everybody, because I just won't be able to name even a small fraction of my collaborators. I tried to think how to weave names in, and then I thought all I'm going to do is read off a list of names all day long. So, it is what it is. That's how we're going to go at it. I'm going to really focus the talk on the majority of my career, and that's with Exxon and ExxonMobil. I'll probably give short shrift to the beginning of my scientific life, and that's because, in reality, it is the ExxonMobil career that has really defined where I've gotten to. I think it's useful at the 100,000-foot level to look back and note what has driven my research career at ExxonMobil. My career tracks the history of the oil and gas industry. At 100,000 feet, let's look at that.
I was hired in the mid-1970s to the Exxon Corporate Lab to work on thermonuclear fusion. Wow, why would you do that, pray tell? Well, the answer is that in the '70s, Exxon, just like every other oil company in the world, was looking into options for alternative business lines given the challenges with oil supply at the time.
You mean geologically and politically, because of course, the Arab Oil Embargo is '73-'74.
Right. What happens if you turn back the clock to the 1970s — how do we find oil today? The same way we did in the 1970s. The primary exploration tool is seismic. Seismic is a very, very complex technology. I have a reasonable understanding of it but won't go into all the nuances. That would take too long to even begin to touch. We'd be here for 10 hours if we started to talk about all the aspects of seismic exploration. Let's just say that the data comes in with tons of noise, lots of artifacts that have nothing to do with the signal one wants. It's all physics related, but most of the signal you get has nothing to do with what you're trying to look for. Even after removing noise and artifacts, it turns out that what you're looking for is not like looking through a lens to form a simply recognizable image. The Earth effectively has a change in index of refraction with depth. It's like looking through a lens with a gradient of index of refraction. That's because the index of the Earth, meaning the velocity of sound, changes tremendously with depth. So, you have to have some way of accounting for the velocity change in the Earth. I've just given tremendous short shrift to so many things, but to process data so that it can be interpreted, you have to do something called migrating your image. Whenever you get data, you have to migrate it. Today, we do that with a huge digital computer. The data sets are astronomical, and world-scale computers are used. That's why our company today has a peta-scale supercomputer, as do most of the other major oil companies. You need that, because you have to digitally migrate. You have to build velocities. It's very complicated. I will not talk about full-wave inversion and all the technologies used. But if you turn the clock back, you had none of that. Let's think of a 1970s computer. How much memory did you have? 50-100 kilobytes on a mainframe? So, you could only demonstrate how you could do things with a computer.
Our company did it as well as others. But you actually migrated it with a compass. You may have used a compass yourself. You can do Kirchhoff migration with a compass. I won't teach it today, but it can be done, and it was done. You aren't going to find much oil that way; it's limited to very simple structures. So, the answer of the day was that because the digital revolution had not occurred in the 1970s, all you could find were very simple structures. That's really not where all the oil's been found. So, because of this challenge, the corporate research group for our company, Exxon, which is where I started, wound up with three big focus areas.
One area was to perform basic research in other forms of energy. So, at the lab, we worked on fusion, isotope separation, solar, and biofuels. Exxon had a second vision, which was amazing. They had the vision of the information revolution. At that time, you didn't have the entrepreneur money chain in Silicon Valley that you have today. So, they became one of the first venture capital funders; they started more than 20 companies in Silicon Valley with the clear vision of the information revolution. They funded companies developing displays, printers, storage devices, workstations, you name it, as well as an early commercial microprocessor, the Zilog Z80. I'll return to that later. I had an ultra-small role in its development, at most a tiny, tiny footnote. The information revolution bet was also a portion of the lab, and I did a little bit of other work to support it.
The third area was to research new ways to find or develop oil and gas reserves. This spurred work in the lab to develop hydrocarbon resources where we already had them: known reservoirs, or known resources such as coal. Within CSR there was a large amount of research on coal chemistry to make synthetic fuels. Physicists in the lab began to work on improving recovery from known reservoirs. At that time, you were only recovering about 20-50% of oil in place, depending on details of the reservoir. So, you can find it where you have it, and with a successful enhanced recovery technology potentially increase the amount of oil you can get out. So, that was one of the bets that was placed down that I will come back to later. Hidden within that bet is the birth of soft condensed matter physics. Finally, there were also refining and chemicals bets placed at the same time, but we'll skip over those because I don't think they're germane to this discussion. Okay, so, following the history timeline — and this is the big history picture — by the mid '80s, the information revolution is well underway. Digital computers have advanced to the point that you can now process seismic. You're now able to find reserves. That advance has unlocked this vast amount of hydrocarbon fluids and reserves that you see today on the markets. If we transition forward, we began to address a challenge that has influenced my career, and I will highlight a few things I have done to meet it. This is making sure the world has the energy it needs while also minimizing climate-change risks. In our company, it's expressed as the dual challenge, which I think is a very articulate framing. Much of the world lives in poverty. Let's be fair. We're talking of billions of people. They have the desire to move into the middle class, and everybody knows as you move into the middle class, what's the first thing you do? You use energy. Not a big surprise.
So, there's a major need — there's a hunger for a lot of people in the world to be able to improve their lives, and with that, they will use more energy. At the same time, the question is how do you mitigate the impact of greenhouse gas emissions? On that dimension, I will say things that I've been doing. I won't say everything the lab is doing, but I'll say there is a lot of research that we're bringing forward.
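As an editorial aside on the seismic discussion above: the Kirchhoff migration that was once done with a compass has a simple digital counterpart. The sketch below is a toy constant-velocity, zero-offset diffraction stack, nothing like a production seismic code, and every parameter name is invented for illustration. For each image point it sums the recorded amplitudes along the diffraction hyperbola they could have come from:

```python
import numpy as np

def kirchhoff_migrate(data, dt, dx, v):
    """Toy zero-offset Kirchhoff (diffraction-stack) migration.

    data: (n_traces, n_samples) array of recorded amplitudes.
    For each image point (x0, t0), sum recorded amplitudes along the
    constant-velocity diffraction hyperbola
        t(x) = sqrt(t0**2 + (2 * (x - x0) / v)**2).
    Real codes add depth-varying velocity, amplitude weighting, and
    anti-aliasing; none of that is attempted here.
    """
    n_tr, n_t = data.shape
    image = np.zeros_like(data)
    for ix in range(n_tr):              # image position index
        for it in range(n_t):           # image time index (t0 = it * dt)
            t0 = it * dt
            total = 0.0
            for jx in range(n_tr):      # sum over all recorded traces
                dist = (jx - ix) * dx
                t = np.sqrt(t0 ** 2 + (2.0 * dist / v) ** 2)
                jt = int(round(t / dt))
                if jt < n_t:
                    total += data[jx, jt]
            image[ix, it] = total
    return image
```

Even this crudest form costs on the order of n_traces squared times n_samples operations, which is one way to see why the job was hopeless on a 50-100 kilobyte 1970s mainframe and routine on a petascale machine today.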
So, Harry, the dual challenge, the idea is that on the one hand it's providing energy to all of the people who aspire to a more secure and comfortable life, and at the same time, it recognizes that all of this increased energy use has implications for global climate change.
Precisely. One way the company is currently responding is to focus on CO2 capture and sequestration. On the research side, the lab has been exploring a whole bunch of options, and building a science base to create more options. The list of concepts being explored is fairly large. There are a lot of innovative and clever ideas fermenting that give options to capture CO2. There are concepts on both point sources and negative emissions technologies to remove CO2. Some of my research has focused on technologies in this space. Okay, so, I went longer on that than I thought I would. We're going to be over time. I think I gave you all my ground rules — everything I'm going to talk about has been released in a patent or publication that I can point to, so it'll basically be a non-proprietary discussion.
I know what's certainly non-proprietary. We can start with your background. We haven't learned anything about where you're from and your family background. Let's start with your parents. Tell me a little bit about them and where they're from.
Okay. I was born in Cleveland, Ohio. My mother never talked much about herself, but one time, years ago, I gave a seminar at Washington University in St. Louis and looked at a plaque, and she was in one of the first graduating classes of women from Washington University. She had never told me that at all. My father went to Western Reserve and had excelled academically. As I understand, he had been accepted for graduate studies at Harvard, but it was the Depression, and the family passed because they couldn't afford it at the time. That's what I'm told by other family members, but my dad never told me that.
What was his field of study? What did he want to pursue?
He basically was doing a business degree, and I don't know what his actual curriculum was at that time. What he professionally did, was computational and accounting work. So, he was an early user of IBM equipment. In the Second World War, he had gone off and was in the Far East, and also told me very little about himself ever. He never, ever, ever spoke of the Second World War. It wasn't until he died that I found a citation for code breaking. It was kind of obvious, because he was in Papua New Guinea. It was just an obvious thing — he was on the Missouri at the signing, and he was only a Lieutenant Colonel. So, there were some anomalies that I knew were there. He never, ever said one word. So, I have no idea. In the end, he became the Chief Executive Officer of the Cleveland Transit System.
You grew up in Cleveland, Harry? Your whole childhood was there?
Yeah, Euclid, Ohio, actually. It was a lovely place to grow up. At the time, it had an excellent industrial tax base, which has since eroded. That huge, enormous industrial tax base basically funded a wonderful education. Facilities at the high school were incredible. I was taught by people who had their master's degrees and PhDs in the science areas once I entered tenth grade. It was an incredible education. I can't say enough about it. What a job. Unfortunately, that has eroded. Like a lot of things, it has fallen on harder times when you don't have the funds to back it.
Harry, were you interested in science from an early age?
I think from the get-go. From day one. As far as I know, at the youngest age possible, since I could ever remember, I was always interested in science. All the things of that era — amateur rocketry, chemistry — coming out of the high school, I was planning on becoming a chemist at Case. I went to college locally, which I had always wanted to do anyway. It was a place I admired at the time. I went there to become a chemist, took organic chemistry, and suddenly realized I'm learning a lexicon. There's no "why?" to it. That's not the way it's taught today. I know several physical organic chemists. They promised me that this is not the way organic chemistry is taught anymore. So, I wound up loving physics. Just completely enjoyed it. All the reasoning was there, and the logic. I just enjoyed it to no end. So, at Case, it was a transition. It was a very quick transition — about the first semester. Case gave you a very, very — scientifically, well-rounded education. I'm not going to say so from a humanities standpoint. At that time, what you did is you took one humanities each term, but you basically took a wide array of science and math courses. I was exposed to metallurgy, chemical engineering and most scientific disciplines. The core curriculum made you sample a little bit of most scientific disciplines. From a science standpoint, this training has served me well.
Harry, did you ever cross paths with Sig Hecker?
I've heard the name, but I don't think I've crossed paths with him.
He was a metallurgy guy at Case. He went on to be the director at Los Alamos. World-class authority on plutonium science.
Yeah, I've heard the name, but I am not really sure if we met. I might have met him, because I've been at Los Alamos many times, and met many of the people there.
So, Harry, initially, you were focusing in chemistry.
Well, it was very short-lived. After high school, I was convinced I would be a chemist. I liked physics, but I figured I'd be a chemist. Then, I got to Case, and almost immediately became a physicist. I've since, by osmotic pressure from all my colleagues, learned a lot of chemistry. And, to be fair, in full disclosure today, out of the clear blue, not seeing it coming whatsoever, about two and a half weeks ago I learned that I won a national prize from the American Chemical Society. It is the Storch Award. It's a national award in energy chemistry. I will be the 2021 recipient of that.
So, I guess I've come full circle, possibly the only reason I won it is because of all my colleagues who taught me. I probably had the most expensive chemistry education known to man if you think of the time invested by my colleagues, and how much my colleagues are paid. You're going to see more and more as I follow the evolution of my career, I start to transition to many different disciplines. I hope I always bring the perspective of a physicist to what I've done.
When did you start taking physics courses?
High school. Probably the first was junior high, an introduction. In high school, I had an advanced physics course. Then, I came to Case and started right away with physics, and just straight on from there. I really didn't change my trajectory, other than I stopped taking chemistry courses. That was about the only trajectory change that ever occurred. I found I loved physics when I was at Case. Excellent teaching, fantastic opportunities and I want to mention so many people but again, I'm passing on naming people.
Harry, I wonder if you sort of naturally gravitated toward solid state physics as an undergraduate.
I did. I was going to get there, yes. I went to solid state physics in a natural way. What happened was I got two NSF summer traineeships that introduced me to solid state physics. They were with Don Schuele's lab at Case. He worked on pressure derivatives of elastic constants as well as many other topics. My involvement was with the work he was performing on pressure derivatives of elastic constants. Those derivatives arise from the anharmonicity of solids. The work was funded by the Naval Research Lab, but my funding came from NSF summer traineeships. What you do to measure them is determine the change of sound velocity at high pressure to approximately part-per-million accuracy. So, I suddenly got involved in high accuracy measurements. Conventionally, they were done in a pulse-echo manner, and I became involved in a new, more accurate measurement technique. I was involved in developing the theory for a continuous wave resonance method of a piezoelectric crystal bonded to a cylindrical sample, because you can measure frequencies very accurately, as we all know. So, that became a standard method thereafter, and I did a little something to move it along. Of course, what you're measuring is the electrical impedance of the crystal. You're measuring the resonance through the electrical impedance of the crystal that you put on the sample to drive the sound. As such, I gravitated to solid state physics early on. Upon graduation from Case I was accepted at Iowa State University and received an NSF fellowship. The NSF fellowship is what took me there, because it paid for graduate school and minimized my teaching requirement; I knew at that point that I didn't really want to teach. At Iowa State I wound up having two advisors. The first was Sonny Sinha. He left very early in my career, and my real advisor was Constantine Stassis.
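The continuous-wave resonance idea above, converting a hard velocity measurement into an easy frequency measurement, can be caricatured in a few lines. This is a sketch only: it uses the bare free-rod standing-wave relation f_n = n*v/(2L) rather than the full analysis of a piezoelectric crystal bonded to a cylindrical sample, and all function names and numbers below are invented:

```python
def sound_velocity(f_n_hz, n, length_m):
    """Free-rod standing wave: f_n = n * v / (2 * L), so v = 2 * L * f_n / n.
    Tracking f_n to roughly a part per million gives v to the same precision."""
    return 2.0 * length_m * f_n_hz / n

def dv_dp(f_lo_hz, f_hi_hz, p_lo_pa, p_hi_pa, n, length_m):
    """Finite-difference pressure derivative of sound velocity from
    resonance frequencies measured at two pressures (sample length change
    is ignored here; a real elastic-constant analysis must correct for it)."""
    v_lo = sound_velocity(f_lo_hz, n, length_m)
    v_hi = sound_velocity(f_hi_hz, n, length_m)
    return (v_hi - v_lo) / (p_hi_pa - p_lo_pa)
```

The payoff of the method is that a tiny pressure-induced velocity shift shows up as a resonance frequency shift, and frequency is one of the few quantities that can be measured to a part per million routinely.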
Sonny Sinha would go on to San Diego?
Many trajectories, but he's at San Diego currently, yes. You'd be completely correct. I would say Stassis is the one who really shaped a lot of my early thinking. What he forced me to do was to think more deeply about problems, look to get insights into them, and then develop simplifications. This was the essence of his scientific approach, which I use in many instances, although often I hybridize it. As my career developed, I became a generalist rather than a specialist: I broadly explore topics initially, and then in areas of interest I look deeper, trying to find insight and simplifications as well as analogous problems. With Stassis I did a combination of experiment and theory. My thesis was basically theoretical; it was on magnetic neutron scattering. We had been studying a lot of things about magnetic scattering from rare earth elements, but this was more about form factors and the basic theory of magnetic neutron scattering. Although there had been many treatments of magnetic neutron scattering, they were fairly detailed and complicated when applied to problems such as scattering from rare earth elements. When we started to think about them and looked at them in depth, we suddenly found an analogy to the atomic radiation problem, which has been elegantly solved. Some of the greatest physicists in the world took care of that one, thank you very much. So, once you've found that, you've unlocked the simplification. It was the big unlocking from my point of view. You could build on Racah algebra formulations and simplifications that had been developed. You could employ numerous tricks from atomic physics. For a numerical treatment I did some Hartree-Fock-Dirac calculations of form factors, and we published eight or nine papers before I left, covering almost all of this work. That is a very brief summary of my background, and looking back, I appreciate the general thought process that started to shape how I approach research problems.
I also knew that a lot of me was different than the way we worked, because a big part of me wanted to do something that was innovative, creative, and industrial. I wanted to work on really practical problems. That was a big portion of my persona. So, that was a desire that I had when I finished up, and when I went hunting for jobs. I was fortunate that Exxon made me a job offer right out of school, with no post doc. They took a big gamble on me. They took a gamble on some of the other people that you've interviewed. These are some very famous people. There weren't too many that they took such gambles on and I am probably the black sheep of the group, by the way.
Harry, what year was this when Exxon offered this to you?
I joined in 1976.
Okay. And you joined what? Was it the lab? What was the division that you initially started with?
I was hired into the Corporate Research Lab. I was hired to work on laser fusion, specifically laser fusion target fabrication. I'll just say that strangely, with a resume in neutron scattering and theoretical work, I wound up interviewing for the same position at the national labs. Something about my resume, I don't know. Don't make me explain that to you. It was the same job offer, but in the end, what drove me to Exxon was the scientific environment. Even during the interview, it was obvious that Exxon was an innovative, interesting environment; I'd never seen anything like it. A myriad of things were going on, and it is impossible to do justice to the whole backdrop of all the energy research that had already taken off. It was an eye-opening moment for me. They had a vision of becoming the Bell Labs of energy. They sold it. I liked the idea that you could come there and be creative.
On that note, were you aware of the idea that Exxon had a vision to be the Bell Labs of energy? In other words, that it was really committed to basic science, and letting the staff scientists pursue what was intellectually interesting, and if that related to the bottom line, great, but there shouldn't be any particular pressure to direct work in that area. Was that your sense early on?
I think one sometimes paints things as monolithic. I don't think it's quite the black and white that everybody wants it to be. I think there was a portion that let people pursue their interests but also at the same time a portion focused on the new directions the company was exploring. That portion was focused on a mission with the idea that fundamental approaches would move the needle. It wasn't a crisis, but there was urgency as I tried to say in the background. Being fair, that’s good news for research. There were several grand ideas (or bets) that defined a mission and there was the idea that unfettered research will also take you somewhere. You don't know where it's going to be, which is fair. That's been the history of the world. So, I'm willing to say a toe in every pond — that's why I didn't want to be monolithic about this whatsoever.
I guess a simpler way of asking that question is when you got comfortable in the lab, and you saw the lay of the land, at what point did anybody, if at all, come to you and say, "Harry, you've got to work on something that's going to be profitable to Exxon at some point."?
Well, I was really hired on a different axis than some of the other people were. I was hired specifically to work on laser fusion target fabrication.
How would this have related to overall goals that Exxon had at the time?
It's very simple. When I was hired, they had an honest belief, which the national labs and everybody else shared, that laser fusion was going to succeed, because everybody thought they understood the physics behind it, for a number of reasons which I will not say specifically, because it will probably have to get redacted anyway. They believed, truly, for what should have been a lot of good reasons, that the physics was well understood, the scaling was understood. When I came, they had a little tag line that was, "Online in '79," meaning they thought laser fusion break-even and even net energy production would occur by 1979. With the scaling laws believed at the time, that was a reasonable idea. I was hired specifically with the idea that Exxon could become a fuel supplier for thermonuclear fusion. As such, they wanted an early entry into target fabrication. Their logic was simple: if we supply fuels now, we'll continue to be a fuel supplier. Bring it on. I was given some freedom, but I was part of a mission. The laser fusion project was collaborative with the Laboratory for Laser Energetics at the University of Rochester. I would spend about three days a week up there, so I commuted back and forth between the Exxon lab in Linden, New Jersey and Rochester every week. Also, a lot of the people I was working with were with the Institute of Optics at the University of Rochester. So, I learned a little bit of optics too, on the side. I'll drop one or two names, but not many, to try to minimize it. So, it was kind of a formative time. The other exciting feature was basically that when I joined, we didn't really have a capability to make most of the desired laser fusion targets. So, I got to come in on the ground floor and help build the capability. A lot of that was to build micro-fabrication facilities. So, we're back in the '70s. There aren't many people doing micro-fabrication at the time.
There are some, but it's not what it's become today, and this thread will go through my career. So, what's a laser fusion target? When I arrived, the simple target that people were playing with at the time was about a 100 micron diameter hollow glass sphere. It typically had a ~1 micron thick wall that was individually characterized. So, you individually selected them using a Mach-Zehnder interferometer to get targets that had a wall thickness uniformity better than about a quarter micron, and then filled them with deuterium or tritium. So, how do you fill a hollow glass microsphere with deuterium or tritium? You basically put it in an autoclave, bring it to high temperature and pressure, and diffuse the gas through the wall. Initially I helped develop some autoclaves and ways to quantify the amount of gas filled. Tritium, of course, is a little different kettle of fish, because it has about 10,000 curies per gram. Nasty. So, we actually had that done at a national facility. We didn't handle it ourselves. The more advanced targets started to get into the beginnings of the micro-fabrication effort. Some of the advanced targets had thin film coatings on top of them, including a wide variety of metals, ceramics, and polymers. So, you had to deposit them uniformly on the exterior of a microsphere. We won't go into all the tricks of the trade that we built, but you start to do physical vapor deposition, sputtering, evaporations, electroless coatings, plasma polymerization, and gas phase polymerizations. We did all of this and collaboratively built a capability as part of a group effort. One of the things I invented was a way of making uniform multilayer colloidal particle coatings on microspheres. We found a way to coat the microsphere surface with one layer of ~250 angstrom alumina colloid, and then by electrostatic attraction adhere a monolayer of oppositely charged silica colloid to it.
Using colloids with different isoelectric points we could keep going back and forth to build a coating from nanosized particles. From there, you got to the fancier micro-fabrication stuff, which would be a shell within a shell. So, that would be a ship-in-a-bottle problem, and that was done with micro-fabrication. We did early versions of plasma etching and lithographies. It would take a while to describe how we did that. As previously mentioned, one of the things I had a large role in was answering the question of how much gas was held inside a target. The issue was that just because you permeate a gas into one of these microballoons, that doesn't mean it's going to stay there. Glass composition dramatically affects the rate of diffusion, so the question was, how much gas is in one of the targets after you've stored it? You can't go and prepare it the day before you use it. One of the ways we answered this question was by crushing an individual microballoon inside a mass spectrometer and quantifying the pulse of gas released. Where I think I started to get clever was with tritium. I came up with a way of counting the beta particles that came through the glass wall from gas in the interior, and devised a way to correct for the attenuation. This was a nondestructive assay, so right before it got shot, you knew exactly how much tritium was inside. That helped you understand the neutron yield that you're getting. Other characterizations we made significant improvements on were wall thickness and sphericity. We developed a variety of fancy stuff. We had one of the first AC interferometers in the world, built by the Institute of Optics. This extended the optical resolution limit from a fraction of the wavelength of light down to lambda over ten or twenty. I came up with a way of doing very, very high resolution radiography. Kind of a tricked-out system.
You take a scanning electron microscope and change the column so you can get an intense beam that you can really focus to a submicron size spot. You might want spot sizes smaller than that, but the technique required a huge beam current. Then the target is mounted in a cassette inside the SEM and exposed by a fan beam projected from the spot. The radiograph was recorded with high resolution silver halide film, and the spatial record of the silver concentration in the developed film was read out with the SEM rather than optically. With the SEM readout you can beat a tenth micron resolution easily. Although I really enjoyed this, the bigger story, which is really where we are going, is that it became clear that something was wrong with the postulated laser fusion physics; the X-ray emission from the targets didn't have all the right characteristics.
How did you know what to look for in terms of defining what should have been the right characteristics?
To be fair, that was other people, in particular Barukh Yaakobi, from the Laboratory for Laser Energetics at the University of Rochester. That's who I worked with quite a bit. Also, neutron yields weren't scaling correctly. So, the targets weren't behaving like the models were saying. There were a lot of signatures out there that things weren't working right. Barukh Yaakobi proposed a method by which we could diagnose it. All he wanted was to have us fill neon, argon, or xenon into the microballoons, and he would be able to diagnose the physical properties of the compression using X-ray spectroscopy in a clever way, with even more information being available from time resolved X-ray spectroscopy. There was just one tiny problem: neon, argon, and xenon have really low diffusion coefficients in borosilicate glasses. They don't diffuse in a finite time, so filling by high temperature permeation wasn't going to work. I wound up inventing a way of making such a target. We called it drill, fill, and plug. The way you did it while keeping most of the symmetry was to drill a sub-micron hole through the glass microballoon and seal gas inside by melting a micron size colloidal polymer sphere over the hole in a pressurized autoclave. So, how do you drill the hole? Well, you go out and get somebody who you know has a picosecond laser, focus it through an optical microscope onto the surface of a ~100 micron size glass microballoon, and fire it. It wasn't obvious that we'd get a sub-micron hole, but we were able to do it the first time, with the target held in place by salt crystals so it didn't move around. Kindly, Gérard Mourou, who recently won the Nobel Prize, supplied the laser. You may have talked to him.
After you've got the hole, which is pointing upwards, you take a colloidal particle or polyball, electrostatically attached to a glass slide, micromanipulate it over the hole, and touch it to the hole. Alignment of the polyball with diffraction from the hole is done with an optical microscope. As you might have expected, the change in the interaction of the rough surfaces transfers it perfectly. So, you have your plug in place. You then take the substrate holding the microballoon, with a hole and polyball attached, into an autoclave. Pressurize it with neon, xenon, or argon and heat it, and lo and behold, you melt the plug and seal high pressure gas inside.
These were extensively used in several campaigns that helped reveal what was going on and resulted in several publications. It turns out that hot (several kilovolt) electrons race through the target, heating the whole thing up before you ever try to compress it, and it explodes. The idea that hot electrons could be generated had been advanced by Estabrook. In effect, 1,000 to 10,000 photons get together and accelerate one electron. Those are called hot electrons and weren't accounted for in the original theory of laser fusion, which only has the idea of laser ablation driving a compression. So, it was working in a completely different mode. It made fusion essentially no longer "online in '79," but an ultra-long-term proposition.
We made a lot of X-ray imaging devices to perform other studies of the x-ray emission. One coming from a collaboration with a colleague was a high aspect ratio gold zone plate. The micro-fabricated structure was a 50 to 100 micron thick gold zone plate, with tenth micron outer zones. So, what I just told you is we were able to fabricate aspect ratios of 1000 to 1, with tenth micron resolution in the late '70s, which even today is a pretty impressive micro fabrication achievement. It came from a combination of anisotropic plasma etching, and all kinds of other tricks.
One of the things we learned in studies of X-ray emission was that it could be optimized and a relatively high conversion efficiency could be achieved. The numbers were impressive, and we realized that we could use a laser plasma as a source for X-ray lithography. This led to the first demonstration of using a laser produced plasma as an X-ray lithography source. Lithographic resolution in the demonstration was approximately 1,000 angstroms. The technology ultimately was licensed to a company called Hampshire Instruments that went on to make a prototype stepper. So, in the end, we arrived at a point where laser fusion as a major energy source receded far into the distant future.
Harry, were you surprised how long you stayed with it?
No, it took a while. From my own point of view, I think what helped is that we approached this from a different perspective than others working in this space such as in the national labs.
I'll stop the laser fusion narrative there and look at several side journeys I had during this time. One was associated with the second bet, that we could get involved and play a significant role in the formation of the information industry. Exxon started numerous companies in "Silicon Valley" working to develop all kinds of displays, printers, and storage technologies. In addition, Exxon funded a company called Zilog, which made the first commercial 8-bit microprocessor. As I mentioned before, I wound up playing an ultra-small role in the development, at most a tiny, tiny footnote. They had a yield problem initially, which wouldn't be too surprising. So, they were only able to sample good devices. I can't remember the numbers, but let's say initially one out of a thousand was a good device. I was sent out to help mitigate or correct that problem, with rather crisp instructions. What I would have to say is that I did assist in mitigating what they knew up front were contaminations in both the fab line and the mask sets. So, I did some things to help mitigate that, and also introduced some early gas based plasma etching technology, which later became very big. We had a homemade etcher patterned after one at Bell Labs and advocated transitioning to it. There were several keys to fixing the yield problem, but it was contaminations, although it is not clear which of the many changes solved the problem. The other side journey I had was helping colleagues in the lab who were studying a recently discovered effect called Surface Enhanced Raman Scattering (SERS). It was early days, and the effect allowed you to study molecules adsorbed on the surface of gold or silver island films as well as colloidal particles of these materials. The real discoverer never got full credit for it, but that's a different question. That was Alan Creighton, who was the true discoverer of surface enhanced Raman scattering.
I got to know him when he came to the lab, and worked with Dave Weitz on this topic.
Harry, is this roughly the time when the term "soft matter physics" comes into play?
Not quite yet. This is just a little before. There's a pivot in a short period of time to a focus on what has become known as soft condensed matter physics. Work on Surface Enhanced Raman Scattering drove a lot of thinking about interfacial interactions, surface structure, and colloids. The original things which I had a hand in were the fabrication and characterization of gold and silver island films, which were the textbook things to play with. Alan's work, as well as work by several of my colleagues, drove a lot of thinking about colloids and their interactions, including adsorption of molecules to their surfaces, stability in solution, aggregation, their potential to order, and a host of other phenomena. I don't know what was in everybody's head. There were a lot of things going on with many dimensions, but I'm going to say that starting to play with colloids at least contributed to the thinking of the people at Exxon who ultimately drove soft condensed matter physics.
What explains that? What's the transition?
Well, I think the transition is driven by a number of factors. Scientific curiosity is one, which you've kind of pointed to, because you start to look around at the status of colloids, foams, wetting, complex fluids, and rheology. These topics were in the literature before, but often had not been approached from a physicist's standpoint. Much of the literature was in chemistry journals and was approached with a different mindset. It became infectious, and a lot of great physicists at the lab started to think about these topics from a physics standpoint. You also have the confluence of the question I gave you before: How do you find oil? Well, at that time, one way was in existing reservoirs where we already had it. A viable enhanced oil recovery (EOR) technology could unlock more oil. So, there's a business driver to go with it. That's not a bad confluence. There's all kinds of scientific things to do, and there's a business application. I happen to like that as a fermentative environment, and the ability to be creative in it. In the end, it is ground zero for soft condensed matter physics. In a short time there were so many players in the lab that it became difficult to accurately count them. There were ten to twenty principal investigators (PIs), consultants, innumerable post docs, and visiting scientists. By the way, if you can, would you try to extract from Paul Chaikin for the historical archive his talk for the CSR 50th? Paul has been one of our most valued consultants and had one of the best pictures of what everyone was contributing. Would you please ask him?
That is great. Even though it was very comprehensive, it still doesn't have everybody, but it's got most of it. I think there may have been a recording of it. I don't know. I was going to ask about that.
This is the 50th anniversary of the CSR archive?
Well, at least he will have the slides used in the talk. We had celebrations at the national meetings, but Paul's talk was a one hour long talk given at the lab. I recommend it highly. In that, you'll see that I have a very small role in the birth of soft condensed matter physics. I collaborated with and tried to help a lot of my colleagues because I was scientifically extremely interested. Obviously, you know, amongst the people who influenced me the most was Dave Weitz. Particularly in the time period when we shared a building together. We were in a building out back, away from everybody else. At that time you no longer have emails coming to you, so you had to walk outside for 10 minutes to see us. We were quite a ways away from everybody. We used to have a tradition of doing Friday afternoon experiments in what was called Building 8, often with Paul Chaikin in the mix. The deal was somebody would propose an idea sometime on Friday, and the goal was before we could leave at night, we had to do an experiment to show some insight about it. Create the experiment and do it. You're on the clock. It was a wonderful, wonderful environment, and to me, very fermentative. Of course, you know Dave Weitz is extremely creative, that goes without saying. From management's point of view at that time I had received a new assignment.
But you're still in the lab. It's still in the same division.
Yes, I stayed in Corporate Research, but I moved to a different group because the fusion group disbanded. I moved to a group focusing on photovoltaics. It had been up and running for a period of time. I was working in a group focused on amorphous semiconductor solar cells. There was another group working on crystalline silicon solar cells. As I said, research at the lab covering all the energy sources from heaven and Earth was going on. For amorphous silicon the issue at the time was the efficiency, and still is today. The efficiency of amorphous solar cells was well below that of silicon. There were many reasons for that, but one of them, not the only one by a long shot, was that it's a thin film cell and it has a very weak absorption edge. So, it gets very poor utilization of near infrared light. A very well-known gentleman, Eli Yablonovitch, had proposed the idea of optical enhancement. The idea behind it is that the index of refraction of the semiconductor is very high. If you can get light to scatter inside the solar cell you can trap it by total internal reflection. He showed that the path length for absorption could be increased by four times the index of refraction squared if you got perfectly random scattering. This was expected to give a significant efficiency boost for all solar cells, with a particularly beneficial effect for amorphous silicon solar cells, which have weak infrared absorption.
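Yablonovitch's result is easy to state quantitatively: for ideally randomized (Lambertian) light trapping, the absorption path length is enhanced by a factor of 4n². A minimal sketch, with assumed near-band-edge refractive indices:

```python
def yablonovitch_limit(n):
    """Maximum absorption path-length enhancement for ideally
    randomized (Lambertian) light trapping: 4 * n**2."""
    return 4.0 * n ** 2

# Assumed indices near the band edge (illustrative values):
for name, n in [("crystalline Si", 3.5), ("amorphous Si", 3.6)]:
    print(f"{name}: n = {n} -> ~{yablonovitch_limit(n):.0f}x path length")
```

With n around 3.6 the factor is roughly 50, which is why the scheme was so attractive for the weakly absorbing infrared tail of amorphous silicon.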
What was the experiment that confirmed this?
Well, that's where I come in, and shortly I'll come back to that. There was also another idea for optical enhancement put forward by another gentleman at the lab who's well known, Ping Sheng. He proposed an enhancement effect from a coherent grating that diffracted incoming light into guided-wave-like modes. The only problem with the coherent approach using a diffraction grating texture was its dependence on the incident angle of the light. So, your light has to be coming from specific angles. In principle you can get a much bigger enhancement, but the light has to come only in specific angular directions, because it coherently diffracts. Eli's randomizing approach was more general because it didn't matter what direction the light came from. So, we went about trying to look at both technologies. The bigger success, because of the randomness and its lack of dependence on where the light came from, was the random texture. What I did was apply my micro fabrication expertise and invent methods that would economically texture the surface of the substrate on which the cell was deposited with what we thought would be the optimal textures. For random textures, we estimated they might have to have a roughness on the scale of the wavelength of light inside the amorphous silicon cell. Because of the large index of refraction in the cell, approximately a 2500 angstrom scale of roughness was thought to be preferred. The challenge, of course, was to do this at pennies per square foot. It's an economic proposition. So, that led us to the creation of what we called natural lithography. It utilized a deposited monolayer of closely packed ordered as well as disordered colloidal particles as a lithographic mask. For the optical enhancement problem we used it as a micro etch mask to texture either the support or the cell. There are many ways of micro etching, and we used a large area ion beam at the time.
We discovered that spin coating could make a monolayer of ordered polyballs having a grain structure with dislocations connecting well-ordered regions. In addition, we made random textures with colloidal coating techniques. We also discovered that we could make ordered textures using polyballs trapped at an air-water interface and lifting them onto a substrate, much like a Langmuir-Blodgett film. Specifically, you take a trough filled with water, get the colloids to trap at the surface, compress them into an ordered array by putting a surfactant on the surface, and lift them off onto a substrate. If we wanted an even higher degree of ordering, we used a technique we created called replication lithography. We took a diffraction grating, coated it with a few monolayers of a release agent, spin coated a polymer thin film over the surface, floated it off, and literally transferred the grating structure as a lithographic mask. As such, we invented a very fancy way of taking a ruled diffraction grating and making thin film copies time and time again. We wound up using a variety of these techniques to make the first optically enhanced solar cells. The ones that really got the big boost were those that had the random texture. What a surprise. In full disclosure, another approach by a member of the same group, Tom Tiedje, also produced an optical enhancement, with a coarser, sand blasted and etched texture structure. Tom and I talked about the order in which we had done the work, and we still don't really know. I happened to publish first. One note is that today, some form of optical enhancement is used in every solar cell in the world, period. Many of us have observed that a slogan for the optics of all of today's solar cells should be "Exxon inside." A few of my methods are occasionally used today in some semiconductor processing techniques. Not extensively, but there is some use.
In a side journey, I did quite a bit of work on stuff that was going on in the soft condensed matter area. Collaborations developed with many people who wanted lithographic structures to study the effect of roughness and chemical heterogeneity on wetting. We also collaborated with a couple of people to make what could have been, we're not real sure, the first micro models used to study fluid flow. They look very much like many of the micro models you see today and were used internally to study EOR processes. In addition, we collaborated in studies of the structure of Langmuir-Blodgett films, Penrose tilings, optical scattering, localization phenomena, and making X-ray waveguides. The last three are not tied to the soft condensed matter physics work. The other direction I started to engage in was developing an understanding at the nano-scale of structure-property relationships in materials that were being developed in the lab. I had acquired a transmission electron microscope, and we started to apply a variation of natural lithography to make TEM thin sections. We deposited a few colloidal particles on the surface of the sample to be studied and used them as an etch mask in an ion miller to form small 500-1000 angstrom diameter posts. A little thin strip or slice was taken from the sample, turned on its edge, and put into the TEM to image the structure exposed on the circumference around the edge of the posts. So, instead of all the laborious techniques that were around in that era, we could walk in the lab at nine in the morning and have a sample in the TEM by one o'clock. Today all of these have been supplanted by the elegant focused ion beam methods that make beautiful thin sections for the TEM. But for that era, our method was fast and productive.
That's a regular day, or that's a productive day?
That's a regular day. Productive is if you want it by ten o'clock in the morning. So, we imaged a whole lot of samples at the time. We had a lot of materials work going on. At the lab, Tom Tiedje had invented amorphous semiconductor superlattices, and Theodore Moustakas was exploring a wide variety of other superlattice compositions. He was synthesizing gallium arsenide superlattices along with tungsten carbide metal lattice structures and diamond-like films. Later I will focus on the study of one of these materials that changed my life. One of my early Science articles was a study of molybdenum disulfide catalysts. Moly disulfide is a layered structure that with a promoter forms the basis for several hydrotreating catalysts. What we did was image catalytic sites. Layers exposed at the edge of the posts can be decorated with promoters and also converted to rag structures which are present in actual catalysts. So, it turns out it gave you very microscopic images of what happens at catalytic sites of one of the hydrotreating catalysts of the day. For the Science article we studied the structures, the catalytic activity, and the optical absorption, correlating the findings, which became a cottage industry internally for a while.
Another big transition occurred with the vision of one of our directors, Peter Eisenberger. He had driven the lab to take a strong position in beamline science. He had the vision that you could use the synchrotron to create a new three dimensional X-ray microscopy technology: microtomography. At that time, there was no microtomography, period. It hadn't been done. When we started to look at that, it became obvious why. When you worked it out, for the desired resolution and the size of samples of interest, the data required would be equivalent to all the data collected in the hospitals in the United States while the acquisition was occurring. A little on the large side. Today, that would not be true. This is old school. If you used the existing Shepp-Logan algorithm to invert the data set, we estimated it would tie up our Cray supercomputer for two months. That was going to be a bit of a challenge, and that was if the data was good. It also required a detector with unprecedented resolution and data volumes. So, teeny tiny problems were in our way. So we divided and conquered. The algorithm was done by Brian Flannery. If you ever look at Numerical Recipes in C and look at the author list, you'll see a Flannery on it; that's Brian. He successfully made direct Fourier inversion work. This idea had been out there and had been tried by many others who failed to make it work. We knew it was going to be at least a thousand times faster than the Shepp-Logan algorithm, but lots of artifacts can contaminate the inverted volume. Let's not get into the whole story about how that was avoided.
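The scale of the problem can be framed with a back-of-envelope count: reconstructing an N³ volume needs on the order of N projection angles of N×N detector samples, so raw data grows like N³. This is also the intuition behind the speed advantage of direct Fourier inversion, which via the FFT costs roughly O(N² log N) per slice versus O(N³) for backprojection-style methods. The sample size and byte count below are illustrative assumptions, not the actual experiment's numbers.

```python
def tomo_data_estimate(width_voxels, bytes_per_sample=2):
    """Crude microtomography data estimate for an N^3 volume.

    Rule of thumb: ~N projections of N x N samples are needed, so both
    the raw projection data and the reconstructed volume scale as N^3.
    Illustrative only.
    """
    n = width_voxels
    raw_bytes = n * n * n * bytes_per_sample      # all projections
    volume_bytes = n * n * n * bytes_per_sample   # reconstructed volume
    return raw_bytes, volume_bytes

# Hypothetical case: a 1 mm sample at 1 micron resolution -> N = 1000.
raw, vol = tomo_data_estimate(1000)
print(f"raw projection data ~ {raw / 1e9:.0f} GB per sample")
```

A couple of gigabytes per sample is trivial today, but in the early '80s it pushed both detectors and tape storage, which is why the CCD-based detector and the state of the art tape drive were needed.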
What do you mean by artifacts?
Oh, my God. That's a polite way of saying you get garbage in your reconstruction. With a bit of imagination, you can see what your original target was supposed to look like. It's got all kinds of things ringing through the image, so by the time you're done, it gives you a dog's breakfast. The issue which I took on was to build an electro-optic detector that rapidly recorded large amounts of data with the required resolution. Our electro-optic detector converted an X-ray image to an optical image that was then recorded by a scientific grade charge coupled device (CCD). In the scientific community at that time, such charge coupled devices were utilized only by astronomers. To efficiently convert X-rays to light required a thick film phosphor, which would not yield the resolution we desired. If you want sub-micron resolution, you're going to be limited by the thickness of your phosphor. So, that wasn't going to work if we wanted efficient X-ray conversion. We invented a cellular phosphor with micron and even sub-micron cells with aspect ratios as great as 1000:1. Each cell confined the light, its own private little waveguide, and we called it a High Aspect Holey Array (HAHA). We incorporated it into the electro-optic detector, and then used a state of the art tape drive to record the large data volumes. This was state of the art technology for the early '80s. We put it all together, and it actually worked the first time. It was the most amazing thing I ever saw. We put it together, we went out to the beamline at Brookhaven, took our first data set, brought it back home, Brian had his code ready, and we inverted it. Everything worked the first time out of the gate. Rarely does that ever happen.
What do you think explains that, Harry?
I have no idea. Blind, dumb luck. I'm not sure, looking back, when I think of all the things we later knew could have gone wrong. The image and data quality did improve later on, but that first data set gave an acceptable resolution 3D data volume. We then went on to do a lot of work. We looked at rocks, coal, composites, fluid flow, catalysts, and archeological artifacts. Our work on rocks was the birth of digital rock physics. All the first images for digital rock physics came from us. So this field of geoscience was born in the Corporate Research Lab. Internally it was carried on at Exxon by my collaborator at the time, John Dunsmuir. John had collaborated with me in all of the previously mentioned microfabrication endeavors and then throughout the rest of his career utilized and advanced microtomography within the Corporate Research Lab. He perfected it and did all kinds of very clever things with it over the years. As you know, technology eventually spreads, and today it's an item of commerce. You can go out and buy a microtomography rig. Admittedly they cost a lot of money.
So, we're now coming to a transition point. We have moved just past the mid '80s. We're going into a transition, and it may not be as interesting to you as we go forward because for me it's going to be an evolution to applying physics in chemical engineering, and to applying physics in chemistry. Going forward the number of people I'm working with expands almost exponentially. I'm often at the inception, but I'm not often in at the kill, or at the end. It may be the way your career naturally transitions. In a large part this transition in the mid '80s was driven by the information revolution which had started to occur. It opened the door to finding reserves and enabling the oil and gas industry to expand. I'm given a bit of freedom to work and see what I want to do for the lab.
How was this communicated to you, Harry, when you get the sense that you can sort of branch out and do your own thing?
Well, I was always given freedom before. I'm going to say I was never, ever told I have to do x and never y. I'm going to say I was kind of always using 25% or greater of my time to do what I liked. We've tried to carry that into the staff today. Right now, that's a freedom in the lab that has been explicitly stated by our management. At times, I was well over 25%. Let's be fair about that. I can't pretend it wasn't. But I never got slapped for doing that, as long as you're productive and doing some good things with it. Although part of my nature is to have the greatest passion for problem driven versus unfettered scientific exploration, I deeply enjoy scientific exploration. I can't pretend I don't, but if you had to balance your life, where are you going to lean?
Let's focus on the transition to research on oil and gas. For myself, it enters through our invention of microporous superlattices. I told you about doing the TEM studies with microfabricated posts on Ted Moustakas's gallium arsenide and aluminum gallium arsenide superlattices. Samples had layers as thin as ~10 angstroms, a thickness that was often increased to see what was going on in his depositions. We looked at them and hypothesized that we could etch either the gallium arsenide or the aluminum gallium arsenide layer with high selectivity. We went about doing it and created very beautiful two dimensional slots by selectively removing one of the layers. The smallest estimated slot size was ~10 angstroms, which we claimed to be the smallest of all manmade lithographic structures, and images of our microporous slotted superlattices appeared on the covers of several magazines. Scientifically, we were using these to study two dimensional molecular confinement effects. We were looking at adsorbed molecules confined in the slots, doing spectroscopies on them, and studying isotherms and transport. These studies led to a pivot of my research and a focus on molecular transport. Really, that's what drove me.
Why, Harry? This was intellectually compelling to you?
It was compelling to me at the time. It suddenly opened a door to a bunch of areas I only tangentially knew about. There were people in the lab working on zeolites, and molecular confinement effects were known to occur in them. With zeolites, you can synthesize well defined molecular channels with characteristic sizes ranging from ~3-10 angstroms. Typically zeolites are crystalline aluminosilicate networks with pore and cage sizes defined to fractions of an angstrom. You can make all kinds of architectures out of them. Exxon and ExxonMobil are famous for them in reality. There is a huge number of structures you can study that are going to have different molecular selectivity, diffusion, and equilibrium characteristics. So, I began to think about the type of selective molecular separations that would occur if we made a membrane out of them. Nobody had made a large area zeolite membrane at that point in time. The way we approached the synthesis of a zeolite membrane was with an adaptation of some of the microfabrication techniques we have discussed. We trapped a monolayer of zeolite crystals at the air-water interface of a Langmuir-Blodgett trough, compressed them with surfactant, lifted them onto a substrate coated with a release agent, spin coated a polyimide diffusion barrier on top, ion beam etched to expose the surface of the zeolites, released the film containing the zeolite monolayer surrounded by the diffusion barrier, and studied transport. Again, we got lucky. We chose to microfabricate the membrane using an MFI zeolite, which has a pore size a little over 5.5 angstroms, and tried to separate two molecules that are chemically identical, para-xylene and ortho-xylene, having molecular sizes of approximately 5.5 and 6.5 angstroms. One would expect that ortho-xylene would be size excluded and have a lower diffusion coefficient compared to para-xylene. We got a ~2-fold separation factor.
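The ~2-fold result can be framed with the standard ideal-selectivity picture: to first order, a membrane's separation factor is the ratio of the two permeances (diffusivity times solubility over thickness). The numbers below are hypothetical, chosen only to reproduce the reported factor of two; they are not measured values from this work.

```python
def permeance(diffusivity, solubility, thickness):
    """Solution-diffusion permeance, D*S/L (units kept abstract here)."""
    return diffusivity * solubility / thickness

def ideal_separation_factor(perm_fast, perm_slow):
    """Ideal (single-component) separation factor: permeance ratio.
    Mixed-feed selectivity is typically lower, since the slower isomer
    can partially block the zeolite channels."""
    return perm_fast / perm_slow

# Hypothetical inputs: para-xylene (~5.5 A) fits the ~5.5 A MFI pore;
# ortho-xylene (~6.5 A) is partially size excluded, so assume half the
# effective diffusivity. Equal solubilities and a 1 micron membrane.
p_para = permeance(diffusivity=2.0e-12, solubility=1.0, thickness=1.0e-6)
p_ortho = permeance(diffusivity=1.0e-12, solubility=1.0, thickness=1.0e-6)
alpha = ideal_separation_factor(p_para, p_ortho)
```

The sketch makes the mechanism explicit: because the isomers are chemically identical, essentially all of the selectivity has to come from the size-dependent diffusivity in the pore.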
That excited us very much, and led us to start a much larger project trying to make zeolite membranes in a more industrially viable manner and have impact on the business.
What was the big promise here, Harry? What was exciting?
Well, what was exciting was that we thought we could change one of the major petrochemical processes in the world. You and I are wearing a product made from para-xylene; it's polyester. At that time, I think Exxon was the largest producer of para-xylene in the world. As we go forward there will be lots of these different vignettes, and you can see which ones you want. This vignette is simply that with a lot of work, improvement, and scale-up, we might take a petrochemical process that had been in place for quite a while, over 40 years, and completely improve it. The hope was we would reduce the capital required as well as the energy consumed. Early on, we also did something that should be part of many research programs. After we got the initial result and became excited, we said let's step back, take a look, and see what this thing could be. To do that, we went about doing what I call "zero order economics." It's something I try to teach and may not do a very good job here, because I normally do a much longer version. I do think it's something that should be used in industrial settings, and also when you're trying to change the world in different ways. In particular when there's something that you're after, or there's a tangible outcome, or it's something you think people should try to do. In the end, economics is usually going to determine what happens. I think it's something that most people are unfamiliar with. It's not taught in many curricula. I had to learn it on the job. So, I think it's something physicists could stand to learn a little more about. You don't want to become an economist. You don't want to spend all your time doing this, but you want to know who and what you're about, if you can. So, my method is, first of all, dream the dream. Step one. If you don't do that, you'll never get anywhere. Try to think what the best possible outcome is like. At this point I haven't done economics. But then put some realism in.
Okay, now we have dreamed the dream, and it's time to put a bit of realism to it. The next step is to make the best physical model of the implementation that you can think of. What would it look like? All the bells and whistles, all the dirt, and the nasty things you normally don't want to think about. Put them in. You'd be surprised. You have to scrounge around to find costs, but you can find them from many places. They're available. Details become important. When you break it down, what are things going to cost? In the case of xylene isomer separation, it involved a process concept, a flowsheet, an estimation of the heat and material balance, along with a rough idea of the equipment and energy required. Later we'll come back to other examples in which you can do this. One works out some crude estimate of the economics and sees what this thing looks like. Does this look like an interesting idea? Does this move the needle in some way? Basically, if you like it, then try to go forward. If not, try to think of a new idea, or modify it. A lot of things I have done have passed this screening and entered development. As you go through development, you retest it many times. So, almost every research project aimed at a development should have iterations of zero order economics before a formal, more detailed first order economic study is done. These are often called research guidance economics. So, I'll give you a proposition today. I just invented a 12% efficient solar cell. Congratulations. I just did it. Can I sell it, or do I have to pay someone to take it? You can answer that question. I know the answer to that question. I think you know what my answer is. The answer is you're going to have to pay someone to take it, because of the land and balance of systems costs when compared to the economics of present day silicon solar cells. You can't even give it away, because silicon is that cheap and that good. That's where we are today.
It isn't that you don't work on something that's less than 12% efficiency, because most organic semiconductors are less than 12%. I'm not saying you stop that. But you certainly want to be aware that you need improvements, or you have to find a niche market. You think about what else it could be. There are many examples. I picked on this one just for fun.
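The "zero order economics" screen described above can be reduced to a few lines of arithmetic: annualize the capital with a capital recovery factor, add operating cost, and compare against the incumbent. The sketch below is a minimal editorial illustration of that idea; all of the dollar figures, the discount rate, and the plant life are hypothetical placeholders, not numbers from the interview.

```python
def capital_recovery_factor(rate, years):
    """Fraction of capex paid each year when capital is annualized at a discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_cost(capex, opex_per_year, rate=0.10, life_years=20):
    """Zero-order annualized cost = annualized capital + operating cost."""
    return capex * capital_recovery_factor(rate, life_years) + opex_per_year

# Hypothetical numbers purely for illustration ($MM/yr):
incumbent = annual_cost(capex=500.0, opex_per_year=60.0)
new_idea = annual_cost(capex=300.0, opex_per_year=40.0)

# "Does this move the needle?" -- go forward only if the dream beats the base case.
worth_pursuing = new_idea < incumbent
```

The point of the exercise is not precision; it is that a crude annualized-cost comparison, done early, tells you whether the idea is worth any further development at all.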
Returning to the zero-order economic analysis that we did to frame the results of our initial experiments using a microfabricated membrane to separate para-xylene, we began with an analysis of the base case: i.e., how were the commercial plants configured to produce the valuable para-xylene isomer used to make polyester. Para-xylene plants in this era utilized molecular separation technologies to remove and purify para-xylene from a mixed xylene isomer / ethyl-benzene stream. The remaining ethyl-benzene was catalytically destroyed and the ortho and meta-xylene isomers were catalytically reisomerized to make more para-xylene. When you catalytically run an isomerization reaction, you can only convert ~20% of the ortho and meta xylene stream into para-xylene because of the equilibrium that produces the other isomers. Thermodynamics gets in your way. The xylenes plant basically isomerizes, separates, then goes back and re-isomerizes, separates, and keeps going until you consume all the ortho and meta isomers and produce only the valuable para isomer. So, in the commercial process there is a big, big recycle. Once we studied the base case, our analysis showed that if we could combine separation with isomerization to overcome the equilibrium limit, we could really lower the capital cost, Opex costs and energy consumed. We might be able to make streams that have 30-70% para-xylene in them. That looked to be very attractive. To do that, you'd have to have a membrane that would operate at ten bar or higher, at a temperature of 250-350 degrees centigrade. That told us what type of technology we had to develop. With our initial analysis we were able to convince our chemical company that this was worth the trip. Like everything, the first time out of the box, you have to start small and grow the idea. So, we launched a journey, and it was towards a targeted goal. Gradually this built into a big project. They understood this was high risk.
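The "big, big recycle" follows directly from the ~20% per-pass equilibrium limit: material that is not converted goes around the loop again, so the separator and reactor must handle several times the fresh feed. A small sketch of that geometric series, with illustrative numbers only:

```python
def total_recycle_throughput(per_pass_conversion, feed=1.0, tol=1e-6):
    """Total loop throughput needed to fully convert a feed when only a
    fraction of the recycled stream is converted on each pass
    (an equilibrium-limited isomerize/separate recycle loop)."""
    remaining, processed = feed, 0.0
    while remaining > tol:
        processed += remaining  # everything unconverted is processed again
        remaining *= (1.0 - per_pass_conversion)
    return processed

# With ~20% per-pass conversion, each unit of ortho/meta xylene is pumped
# through the loop about 5 times in total (geometric series -> 1/0.20).
throughput = total_recycle_throughput(0.20)
```

This is why combining separation with isomerization to beat the equilibrium limit attacks capital, operating cost, and energy all at once: it shrinks the multiple on every piece of equipment in the loop.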
They were willing to take on an extremely high-risk proposition where you had to create everything from scratch. The point was their economics came out like ours. The value proposition was there if we could do it. So, they were willing to back a multiyear effort with a goal to create a scalable, defect-free, high temperature MFI zeolite membrane. A huge amount of materials science work ensued with the development of all kinds of clever hydrothermal synthesis techniques and all kinds of inventions. I'm going to try to speed up because we're going to run out of time, so I'm going to skip a lot of stuff. I'm going to pick one of the inventions we made along the trip. A couple of things I'll highlight. One is in characterization. The earliest synthesis routes yielded membranes with a significant number of defects in them. So, you're not going to see the real or intrinsic performance if you have a number of micron-size holes or cracks. You see a lot of nonselective transport through micron holes or cracks in the membrane. We had to devise a way of figuring out what the defect flow was and correct our transport data for it. We invented a modality called perm porosimetry, where you use capillary condensation of a vapor introduced in a carrier stream to block helium transport, and watch the helium flow through the membrane decrease as the activity of the vapor is increased. As you continue to increase the activity of the vapor you move up the capillary condensation curve and block selective transport through the small zeolite pores. This isolates the contribution to the transport from large nonselective defects. Initially we used the Wicke-Kallenbach method to measure multi-component membrane transport. This method is commonly used by academics and introduces a vapor in the feed to a membrane using a carrier gas sent through a bubbler.
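The physics behind the perm porosimetry measurement is the standard Kelvin equation for capillary condensation: at a given vapor activity, pores below a critical radius fill with liquid and stop conducting gas. The sketch below uses rough n-hexane-like property values for illustration only (the transcript does not say which condensable vapor was used), but it shows why sub-nanometer zeolite pores block at low activity while micron-scale defects stay open until the activity approaches 1.

```python
import math

def kelvin_radius(activity, gamma, molar_volume, temp_k):
    """Largest pore radius (m) that capillary-condenses at a given vapor
    activity (Kelvin equation, cylindrical pore, zero contact angle)."""
    R = 8.314  # gas constant, J/(mol K)
    return -2.0 * gamma * molar_volume / (R * temp_k * math.log(activity))

# Illustrative values roughly like n-hexane at room temperature (assumed):
GAMMA = 0.018    # surface tension, N/m
V_M = 1.3e-4     # liquid molar volume, m^3/mol
T = 298.0        # K

r_half = kelvin_radius(0.5, GAMMA, V_M, T)     # ~nm: micropores fill early
r_near1 = kelvin_radius(0.999, GAMMA, V_M, T)  # ~micron: defects fill only near a = 1
```

So as the activity is ramped up, the selective micropore flux is extinguished first, and the residual helium flow measures the nonselective defect contribution that the transport data must be corrected for.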
Early screening studies had a mixture of xylene isomers in the bubbler and a nitrogen carrier gas, and a helium stream was used to sweep the vapor permeating through the membrane into a gas chromatograph. As we started to go to higher pressures and tested different carrier and sweep gases we learned that the permeances were moving all over the map. Results depended upon what carrier gas was used and were anomalously sensitive to the transmembrane pressure. This shouldn't have happened. We eventually determined that this was due to a mutual diffusion effect. Diffusion is driven by a difference in chemical potential of gases on opposite sides of the membrane. As gases transport across a membrane into a support, there will be a mutual diffusion effect that puts mass transfer resistances in the way. For high flux membranes the support mass transfer resistance can significantly alter the performance. So, my comment is there's a significant number of people who academically use this method without realizing the pitfalls that you get. You've got to be very careful. We published ways of mitigating this problem, as well as other tricks to accurately measure selective permeances for high flux membranes. As we moved down the development path, finally we did devise synthesis techniques that produced defect free membranes and got the performance we were hoping for. The time and effort that it had taken to get to this point in the development was larger than anticipated at the beginning. We then began looking at the next step, scale-up, and saw how big it would be and the resource commitment that would be needed. I think everybody agreed that it was about the right point to stop this effort.
There were several parallel membrane efforts we did at the time. I'll pick a couple of them and try and get through them fast. One involved the invention of a pressure drop membrane reactor. It came out of a PhD thesis for which I was a co-advisor. This was a simple idea. Take a sol gel, cast it as a thin film on a porous support, and deposit a palladium or platinum catalyst in the 50-100 angstrom pore structure in the inorganic thin film formed from the sol gel. At high temperature these catalysts would be expected to be able to dehydrogenate ethane to make ethylene, or propane to make propylene. Even at high temperature, ethane and propane conversions are equilibrium limited, with conversions increasing with decreasing pressure. You don't build catalytic reactors at one bar because of the required residence time and the volume they get to. You build them at high pressure. However, with a nanoporous sol gel membrane you can have high flux with high pressure on one side and low pressure on the opposite face. The question becomes: would the conversion of an ethane or propane feed be closer to the thermodynamic limit for the high pressure side or the low pressure side? We found, and ultimately were able to model, the result that the conversion comes close to and could even exceed the conversion expected for the low pressure side. Exxon Chemicals spent a lot of time studying our results and performing research guidance economic calculations to determine if this reactor technology could displace the incumbent steam cracking technology. It came very close. It was a little bit better, but it wasn't advantaged enough because of all the development costs. The reason I chose to talk about this was the physics that came out of the PhD thesis that studied gas transport through sol gel membranes in great detail.
One of the first studies in the thesis was high temperature pressure driven transport of single component non-reactive gases through the 50-100 angstrom pore structure in the sol gel membranes. One would expect the transport to match a simple Knudsen diffusion model with the permeance scaling as the reciprocal of the square root of the molecular weight of the gas. That is what's supposed to occur. When we looked at it, we found that the scaling was close but it was definitely not right. Well, we were kind of excited by it, and finally made our own model and came up with what we thought was the answer: the molecules were inelastically colliding with the walls and hanging up for several picoseconds per collision. We were almost ready to write it up, and then performed an extremely thorough literature search, and discovered that Langmuir had discovered this in the 1930s. It had just been forgotten. It's a small point in that you too can publish in the Physical Review, and be a Nobel laureate, and get yourself forgotten in all the textbooks. I kid you not, we could not find a textbook reference to it since the time it was published. Maybe it is there but we just couldn't find it. Another type of study examined multi-component transport through the support structure, which has micron sized holes, using a variation of the Wicke-Kallenbach method. We flowed an argon-oxygen mixture on one side, and nitrogen on the other side of the membrane. We were able to do an argon-oxygen separation with high selectivity with one micron pores! From a molecular perspective you could drive a truck through those micron sized pores. How did you get molecular selectivity through something like that? Well, we already were wise to the idea that this might be due to mutual diffusion effects. We went back and used the governing equations to confirm that this was indeed due to mutual diffusion. We named the effect atmolysis.
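The Knudsen baseline that the thesis data was checked against is a one-line scaling law: single-gas permeance through pores much smaller than the mean free path goes as the reciprocal square root of molecular weight. The sketch below computes the ideal ratios; the "close but definitely not right" observation means measured ratios deviated from these values because molecules stick briefly to the pore walls, the effect Langmuir had already described.

```python
import math

# Knudsen diffusion: permeance scales as 1 / sqrt(molecular weight).
# Standard molecular weights in g/mol:
MW = {"He": 4.003, "N2": 28.014, "Ar": 39.948}

def knudsen_ratio(gas_a, gas_b):
    """Ideal Knudsen permeance ratio Q_a / Q_b = sqrt(M_b / M_a)."""
    return math.sqrt(MW[gas_b] / MW[gas_a])

he_over_n2 = knudsen_ratio("He", "N2")  # ~2.65 if pure Knudsen scaling held
he_over_ar = knudsen_ratio("He", "Ar")  # ~3.16
```

Systematic deviation from these ratios in clean single-gas data is exactly the kind of small anomaly worth chasing: here it pointed to inelastic wall collisions with picosecond residence on the wall.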
Again, we decided we'd look through the old literature and once again found that this had also been discovered (back in the 1940s). Not the same separation, but close to it. A lot of history can be buried and is often hard to find.
Another interesting thing we came up with was a surface catalyzed hydrogen spillover membrane. Kind of a simple structure. Take polyethylene, actually a garbage bag, coat it with a physical vapor deposited (PVD) platinum island catalyst film, and then cover it with a thin layer of silicon monoxide. So, what can you do with this structure? Well, our idea was to flow hydrogen on the polyethylene side and let it diffuse through the polyethylene to the platinum. We postulated the hydrogen would be expected to activate on the platinum catalyst and spill over through the silicon monoxide film to the outer surface where it would be able to drive a hydrogenation reaction. Hydrogen spillover physics from platinum catalysts to the surface of a support was something other scientists who had been in our lab were famous for. What was new was the hypothesis that the activated hydrogen would permeate through the silicon monoxide film as atomic hydrogen to the opposite side where it would be reactive enough to hydrogenate an olefin. In our initial tests using cyclohexene as the olefin we were able to hydrogenate it to cyclohexane. The question was: was cyclohexene somehow getting back to the platinum in a way that we didn't know? So, the next thing we did was to add thiophene to the cyclohexene feed. What's thiophene? Thiophene is a sulfur compound and is almost a quantitative poison for platinum, killing the catalytic activity. The amount of thiophene we added would have killed 10^6 to 10^9 times more catalytic sites than we had. I think we nuked it, yet the hydrogenation went on. So, we were proud of that one. It was very interesting. We did not pursue this because here we did our own zero order economics, and we came up with the area needed. What we were getting was one reaction per platinum site per second, which is not bad, but that's not fast enough when you had to scale this up as a membrane, not as a high surface area catalyst.
Catalysts can have 100-1000 square meters of surface area per gram. As such, a gram of catalyst can have 100-1000 times as much catalytic activity as a square meter of a surface catalyzed membrane, so there was a little bit of mismatch.
Next thing I want to get to is some work we did with novel reactors. Today this would be called process intensification, but what we were doing was asking what would happen if we took things to physics limits: really high temperatures and short residence times. What could you do with a reactor that would dramatically shrink its footprint? Can you make really small compact reactors? What conversion could occur with short residence times? Milliseconds or microseconds. We also wanted to do this on an industrial scale. So, we started with an idea. How could we do this?
What has millisecond residence times and gets hot? A diesel engine. And don't worry about putting fuel in; just compress the gas and it will get hot. Changing the compression ratio from ~10 to 1 to >100 to 1 will produce temperatures of ~3000 Kelvin for adiabatic compression of a monatomic gas. I won't say you'll get there because of heat transfer problems, but let's leave that alone. So, we went about using a motor driven high compression diesel engine to study a large number of gas phase reactions that mechanistically proceed by free radical processes. We looked at a whole bunch of reactions and they all worked. We could do ethane to ethylene, propane to propylene, and at a high temperature we could even take methane to acetylene. We turned around and did a whole series of exothermic reactions. We put oxygen in the feed and reacted it with methane and ethane, performing partial oxidations. With methane we began to optimize yields of the CO, CO2, and hydrogen products that are referred to as synthesis gas. Synthesis gas is industrially used for a lot of things, including the making of what today we refer to as blue hydrogen. Also, changing the feed, we found you could make synthesis gas containing mixtures of acetylene or ethylene or propylene, which we referred to as multicomponent synthesis gas. Continuing on through this journey, we also converted bitumen to cracked products and found chemistry to add carbon donated by methane into bitumen. This whole suite of reactor technologies we named RONCO. Why would you call it RONCO? Well, in my era, if you watched late night TV, you used to see these ads where the pitchman says "It slices, it dices. Buy one get three free." The company was called RONCO. We thought the name applied to this thing because it did every free radical chemistry we wanted.
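The compression-heating estimate is the textbook reversible adiabatic relation T2 = T1 * r^(gamma - 1). A quick sketch for a monatomic gas (gamma = 5/3) starting at an assumed 300 K is below; note the ideal formula overshoots what a real motored engine reaches, since the heat transfer losses the transcript mentions pull the actual temperature down toward the ~3000 K figure quoted.

```python
def adiabatic_temperature(t1_k, compression_ratio, gamma):
    """Ideal-gas temperature after reversible adiabatic compression:
    T2 = T1 * r**(gamma - 1)."""
    return t1_k * compression_ratio ** (gamma - 1.0)

GAMMA_MONATOMIC = 5.0 / 3.0

# Assumed 300 K intake, purely illustrative:
t_conventional = adiabatic_temperature(300.0, 10.0, GAMMA_MONATOMIC)   # ~1390 K at 10:1
t_high_ratio = adiabatic_temperature(300.0, 100.0, GAMMA_MONATOMIC)    # ~6460 K ideal at 100:1
```

Even with real-world losses, pushing the compression ratio an order of magnitude higher moves the gas into the regime where free radical chemistry runs on millisecond timescales.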
One of my collaborators, Mike Matturro, who was a physical organic chemist, identified a wide variety of uses for the multicomponent synthesis gas product slates and drove us to launch a development project in chemicals for a new line of plasticizer alcohols. I will just follow our development pathway for making synthesis gas at an industrial scale. We began working with mechanical engineers looking for ways of scaling up our RONCO research reactor. They actually came up with ways to scale it up by approximately a thousandfold using the marine diesel engines that power ships. Literally, big industrial scale stuff that could be engineered. We were pretty proud of it at the time, and then we started to think some more and realized we could do better.
Do better how?
The improvement was that we could shorten the residence time even more and change the temperature profile quite a bit. The way of doing this was to use a jet engine technology instead of a diesel engine technology. So, literally, we went about the redesign of the combustor section in a jet engine to make it into a chemical reactor. It was the burner section we were redesigning, and we were not incorporating a turbine in the exhaust. I won't go through all the considerations and the hydrodynamics. At this point there was a bigger team and I was less involved. We built our lab scale jet engine reactor with all the hydrodynamic mixing tricks that go into a jet engine. We successfully ran it and demonstrated high yield synthesis gas. From an engineering perspective this was a significant improvement over the diesel reactor because the productivity just went up, with a concomitant decrease in reactor footprint. At that point we went back and reflected one more time and realized we could do better yet. I think you can see it coming. We transitioned from a jet engine technology to a rocket engine technology, with the team growing and my involvement becoming less. We started with our own design to convert a rocket’s combustor into a chemical reactor using the hydrodynamic principles employed in the rocket. Ultimately we partnered with a company that made rocket engine combustors with fuel and oxidant distributed through lithographically patterned plates that were diffusionally bonded together. With this we fabricated a custom mixer, tested it in the lab, and showed that it did the chemical reactions we wanted. It actually went into commercial development and became a unit in the Baton Rouge Refinery.
What was viable about it, Harry? What allowed it to be adopted at this level?
I said it became a unit in the refinery. I should have said it was a demonstration unit. It was a very large unit; almost a full feed demonstration. So, it's not like a tinker toy. This was actually on the landscape of the Baton Rouge Refinery; you could actually see this unit from far away as you came in the gate. This was part of a development where we were hoping to do gas conversion as part of the strategy for the company to commercialize methane reserves. What happened was our strategy required two innovations at the time. This rocket engine reactor was one, and the other was a brand new way of converting the synthesis gas product with a fluid bed Fischer-Tropsch reactor that was invented by Eric Herbolzheimer, one of my colleagues. Our company finally got to the point of making the decision that you cannot afford to do two major innovations at once. This decision was driven by the cost of running this major development project and the perceived schedule delay to complete two major innovations simultaneously. It's one lesson in innovation theory. You come to the point where, yeah, we've got a viable path, but we can only chew on one thing at a time. It was a tough decision and hurt at the time, obviously, but you can respect the decision. In a development program you often have to make some very tough calls, and without the Fischer-Tropsch conversion process, you have nothing. I want to start talking about innovation in a different way. Shortly, we're going to talk about a discontinuous journey. The journey I have been discussing into the land of high temperature short contact time reactors was a continuous burst of innovation over many years. It was then put on hold, with a derivative of the idea emerging in the future. We're going to go back to this thread again with a new embodiment, later when I discuss the dual challenge. I want to come back to it, but I'm trying to look at the clock and think how I'm going to cut something short that's near and dear to my heart.
I do want to go through it fast, but I have to really accelerate. Can we go ten minutes over? Is that possible?
Absolutely, we can go beyond that. Harry, don't hurry yourself.
Okay. This may be boring to you. I'm sorry. Anytime you want me to stop —
Not even a little bit. This is great stuff.
Okay. I also have other sides to me; I know quite a bit of geoscience and have worked in geoscience areas within the corporation. This will be a story about something that we did not start whatsoever. It was an idea that came from Exxon’s Upstream Research Company and was proceeding into early field testing before we even got involved. It was called coupled waves. The idea behind coupled waves was to use electrokinetic coupling to convert an electromagnetic wave into a seismic wave. Kind of an interesting idea. The embodiment being tested placed long electrodes on the surface of the Earth to drive oscillating currents and hence electromagnetic waves deep into the ground. Electrokinetic coupling then produces a seismic wave that can propagate back to the surface of the Earth. With long duration data acquisition, data stacking to reduce noise, and advanced seismic processing, the hope was that you could get an image of the resistivity in a reservoir with seismic resolution. Why do you care about a resistivity image? Because resistivity logging generally tells you if there's oil or water in the formation. So, you care about that. Full field scale tests were being done with electrodes that were hundreds to thousands of meters long that injected hundreds of kilowatts to several megawatts of a Golay sequence of current pulses. Looking at this process more microscopically, current pulses flowed through the brine filled pore spaces of the rocks in the ground. They propagate as an electromagnetic wave from the surface where the current is injected. Locally in the pore space of the rock, the electrokinetic coupling coefficient between the oscillating current and the brine in the pores causes the fluid (brine) to move. That's what electrokinetic coupling does. It basically says if I pass a current through a fluid in a porous medium, I can cause it to move. That's classical electrokinetics. The motion of the fluid couples to the rock matrix and launches a seismic wave.
In more scientific terms this coupling of the fluid motion to the rock matrix is known as poro-elasticity or Biot theory. As such, electrokinetics and poro-elasticity are responsible for the generation of the seismic wave at the surface. The amplitude of the returning seismic wave was expected to be very small, but possibly detectable. In the initial testing, with targets as deep as ~1,000 feet, it actually worked and imaged gas pockets. Pretty cool. There was one catch. They were getting surprisingly large signals, much larger than they were predicting. How'd that come about? They asked us to take a look at it, and that is how we got involved. First thing we thought was, well, maybe the electrokinetic coupling coefficients of rocks were wrong, because nobody had ever really studied the electrokinetics of rocks. So, we got a lot of types of rocks, studied them, and we found some anomalies that weren't quite right compared to what people thought should happen, but they wouldn't explain what had occurred in the field. Thus far, we were empty. We then took on the hard problem, which was to see in a laboratory setting the rock matrix motion due to poroelasticity. In the laboratory, with the low voltages that can be applied without generating gas or doing other things, we expected rock matrix motions of ~10 femtometers. That's 10 × 10^-15 meters, or equivalently a few nuclear radii. To measure that, we applied a small AC voltage across a core sample and used a fancy AC fiberoptic interferometer to record the oscillatory motion on one rock face (i.e., the rock matrix motion). To get an acceptable signal to noise ratio we acquired data and signal averaged for several days, and in some cases a week. We aren't going into all the technology that was used, but, lo and behold, we found a rock matrix motion 10-100 times greater than expected. This looked very much like what was being reported from the field.
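Why the days-long acquisitions? For uncorrelated noise, signal averaging improves the signal-to-noise ratio as the square root of the number of averaged records. The sketch below uses entirely hypothetical noise numbers (the transcript gives none) just to show how a femtometer-scale signal can demand days of stacking:

```python
import math

def averages_needed(signal, noise_per_record, target_snr=3.0):
    """Records to average so SNR reaches target, assuming SNR grows as sqrt(N)
    for white noise."""
    return math.ceil((target_snr * noise_per_record / signal) ** 2)

# Hypothetical numbers for illustration only: a 10 fm displacement signal
# against 10 pm of single-record interferometer noise.
n = averages_needed(signal=10e-15, noise_per_record=10e-12)

# At an assumed 10 records per second, that is on the order of days of
# continuous averaging, consistent with the several-day runs described.
days = n / 10.0 / 86400.0
```

The square-root law is unforgiving: every factor of 10 in noise relative to signal costs a factor of 100 in averaging time.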
The answer came from one of my collaborators, Eric Herbolzheimer, who showed that this was due to the boundary conditions for fluid flow in and out of the sample. Examples of fluid flow boundary conditions are squirt flow, where fluid flows out of the face of the rock, or no squirt, where fluid is sealed into the core and the fluid pressure can build up at the end of the core. The theory showed that if the boundary conditions are symmetric on all sides, in most cases you will have very little response, with only the momentum of the fluid coupling to generate a rock matrix response. However, with the right thicknesses and asymmetric boundary conditions, you get a pressure buildup at one side and a large response. We referred to this as a “breathing rock” effect. The idea that rocks could breathe was a very cool effect that explained both the lab and field data.
I want to start a transition, and it's a transition that takes us to many of my research efforts going on today. Around 2000, my research efforts start to become more and more focused on CO2, particularly CO2 separations.
My research on CO2 capture was an evolutionary path. It starts with the problem of economically producing the large high CO2 content gas fields found throughout Southeast Asia. Often these offshore gas fields contain ~70% CO2. Our thought process involved answering the question: what could you do about this CO2? The project we devised was aimed at separating CO2 on an offshore platform as gas came out of the reservoir and re-injecting the separated CO2. Conventionally, you can do it with cryogenic separation, but the kit is really big and is not practical in an offshore environment. People had looked at it for decades before we got involved, so we said, are there any new science or technology options that could enable this? We went about evaluating a variety of process concepts using advanced separation technologies, built screening models, and evaluated zero order economics. Eventually we settled on a concept built using fundamental data to predict how membranes that had for the most part never been synthesized would operate to separate CO2 at high pressures. The models were built on molecular transport theory that predicted the high pressure transport behavior of a membrane using a theoretical formulation that for the most part had not been discussed in the literature. Under high pressure conditions the loading in a membrane is well outside the Henry's law limit. Most of the literature had addressed models of transport in the Henry's law limit; however, industrially you're not in the Henry's law limit. A Henry's law treatment of transport can be accurate at low pressure, in the low loading limit of the adsorption isotherm. However, a different treatment is needed for high pressure transport problems. We developed the needed treatment of high pressure membrane transport and used it to predict the selectivity and permeance of a variety of inorganic membranes from fundamental measurements of adsorption isotherms and transport diffusion coefficients.
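The Henry's law point can be made concrete with a single-site Langmuir isotherm, a standard description of adsorption in microporous materials (the transcript does not specify which isotherm model was used, and the parameters below are hypothetical). Henry's law is the straight-line, low-pressure limit of the isotherm; at high loading the two diverge badly:

```python
def langmuir_loading(pressure, q_max, b):
    """Langmuir isotherm loading: q = q_max * b * P / (1 + b * P)."""
    return q_max * b * pressure / (1.0 + b * pressure)

def henry_loading(pressure, q_max, b):
    """Low-pressure (Henry's law) limit of the same isotherm: q = q_max * b * P."""
    return q_max * b * pressure

# Hypothetical parameters for illustration: q_max in mol/kg, b in 1/bar.
Q_MAX, B = 5.0, 0.05

low_p, high_p = 1.0, 200.0  # bar; ~200 bar is roughly the 3000 PSI regime
err_low = henry_loading(low_p, Q_MAX, B) / langmuir_loading(low_p, Q_MAX, B)    # ~1.05
err_high = henry_loading(high_p, Q_MAX, B) / langmuir_loading(high_p, Q_MAX, B)  # ~11x
```

A transport model linear in loading that is off by only a few percent at one bar can overpredict by an order of magnitude near saturation, which is why a dedicated high-pressure treatment was needed before the membrane predictions could be trusted.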
Our predictions were well above the upper bound limits for the tradeoff between selectivity and permeance that Robeson had illustrated for all kinds of existing polymer membranes. Plots put together by Robeson implied that there is a line bounding the tradeoff between selectivity and permeance. Our calculations said that a variety of inorganic membranes would cross the line by a whole bunch. It was enough to launch a high risk development project. The chances of it succeeding would not be great, but what you're after could be worth the trip. The Upstream Research Company agreed to it and handled it as a breakthrough project, a high risk, high reward research and development project. Initially there were several challenges you had to get over. There were no gas separation membranes that had been made for high pressure operation. Several of the process concepts required operating pressures of 3000 PSI. Concomitantly, no inorganic membrane had been tested under these extreme conditions. We started by synthesizing and exploring the transport through three different types of inorganic membranes: silica, carbon, and zeolites. We gradually rejected two of the three. The amorphous silica was rejected because synthesis and transport studies had advanced to the point at which questions about fouling and degradation had to be addressed. We found that hydroxyls present in the amorphous silica membranes led to irreversible degradation that couldn't be stopped. So, we published a lot of papers around that. In collaboration with the Japanese ceramics company NGK, we settled on something that looked viable: high silica crystalline DDR membranes. Our choice of DDR was a result of a deep scientific understanding of the transport physics and the recognition of how it would offer an advantaged opportunity. DDR is a zeolite that is sometimes referred to as a clathrasil structure.
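A Robeson-type plot puts permeability (or permeance) on one log axis and selectivity on the other, with the empirical upper bound appearing as a straight line of the form alpha = beta * P^(-lambda). Checking whether a predicted membrane "crosses the line" is then one inequality. The coefficients below are purely illustrative placeholders, not Robeson's published fit for any gas pair:

```python
def above_upper_bound(permeance, selectivity, beta, lam):
    """True if a membrane's (permeance, selectivity) point lies above an
    empirical Robeson-type trade-off line: alpha_bound = beta * P**(-lam)."""
    return selectivity > beta * permeance ** (-lam)

# Purely illustrative trade-off coefficients (assumed, not a published fit):
BETA, LAM = 100.0, 0.4

# A point under the line (polymer-like) vs. one well above it (the kind of
# prediction that justified the breakthrough project):
polymer_like = above_upper_bound(permeance=10.0, selectivity=20.0, beta=BETA, lam=LAM)
inorganic_like = above_upper_bound(permeance=100.0, selectivity=60.0, beta=BETA, lam=LAM)
```

The strategic logic in the transcript maps directly onto this test: predictions that clear the empirical bound "by a whole bunch" are what make a high-risk synthesis program worth funding.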
We did a lot of work studying the hydrothermal synthesis of DDR as well as fundamental measurements to characterize adsorption isotherms and transport diffusion coefficients of DDR powders that were produced. Technology to synthesize DDR membranes was developed internally as well as with another company. Initially the DDR membranes had a lot of defects and defect flow had to be characterized and corrected for using several of the methods that I have mentioned. For high flux membranes corrections had to be made for mass transfer resistance in the hydrodynamic boundary layer on the feed side as well as that in the porous support on which they were synthesized. Ultimately the testing program showed that it was possible to make low defect density high flux DDR membranes enabling meaningful high pressure testing up to pressures around 3000 PSI. The testing results were in reasonable agreement with the model. There was a little deviation which we attribute to how the diffusion coefficient changed at high pressure but overall the agreement was impressive. They were pretty close to the commercial target we were looking at. The issue was that there was a significant scale up required, and at that point the company was making a lot of discoveries in natural gas that did not have the high amount of associated CO2. They had lower development costs, so there was a commercial decision to stop the program, which is a legitimate call. We were a long ways off yet. We had gotten to the point of creating inorganic membrane technology in the lab that had never existed and whose performance tended to match our original models.
I would like to transition to another idea where we worked with Georgia Tech to take carbon capture head on. I will begin by trying to explain how this concept worked. It is one idea amongst a larger family of ideas to address CO2 that I have been involved with.
Harry I want to ask also — again, this is your perspective. When you start to take on carbon capture head on, to what extent is this a compelling scientific program to work on, and to what extent is this really where the company is headed, and you're doing your part to get there?
I think it is both. To me, it's compelling from a dual challenge point of view, and from lots of points of view the company is quite literally taking on many challenges in carbon capture. In the research phase the lab has been exploring a wide variety of separation technologies ranging from various types of adsorption and absorption processes to electrochemical separations including fuel cells. I would like to focus on an idea that a gentleman named Ron Chance and myself collaboratively worked on with Bill Koros and Ryan Lively from Georgia Tech. Ron Chance and I had collaborated on the high pressure natural gas separations we discussed and for us the idea grew out of that work. I'll try to explain the idea and hopefully I can do it in a short fashion without a cartoon. It starts with the idea of a hollow fiber contactor that looks very much like a hollow fiber membrane module used for gas separations. The difference is that it is impermeable and one side contains an adsorbent that can remove CO2 from a flowing gas stream.
As such, the concept for the contactor was a bundle of impermeable 100-1000 micron diameter hollow fibers. In this configuration we can pass water or steam on one side, and flue gas on the other. Due to the impermeable wall the two don't contact each other. On the side where the flue gas will flow we required a coating of a selective, high capacity CO2 adsorbent, such as a zeolite. There are many other adsorbent options; metal organic frameworks would be another, but they weren't around when we were doing this, so we wouldn't have been able to use them. In use there is an adsorption step in which flue gas is passed on one side of the hollow fiber, where the adsorbent loads up as it removes CO2 from the flue gas. This adsorption step is stopped before the adsorption wave breaks through and deleteriously affects CO2 capture efficiency. The adsorbed CO2 is released by a temperature swing desorption in which you send pressurized hot water or steam through the opposite side of the module, driving a thermal wave along the length of the fibers. As one end gets hot, it desorbs the CO2, and basically it rolls an adsorption wave down the column. It pushes down the column, just like a chromatographic column, and you're going to roll it right out of the contactor. You're going to create a thermal wave that literally heat exchanges and moves down the fiber bundle. So, while you're sending hot water or steam in one end, you're pushing cold water out the other end. When you have desorbed the CO2 and pushed it out, you can send a little bit of cold water into the front of the column and start the next adsorption step at the cold end. You basically cycle this around. It's a cyclic temperature swing adsorption process that in principle can capture CO2 from flue gas. This idea was simultaneously arrived at by the Georgia Tech team and us (myself and Ron Chance). It could be because we were talking to each other. It could be that it was just the right time.
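The thermal wave described above can be caricatured with a toy one-dimensional model: hot water entering one end exchanges heat with the solid, so a temperature front rolls down the bundle more slowly than the water itself. Everything here (grid size, temperatures, exchange coefficient) is invented for illustration and is not the actual contactor design:

```python
# Toy 1D model of a thermal wave in a hollow fiber bundle: hot water
# (120 C) is pushed in at the inlet of a cold (30 C) bundle; fluid and
# solid exchange heat each step. All parameters are illustrative.

n = 200                       # grid cells along the fiber bundle
T_fluid = [30.0] * n          # water temperature per cell, deg C
T_solid = [30.0] * n          # fiber/adsorbent temperature per cell, deg C
h_dt = 0.25                   # heat-exchange coefficient times time step

for step in range(150):
    # Plug-flow advection: shift fluid one cell, feed hot water at inlet.
    T_fluid = [120.0] + T_fluid[:-1]
    # Fluid-solid heat exchange; the heated solid is what desorbs CO2.
    for i in range(n):
        dT = h_dt * (T_fluid[i] - T_solid[i])
        T_fluid[i] -= dT
        T_solid[i] += dT

# The thermal wave lags the water front: the inlet end is hot while the
# far end of the bundle is still cold.
print(T_solid[0] > 100.0, T_solid[-1] < 35.0)
```

The qualitative behavior is the point: because the fluid gives up heat as it travels, the temperature front moves slower than the water, which is what lets the cycle push cold water out one end while heating the other.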
We did however get together and collaboratively work on it. One of Bill Koros’s bright students, a gentleman named Ryan Lively, who's now a professor at Georgia Tech, took on the fabrication of this device. We kind of knew that it was a herculean effort, especially for a PhD thesis. Without even worrying about the chemical engineering aspects, this was an extreme material science challenge. But in the end, let's just say, credit to Ryan, he made a prototype device.
It worked with dry flue gas. We knew we did not have an adsorbent at that time that would work on wet flue gas. The only adsorbents we had that we thought would work were basically not going to perform well in wet environments. Ryan did successfully demonstrate the whole idea with this limitation. What's happened over time is Georgia Tech has gone back and reworked this. They've taken a derivative of this idea to the National Carbon Capture Center. Recently MOF adsorbent materials have advanced to the point that they might enable the use of this concept in wet flue gas, and one of my colleagues is exploring this in a collaborative program. These are examples of discontinuous innovation.
Another discontinuous innovation story is the rebirth of the RONCO reactor concepts. A derivative of the RONCO reactors is currently being explored in the lab to make syngas for process concepts that have low carbon footprints. This rebirth is due to one of the original RONCO team members, Frank Herschkowitz, and I am only a small contributor to it. Scientifically, this rebirth comes from marrying the understanding of high temperature, short contact time chemistry gained from the RONCO reactor studies with a new physical understanding of thermal waves in swing adsorption processes. There are also new drivers: the importance of mitigating CO2 emissions and the possible role hydrogen may play in the energy mix in the future.
Yet another type of discontinuous innovation story involves a rebirth with a very significant modification of the original idea. As I mentioned, we had originally developed a DDR membrane to remove CO2 from raw natural gas. We began thinking about rapid cycle swing adsorption processes and estimated how DDR would behave in a pressure swing adsorption process for bulk removal of CO2 from raw natural gas. It looked like it had potential, and we partnered with our Upstream Research Company to begin exploring this. The exploration led to a broad suite of ideas to take contaminants out of raw natural gas with all kinds of rapid swing adsorption cycles involving not only pressure swing, but also temperature swing, purged pressure swing cycles, and a variety of hybridizations of these. The team grew as we explored removal of different contaminants such as CO2, H2S, and water; different mechanical valve and hardware options; and potential pathways by which we might commercialize the suite of technologies being considered. Some advantages of this suite of technologies come from the fact that these ideas provide a significant process intensification, reducing space and weight footprint, and in many cases the CO2 footprint. From a science perspective that does not fully credit the engineering challenges, the essence of what's been going on has involved devising ways of finding molecularly selective, fouling resistant adsorbents and structures that have great mass transfer characteristics, incorporating them into cycles, and marrying them with mechanical valves. It's involved quite a bit of experimental work and development because we have gone through a lot of candidate processes and gas purification targets over time. In particular this has involved synthesizing a large number of candidate adsorbents and making a large number of measurements to characterize their diffusion coefficients and their competitive adsorption isotherms.
An advance in computational chemistry that happened during this development changed a significant fraction of the experimental workflow. One of my colleagues, Peter Ravikovitch, has driven our application of this and has moved us to the point that for certain classes of adsorbents, the computational chemistry is so accurate that we no longer have to measure equilibrium properties. We of course experimentally validated this approach with well characterized model systems before we stopped routinely measuring equilibrium properties. In many cases the computational chemistry approach even provides a good model of kinetic behavior. For fouling behavior, you still do an experiment because you can't predict fouling. Nobody's gotten there yet, but you can do a lot of stuff. The other thing that changed during this development effort, that I have a little role in but others have carried much farther, is process simulations. Advances in the ability to simulate processes have taken us to the point that we now do model guided scale-up, predicting how field scale demonstrations and commercial units will run. So, you start from fundamental isotherms, transport diffusion coefficients, and mass transfer equations, and you can basically solve the complete differential equation set to predict how it's going to behave and scale up. That's a world of difference from where life has been; we can now reliably do model guided scale-up. Right now, the company has formally announced the construction of a demonstration unit that is the culmination of the work. The announced unit is incorporated into a gas processing plant and is testing rigorous (sub ppm level) dehydration. I can also say without going farther that variants of these ideas are under consideration for flue gas capture.
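The "solve the complete differential equation set" step can be caricatured with a toy breakthrough simulation: a hypothetical Langmuir isotherm plus a linear driving force (LDF) uptake model, stepped along a plug-flow column until the adsorption wave reaches the outlet. All parameters here are invented for illustration, not a real adsorbent or column:

```python
# Toy sketch of model-guided adsorption column simulation: Langmuir
# equilibrium + linear-driving-force kinetics in a 1D plug-flow column.
# Parameters are illustrative only.

def langmuir(c, q_max=5.0, b=2.0):
    """Equilibrium loading for gas-phase concentration c (arbitrary units)."""
    return q_max * b * c / (1.0 + b * c)

n, dt, k = 100, 0.05, 0.5        # cells, time step, LDF rate coefficient
c = [0.0] * n                    # gas-phase concentration per cell
q = [0.0] * n                    # adsorbed loading per cell

breakthrough_step = None
for step in range(4000):
    c = [1.0] + c[:-1]           # plug flow: feed at unit concentration
    for i in range(n):
        # LDF: uptake proportional to distance from local equilibrium.
        dq = k * dt * (langmuir(c[i]) - q[i])
        q[i] += dq
        c[i] = max(c[i] - dq, 0.0)   # gas depleted by uptake
    if breakthrough_step is None and c[-1] > 0.05:
        breakthrough_step = step     # adsorption wave reaches the outlet

# Breakthrough happens, and much later than pure advection (100 steps)
# because the adsorbent retards the wave.
print(breakthrough_step is not None and breakthrough_step > n)
```

In the real workflow the isotherms and diffusion coefficients fed into such models come from the measurements, and increasingly the computational chemistry, described above; the toy version just shows why the wave speed, not the gas velocity, sets the cycle timing.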
I would like to head into the last two vignettes that point to work being done on the dual energy challenge. These are basically related to the question of how you supply more energy to the world with lower energy intensity and lower CO2 footprint. One, which we published in Science, is hydrocarbon reverse osmosis. This is a collaboration with Georgia Tech where you literally try to use the idea of what you do with water. So, you're familiar with reverse osmotic separations of water; well, you can do them with hydrocarbons. The energy intensity of hydrocarbon reverse osmotic separations is expected to be significantly less than that of the distillation processes used today. To demonstrate hydrocarbon reverse osmotic separations we went back to our old xylenes example and showed the reverse osmotic separation of xylene isomers. We've also published details of how you would do a Stefan-Maxwell membrane transport model for this industrially interesting regime. So, now there's a full blown treatment out there; it's more or less the models we've been using for the last 30 years. They're now formally out there.
Long time coming.
Yes, long time coming. For the papers Georgia Tech independently derived the same equations that we had been using. From my point of view one of the interesting aspects is the overlay of our model with existing treatments of aqueous reverse osmosis. The existing treatments required the pressure to jump, potentially by several thousand PSI, at the surface of a membrane, which from a mechanics standpoint is unphysical. Even though they employed a nonphysical mechanical condition, the existing aqueous reverse osmosis models were good enough for the design of the desalination plants used throughout the world. So to an engineering approximation for aqueous RO they were good enough; however, we think that our treatment is required to understand the reverse osmotic separations of hydrocarbon mixtures with a large number of molecular species present.
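For readers unfamiliar with the formalism, the generic textbook form of the Stefan-Maxwell (Maxwell-Stefan) relations for permeation through an adsorbed phase looks schematically like this; these are the standard equations, not necessarily the exact formulation in the published papers:

```latex
% Schematic Maxwell-Stefan relations for species i in a membrane: the
% chemical potential gradient is balanced by friction with the other
% permeating species and with the pore wall.
-\frac{\theta_i}{RT}\,\nabla\mu_i
  \;=\; \sum_{j\neq i}\frac{\theta_j N_i - \theta_i N_j}{q_{\mathrm{sat}}\,D_{ij}}
  \;+\; \frac{N_i}{q_{\mathrm{sat}}\,D_i}

% For reverse osmosis the driving force includes the applied pressure,
% and permeation requires exceeding the osmotic pressure difference:
\Delta\mu_i = \bar{V}_i\,(\Delta p - \Delta\Pi),
\qquad
\Pi = -\frac{RT}{\bar{V}_i}\,\ln a_i
```

Here $\theta_i$ is fractional loading, $N_i$ the flux, $q_{\mathrm{sat}}$ the saturation loading, $D_{ij}$ and $D_i$ the exchange and wall Maxwell-Stefan diffusivities, $\bar{V}_i$ the partial molar volume, and $a_i$ the activity. The point made in the text is that, unlike the classical aqueous RO treatments, a formulation of this kind does not force an unphysical pressure jump at the membrane surface.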
The last couple of things I'll get to involve work on the other side of the dual challenge. These vignettes will focus on research we performed on new ways of finding hydrocarbon resources as a way to provide affordable energy to the world. The issue is that hydrocarbons are an ever depleting resource. As soon as you start producing a reservoir it’s on a decline curve. So, we have a desire to economically replace production that intrinsically dwindles, requiring a continual series of new discoveries. It was my pleasure to work with a lot of our great young physicists on a team led by Deniz Ertas to assess new or underappreciated physics that might improve the chance of success in exploration. We developed some interesting geophysical insights and a few novel exploration concepts. One of them, a magneto-seismic exploration concept, won a Society of Exploration Geophysicists (SEG) best poster prize. The idea was to inject a current into the ground and use the magnetic field of the Earth to create a Lorentz force on the brine in the pore space that carries the current. The J cross B Lorentz force exerted on the fluid couples poroelastically to the rock matrix and produces a seismic wave in return. We basically published a paper about how this would play out in a field setting and defined the limits of what you can do with it. Another geophysical insight we published involves the development of a way to quantitatively interpret one of the oldest of all geophysical measurements: induced polarization, which was originally discovered by Conrad Schlumberger when measuring the resistivity of rocks. Rocks that have an induced polarization effect do not act as resistors, but rather have a complex conductivity which allows charge storage when interrogated with time varying currents or voltages. This effect has been used to a small degree in well logging, and even though it was more than 100 years old it had only been phenomenologically explained with a Cole-Cole model.
In geophysical settings with brine filled pore structures it was also known that the effect was due to the presence of electrically conductive grains such as pyrite. We were able to develop and experimentally test a first principles theory that quantitatively related the induced polarization response to the number of grains present, grain size, and mineral type.
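The phenomenological Cole-Cole model mentioned above is simple to write down. Here is a sketch of the complex-conductivity form with invented parameter values (chargeability m, time constant tau, exponent c), not values fitted to any rock sample:

```python
# Sketch of the phenomenological Cole-Cole complex conductivity: a single
# relaxation characterized by chargeability m, time constant tau, and
# exponent c (c = 1 recovers a simple Debye relaxation). Parameter values
# are illustrative only.
import cmath  # noqa: imported to emphasize the result is complex-valued
import math

def cole_cole_conductivity(f, sigma_inf=0.1, m=0.2, tau=0.01, c=0.5):
    """Complex conductivity sigma(omega) at frequency f in Hz."""
    omega = 2.0 * math.pi * f
    return sigma_inf * (1.0 - m / (1.0 + (1j * omega * tau) ** c))

lo = cole_cole_conductivity(0.01)  # low frequency: charge storage blocks current
hi = cole_cole_conductivity(1e6)   # high frequency: conductive grains conduct fully

# Conductivity magnitude rises with frequency; that dispersion is the
# induced polarization signature.
print(abs(lo) < abs(hi))
```

The advance described in the text was replacing this curve-fitting description with a first principles theory that ties the response to the conductive grains actually present; the Cole-Cole expression is only the phenomenology it superseded.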
I realize that I have rushed through these recent contributions that are trying to push the needle in different ways to find reserves. I'm close on time, believe it or not, wow. There are a lot of other things I did not touch on in carbon mitigation, and there are other people. So, there's a lot of work.
But you're conveying that. The important thing to convey is that there is so much work going on toward mitigation.
Okay, so hopefully, at least, maybe I entertained you at some point. I don't know.
Well, Harry, I'm going to reserve my last question, and that is, I know you're going to retire sooner than later, so I want to ask because the dictum is that physicists never truly retire. In what ways do you want to remain involved in physics, even after you retire?
Okay, well I have very specific thoughts. We'll see where they all go. The first thing, of course — I will retire — that's a given — from Exxon. I don't want to retire from physics, per se. At the moment, I have a lot of energy around several topics. I've loved being able to act as an advisor offering suggestions and advice to a multitude of the research projects in the lab. I think it's been of value and I hope to find a way to continue helping other researchers. I've not really talked about that dimension of my life. There are so many interesting research projects in the lab that I wish I could take you through. So, we'll just say there's a lot of fascinating scientific work at the lab. It's exciting to me.
And there's no doubt, what you're saying, is that Exxon definitely sees itself as part of the solution.
Yes. Also, when I retire I have an interest in continuing to look into other solutions for greenhouse gas mitigation. I don't know if I can contribute to solutions in the future but would like to believe that I could. I like that belief, and we'll see where it leads and what can be done.
Like I said, Harry, if you're succeeding in these areas, this is good news for all of us. So, I want to wish you a lot of luck in that regard. It's been so fun speaking with you today, and I'm so glad we were able to put this together. I really appreciate it.
Well, listen, I thank you for considering me for this. I've been excited by it, so thank you.
Oh, it's my pleasure.