David Fryberger

Notice: We are in the process of migrating Oral History Interview metadata to this new version of our website.

During this migration, the following fields associated with interviews may be incomplete: Institutions, Additional Persons, and Subjects. Our Browse Subjects feature is also affected by this migration.

We encourage researchers to utilize the full-text search on this page to navigate our oral histories or to use our catalog to locate oral history interviews by keyword.

Please contact [email protected] with any feedback.

ORAL HISTORIES
Interviewed by
David Zierler
Interview date
Location
video conference
Usage Information and Disclaimer
Disclaimer text

This transcript may not be quoted, reproduced or redistributed in whole or in part by any means except with the written permission of the American Institute of Physics.

This transcript is based on a tape-recorded interview deposited at the Center for History of Physics of the American Institute of Physics. The AIP's interviews have generally been transcribed from tape, edited by the interviewer for clarity, and then further edited by the interviewee. If this interview is important to you, you should consult earlier versions of the transcript or listen to the original tape. For many interviews, the AIP retains substantial files with further information about the interviewee and the interview itself. Please contact us for information about accessing these materials.

Please bear in mind that: 1) This material is a transcript of the spoken word rather than a literary product; 2) An interview must be read with the awareness that different people's memories about an event will often differ, and that memories can change with time for many reasons including subsequent experiences, interactions with others, and one's feelings about an event. Disclaimer: This transcript was scanned from a typescript, introducing occasional spelling errors. The original typescript is available.

Preferred citation

In footnotes or endnotes please cite AIP interviews like this:

Interview of David Fryberger by David Zierler on May 13, 2021,
Niels Bohr Library & Archives, American Institute of Physics,
College Park, MD USA,
www.aip.org/history-programs/niels-bohr-library/oral-histories/47474

For multiple citations, "AIP" is the preferred abbreviation for the location.

Abstract

In this interview, David Fryberger discusses: childhood in Minnesota and Florida; Korean War service; studying experimental particle physics at the University of Chicago under Val Telegdi; muon X-ray experiments and the first experimental use of spark chambers; Bruce Winstein’s early work; using the “coffin magnet” in spectrometer experimentation; using feedback loops to automate muon decay experiment; early operations at SLAC; invention and patent of the touch panel; work with Arthur Rogers on baryon-antibaryon model for meson mass and structure; Mark I detector build with Burton Richter; role in the “November Revolution”; Ting and Richter’s discovery of the meson nuclear particle; innovations for the Mark I storage ring; Buford Price’s work and its influence on storage ring experiments and the SLAC Positron Electron Project (PEP); magnetic monopole search at PEP; collaboration with Price on PEP-2; technical details of his vorton theory papers and the vorton particle model; discussion of Blas Cabrera’s work with magnetic monopoles; ball lightning (BL) experiment at Languir Laboratory in NM; Erling Strand and Fryberger’s Hessdalen paper; problems with BL computer simulations; experiences working under Panofsky and Richter; cavity light (CL) phenomena and collaborations with JLAB on CL experimentation; small luminous object (MLO) behavior; physics beyond the Standard Model; collaborations with Michael Sullivan; magneticon phenomena; capability of finding and observing vortons; coining the term “vorton”; incompatibility of the vorton model with string theory; differences between experimental and theoretical physicists; Neil Weiner and inelastic dark matter scattering (iDM); current work on iDM model viability and magnetic hydrogen as a working hypothesis for a dark matter candidate; and thinking outside the box of the physics establishment. Toward the end of the interview, Fryberger reflects on his work at SLAC in a staff support position and his hopes to mount an experimental CL program post-COVID.

Transcript

[Editor's note. Some of the scientific notation in this version of this interview's transcript is incorrectly formatted. For correct formatting, please consult the pdf version in our digital collections.] https://repository.aip.org/islandora/object/nbla%253A317941

David Zierler:

OK, this is David Zierler, oral historian for the American Institute of Physics. It is May 13th, 2021. I’m delighted to be with Dr. David Fryberger. David, it’s great to see you. Thank you so much for joining me today.

David Fryberger:

Well, the pleasure’s mine too. I feel honored to be included in such an important program.

Zierler:

[laugh] That’s great. David, to start, would you please tell me your most recent title and institutional affiliation?

Fryberger:

Well, I’m a retired staff member emeritus. Unlike faculty, who routinely retire as professor of physics emeritus, say, due to Stanford protocols governing staff retirees emeritus, my title is rather cumbersome. I’m Deputy Head, SLAC Experimental Facilities Department [EFD], Emeritus. This is a Stanford designation rather than a SLAC designation. There are also SLAC staff emeriti, but that designation was invented after I retired, and as I understand it, unlike the status of Stanford staff emeritus, it’s not necessarily permanent. Since my emeritus title seemed kind of cumbersome, I asked personnel if I could just call myself an emeritus physicist, and they said yes. So, let’s say that I’m an emeritus physicist retired from SLAC.

Zierler:

David, when did you become emeritus?

Fryberger:

When I retired in October of 1998. I received a letter from Gerhard Casper, the Stanford President at that time, informing me of my retirement status and its actual title.

Zierler:

And in what ways over the past two-plus decades have you remained connected to SLAC?

Fryberger:

I was employed by SLAC from when I arrived in October ’67 until I retired in 1998, over three decades. In discussing my upcoming retirement with Burt [Richter, the SLAC Director at that time], I emphasized that I didn't want a cash buyout, but rather I wanted to be able to continue my research interests in physics. Consequently, Burt arranged that I would retire as a staff emeritus and would have a bit of ad hoc financial support for a few years into my retirement. So, as an emeritus retiree, I have an office at SLAC, a computer account, and a free A parking sticker on the Stanford campus. Being emeritus is key. To facilitate the financial support part, I transferred from EFD in the Research Division to AARD [Advanced Accelerator Research Department] headed up by Robert Siemann in the Technical Division. So [laugh], I guess that’s the main part. As a result of my retirement arrangements, I was actually able to do some very interesting work on Cavity Lights [CL] at JLAB [Jefferson Lab in Newport News VA], which I hope we can talk about later. So, I’m still connected to SLAC. However, because of COVID, I haven’t been in to my office for over a year—well, actually, I have been in to my office twice to retrieve some books and papers for work that I’m doing. But those visits had to be supervised by SLAC Security.

In the same vein, because of COVID, they won’t let most employees go onto the SLAC site. Actually, people who are important in operations and some COVID studies can go on site. Yes, they’re actually doing some COVID studies at SLAC. But generally, staff are working from home. And I’m also working from home. We’re hoping that things will ease up soon.

Zierler:

Yeah. Well, David, let’s take it all the way back to the beginning. Let’s go back to Minnesota, and start first with your parents. Tell me a little bit about them and where they’re from.

Fryberger:

Well, my father’s family, the Fryberger family, is from Duluth MN. My grandfather, moved from Red Wing to Duluth 1889. In 1896, he founded a law firm there called Fryberger, Fulton & Boyle. I think it’s fair to say that it was a pretty good law firm. They did law work for various Minnesota mining companies and also for the Soo Line Railroad. But I don’t know any details of the work they did. My grandfather had a nice corner office on the 7th floor of the Lonsdale building, which is on Superior Street in downtown Duluth. When he took me to his office, I used to enjoy the view of Lake Superior and the famous Duluth aerial bridge. I could also watch the trains in the switchyard on the other side of Michigan Street. My father, William B. Fryberger, was born in Duluth in 1904. He was the third of six children, two girls and four boys. They were Helen, Virginia, William, Herschel, Robert, and Philip, in that order.

Zierler:

And what about your mom? Where’s your mom from?

Fryberger:

My mother, Kathleen, was born in West Virginia. Her mother, Maybel O’Brien, wanted to have her first child on her family’s farm in Valley Bend where she had grown up. My grandfather O’Brien was a mid-level manager in the Gorham Silver Company. I think he was also employed by some other silver companies. At that time, my mother’s parents were living in the East—if my memory serves me, first at Sag Harbor on Long Island, and then later in Wallingford, Connecticut, a small town about 15 miles north of New Haven. My parents were married in Wallingford in the summer of 1927. They had met in college. My mother went to Smith, and my father went to Dartmouth, as did two of his brothers.

Zierler:

David, how did your parents fare during the Great Depression?

Fryberger:

Well, that’s a good question. After they were married, my parents were persuaded to live in Duluth where my father, having also a law degree, joined the Fryberger, Fulton & Boyle law firm. To save money, they lived in my grandparents’ house, which was a quite substantial three-story brick house, built in 1908. It had a lot of bedrooms and was large enough for my grandparents to have comfortably brought up their 6 children there. And then, after the stock market crash, because of the depression, as far as I know, my parents continued living in the Fryberger house. Being a member of the Fryberger law firm, my father’s employment was stable. They weren’t rich, but they didn’t have to worry much about money. I was born in Duluth in 1931. And so that’s how they started. And, we can go from there.

Zierler:

David, what were your earliest memories that the United States was involved in World War II?

Fryberger:

Well, in this country, World War II started in 1941 with the bombing of Pearl Harbor, when I was 10. But let me go back a little bit—my parents got divorced shortly after I was born—maybe when I was a year or two old, so I have no memory of that, or what went wrong with the marriage. But my parents remained on friendly terms. After their divorce, I left Duluth with my mother. We lived in various places, but I don’t remember where, I was too young. But my best guess is that they were mostly in the south—North Carolina, South Carolina, or perhaps Georgia. I wish I knew, and I’d like [laugh] to be able to ask her. But being a single mother, with a small boy, I think she eventually felt some financial strain. So, when I was about 3 or 4, she arranged that I return to Duluth to live with my father and grandparents in the big house in Duluth. I went to kindergarten, 1st and 2nd grade at the Washburn Elementary School in Duluth. I walked to school, which took about 15 or 20 minutes. I lived there in Duluth until the summer of 1938 when I joined my mother again in a special sharing arrangement. I was to be in Daytona Beach, Florida, where she was then living with her mother (now also divorced) for the school year, and then in Duluth for the summers. By then, my grandfather and Philip had died and my father’s other siblings had married and were living elsewhere in Duluth. So, in the summers, I was living in my grandmother’s big house in Duluth with only my father and my paternal grandmother. It was a good arrangement, and I liked it. And, so, I was in Daytona Beach in December of 1941 when Pearl Harbor was bombed. I remember that day, it was a Sunday. I was in sixth grade, and I and some neighborhood kids—we all went out and started to dig a bomb shelter.

Zierler:

[laugh]

Fryberger:

It was easy to dig in Florida because it’s just sand. Florida is like a big sand dune.

Zierler:

Yeah.

Fryberger:

So, we dug a hole in the ground. Our bomb shelter was in a kind of open space in the woods. The hole was about five or six feet deep, and maybe ten feet by ten feet, in area, something like that. We were going to cover it over to make it a proper bomb shelter, but we never got to the roof part. Now, our house, which was about a block from the beach, was a model house for a real estate development that had collapsed in the late ’20s, so there were not many houses in our part of town—many paved streets and sidewalks, but few houses. That part of town wasn’t built up until after the war. I wonder what the property owner made of that hole when he went to build a house there.

During the war, there were a lot of soldiers coming through on the trains, and some were stationed at the Daytona airport. And a number of things were rationed, like gasoline, sugar, butter, and meat. But I would say that this was more of an inconvenience than a hardship. And there were scrap drives collecting—well, copper, iron, you know, anything that people no longer needed but which might be useful for the war effort. I remember very large piles of scrap in our school yard. And groups of school kids helped load this scrap onto freight trains.

But I should go on to say that in the early part of the war, cargo ships were being torpedoed just off our coast, and quite a bit of debris was washing up on the beach. Sometimes, on certain days, we were prohibited from going to the beach. It was rumored that some bodies were washing ashore, but neither I nor any of my friends ever saw any. And a lot of oil drifted ashore, seriously contaminating the beach. Those globs of oil were there in the sand for months—maybe even a year or so. I remember I found a can of Camel cigarettes which I gave to my mother—she smoked, which I’m sure contributed to her early demise in 1965 due to throat cancer.

The beach at Daytona was firm enough that automobiles, even heavy trucks, could drive on it, which is quite rare. Usually beach sand is too soft. When the tide, which is only about 3 feet in the vertical, goes out, at low tide you get a width of several hundred feet of exposed moist sand, which is as solid as most paved roadways. In fact, in the early 20th century, the beach at Daytona was the site for some early speed races, and some world record speeds were set there. Later, the speed races were done elsewhere, for example, on the salt flats in Utah. And I remember watching some stock car races there on the beach in the ’30s. Those early stock car races formed the beginning of NASCAR, and, of course, the Daytona 500. Bill France, Sr., who is known for founding NASCAR, had a son who was a few classes behind me at Seabreeze High School. Also, Fireball Roberts, of racing fame, went to Seabreeze High, I think a class ahead of me.

Zierler:

David, when did you start to get interested in science? Was it early on?

Fryberger:

Well, sort of mid-on. I would say not in grade school. I wasn’t one of those geniuses who were doing integral equations at age 8 or 10. But by junior high, my interest was certainly developing. I used to take things apart to see how they worked. By seventh grade, I was reading books on science. I also subscribed to Popular Science and Popular Mechanics. When I was 12, my mother gave me a book on electricity,[1] and my grandfather O’Brien a book on radios,[2] both of which I still have. I also had a copy of the handbook of the IRE [Institute of Radio Engineers], which had good discussions of the theory and practical details on how radio circuitry worked. The IRE handbook also had a very useful set of tube base diagrams, which was essential to one who was putting electronic circuitry together. My interest in science, especially electricity, continued to develop.

About that time, or maybe a little earlier, I put together a crystal set. I don’t know if you know much about crystal sets, but the most important component is a small crystal of natural material, often galena, and you have a cat whisker, which you use to make electrical contact with the surface of the crystal. And if you get the contact just right, it makes a rectifier, which, as a demodulator, enables the detection of the AM modulation on an RF signal. So, if you have an antenna, a coil for tuning, a crystal, and some earphones, you have a rudimentary radio. The radio band for AM was ~550 to 1,600 kilohertz, and I used to listen to AM stations with it. FM had not been commercially developed yet, and anyway, crystal sets can’t detect FM.

And what’s remarkable about a crystal set is that the very small fraction of the energy radiated from the transmitter, picked up by your local antenna, even hundreds of miles away from the transmitter, is able to furnish an audible sound in the earphones of the crystal set. No external power is required—there is no amplification by vacuum tubes or semiconductors, as in modern radio receivers. I remember receiving a signal from ZNS, which they pronounced Zed N S, in the Bahamas, about three hundred miles from Daytona Beach. Some nights, when AM transmission distances are greater due to ionospheric bounce, I was able to pick up a clear channel 50 kW station—from Nashville, I think. When you stop to think about it, it really is quite remarkable that useable power can be transmitted through the air over such long distances.

My antenna was strung outside from our roof—we lived in a two-story house—to a bamboo pole in a tree about 100 feet from the house. I wired the house end of the antenna through a window into my bedroom, which also served as my lab. I was lucky that lightning, which was quite common in Florida, never hit my antenna. It might have set our house on fire, or electrocuted me, or both. Once, during a local thunderstorm, before hearing the thunder, I did hear the snap of a spark in my bedroom. The spark discharge would indicate that the lightning bolt had charged the antenna to thousands of volts. Kids are not very good at evaluating safety hazards.

And then one summer, a friend of my father’s gave me an old radio, made in the early ’30s. It had five stages of TRF [tuned radio frequency] before the demodulator. That old radio used one of the earliest vacuum tube triodes that were manufactured for receivers and amplifiers. I think 01, or 01A, was its designation. It was a four-pronged triode with three internal elements. Two prongs were for the filament, which operated at about 6 volts, a prong for the grid, and a prong for the plate. For a number of years, I used these tubes, and parts like coils, transformers, and capacitors from that radio as components for a variety of circuits that I put together.

Later, I built more elaborate circuits, for example, oscillators, audio amplifiers, and even a superheterodyne radio. A superheterodyne radio converts the modulated RF signal to a modulated intermediate frequency [IF] signal for better amplification and tuning. For these projects, I bought components from Allied Radio, a mail order store in Chicago, sort of like Radio Shack was a few years ago. They put out a catalogue, and you could order resistors, capacitors, tubes, transformers, wire, solder—you know, anything that you might need to build electronic circuitry. Now, during the war, some items were restricted, like large electrolytic capacitors, 16 microfarads, say, which were used in high voltage dc power supplies. For those items, you had to certify that what you were ordering was for replacement in existing radios.

And, so, when I was building a power supply, I needed some large electrolytic capacitors—16 microfarads were typical for this application. These capacitors were used, along with chokes, to filter the ac hum out of the dc output voltage. A friend of mine who had a similar interest in radios and electronics—who passed away a few years ago—would just certify to Allied Radio that “Yes, these items are for repair.” But that seemed wrong to me. So instead, I went down to a local radio repair shop, and asked the owner if I could buy some large electrolytic capacitors. He asked me what I wanted to do with them, so I explained about power supplies, and how, using a rectifier tube, you can convert alternating current to direct current, and that I needed the capacitors to filter out the 60 hertz harmonics so you’d get a good clean dc voltage. This voltage was used in a radio or amplifier. It was called a B+ voltage, generally several hundred volts, and it was used to power the plate circuitry. The A supply furnished the filament voltage—usually around 5 volts—and the C supply, usually negative but not too large, was used for the grid bias. So, A, B, and C were the batteries, or power supplies, that you needed in those early radios or amplifiers. Anyway, he was sufficiently impressed that I knew what I wanted to do, that he sold me some electrolytic capacitors. He told my mother to help me continue my interest in science.

Zierler:

David, tell me about the decision to attend Phillips Exeter for the last part of your high school.

Fryberger:

Well, I was going to Seabreeze High School in Daytona Beach. Though I had been an ordinary student in 2nd and 3rd grades, from year to year, my grades kept improving—possibly because I studied and always did my homework. A lot of my classmates didn’t care about studying or homework, but I saw school as an important chance to learn, and I thought learning was good. So, by the time I got to high school, I was getting mostly As in my courses. Our class had about 60 students in it, and I would guess that I was among the top 2 or 3.

At Seabreeze I took chemistry, biology, history, shop, the usual math and English courses, Spanish, etc. Since we were in Florida, our Spanish teacher arranged for our Spanish class to go to Havana, where we actually got to practice our Spanish. It was fun. That was before Castro. But in 1946, when I was finishing my sophomore year at Seabreeze High, my father told me that he wanted me to get a better education—to go to a better school than a public school in Florida. But I told him that Seabreeze was one of the best schools in Florida. He asked me how I knew that, and I said that our high school principal [a man named R. J. Longstreet] had told us [laugh].

My father had wanted me to go to Exeter, in New Hampshire. He had gone to Dartmouth, and I guess he realized that prep schools were better than the public schools, even Seabreeze High School. I took the entrance exam, and was admitted into what they call the upper middle class. At Exeter, they have junior, lower middle, upper middle, and senior classes, the four years. So, I was admitted to the upper middle class, which would be the natural sequence from 10th grade.

But I liked Seabreeze, and didn’t want to leave my friends there. So, I spent the next year, the 11th grade, at Seabreeze. Then, my father came back again and said, “Look, you’ve got to go to a better school.” Realizing that my father was really serious, I took the entrance exam again, and was admitted again. But I was only admitted to the upper middle, not the senior class. Anyway, I went to Exeter, entering the class of ’49, which meant that I lost a year. The reason I couldn’t get into the senior class at Exeter was was that they wouldn’t give me credit for my 11th grade English course at Seabreeze. I was getting As in English at Seabreeze, so I didn’t see why I shouldn’t be getting credit from Exeter. But at Exeter, when I got my first assignment in English back—you know, they ask you to write short essays for homework—[laugh], I got a D [laugh]. I thought: “I don’t get Ds, I get As.” Well, I got As in English at Seabreeze but not at Exeter. Eventually got I my English grades up to a B- or B, but I got As in almost everything else.

So, that’s why I ended up repeating a year in high school. But in my two years at Exeter, I took an advanced course in physics and Math 5, which was equivalent to a college calculus course. When I got to Yale, I took second-year calculus. But it was half a year before I got into new material. The Exeter Math 5 course covered the first year and a half of calculus at Yale. And then, for some reason my requirement to take Yale’s Freshman English was waived, so, in the end, I hadn’t lost a year in English after all. Thus, the net effect of my entering the upper middle class at Exeter in the fall of 1947 was that I had gained an extra year of college education. Exeter was perhaps the apex of my educational achievements. At the end of senior year, Exeter offers prize exams, which seniors can sign up for. I signed up for four, winning prizes in Radio, Physics 2, Spanish 3, and Math 5. I haven’t gotten any prizes since.

Zierler:

David, was the draft for the Korean War something that you had to contend with between high school and Yale?

Fryberger:

Let’s see. I graduated from Exeter in the class of ’49 and I went to Yale that fall, entering the class of ’53. When did the Korean War start, in the early ’50s?

Zierler:

’49.

Fryberger:

And what was its duration?

Zierler:

’49 to ’53.

Fryberger:

Yeah, and, so, at that time there was a Korean War draft, for which I had registered and had a draft card. But as long as you were going to school full time you qualified for a school deferment, which I had. And being admitted to college tided one over the summer.

However, looking ahead a bit, upon graduation from college, you had to worry about the draft. After graduation, you could get technical jobs in engineering. If you were employed by a company that declared that you’re essential, you were given a deferment, and were temporarily excused from the draft. Having a BE degree, that path was also available to me. But I didn’t want to take it because then you were vulnerable to being drafted until age 35. And changing jobs could offer a point of vulnerability to the draft. My worry was that just as I would be settling in with a family and good career, someone would come along and say, “Well [laugh], your time is now.” And, so, when I graduated from college, I joined the Navy to avoid being drafted into the Army. In the Navy I would serve three plus years as an officer, as opposed to serving for two years as an enlisted man in the Army. For me, in spite of the extra year of military service, being an officer in the Navy was still much more appealing to me than being an enlisted man in the Army. Also, my father approved of my choice. In the ’40s he had volunteered for the Navy, entering as a full lieutenant. His attendance at VMI [Virginia Military Institute] before he was married evidently qualified him to enter the Navy as a commissioned officer. He had served in the Pacific as an Air Intelligence Officer, and actually saw some action. And like my father, I also thought that citizens had an obligation to perform public service of some sort. I applaud those who go into the Peace Corps.

Zierler:

Now, David, at Yale, how much physics did you have to take for the electrical engineering degree?

Fryberger:

Well, I don’t remember exactly what the requirements were. As I recall, I was required to have a year of physics and a year of chemistry. To satisfy my physics requirement, I took a course in classical physics. As it turned out, because of my interest in physics, I generally already knew everything that was being presented. And, so, I cut a lot of my classes. It wasn’t long before I got called into the registrar’s office. He said [laugh], “You’re cutting classes. You can’t do that.” So I tried to reason with him. I said, “Look, I’m getting a 95 in the course. Why do I need to go to class?” He said, “No, it’s our rule. Students have to attend all classes, and that’s that.” I don’t recall that he threatened me, but after that I went to all my classes. But I wish he’d said to me, “Alright, wise guy, [laugh] in that case, I think you should take a more serious physics course, like quantum mechanics.” But he didn’t, and it was almost a decade before I took my first quantum mechanics course—at IIT in Chicago.

Zierler:

What did you do during your time in the Navy? Where did you serve?

Fryberger:

Well, after graduation from Yale, I reported, in September of 1953, for duty at Officers Candidate School [OCS], which was located at Newport RI. A civilian enters OCS as an enlisted man with the rank and pay of a Seaman Apprentice. And in four months, if you graduate, which most do, you emerge as an Ensign in the US Naval Reserve [USNR].

Before I give you a summary of my Navy career, I should say that when I first applied to the Navy, in the late spring of 1953, they gave me a test, much like an IQ test, and interviewed me to see where I might fit in. As a result, they put me on a track for Air Intelligence [AI], with the designator 1355. Designators determine the kinds of assignments you get in the course of your navy career path. For example, they have designators for line officers, supply officers, medical officers, etc. Designators also indicate whether you are in the regular Navy [USN] or in the USNR. So, I was told that I would be in the Air Force of the USNR—probably attached to a shore based airborne early warning [AEW] unit, perhaps in Hawaii. These AEW units flew large 4-engine aircraft which were equipped with powerful radars, having detection ranges of up to several hundred miles. I thought, “That kind of duty could afford a nice place to get married and settle down.” But I deferred such thinking until such an assignment actually happened. In keeping with my AI designator, after graduating from OCS in January of 1954, I was assigned to a Navy AI school in Jacksonville, Florida that gave specialized training for AI officers. Now Jacksonville was about 90 miles north of Daytona Beach, which was great for me because I could visit my mother weekends.

After the AI school, I was assigned to an electronics school in Glenview IL, which was just north of Chicago. Radars are certainly advanced electronics devices. I should remark here that in the fall of my senior year, I had met Betsy Geraghty, who had just entered Bryn Mawr, and whose family lived in Chicago. And we had dated in college. So, it was a fortuitous circumstance that the electronics school was in Glenview, and in the summer.

After completion of the AI and electronics schools, I was ordered to report to an AEW/ASW [antisubmarine warfare] training school at the Navy base at North Island, which is located on the San Diego harbor. Both pilots and air crew, like me, attended that school. Part of that training consisted of flights of an hour or two out over the Pacific. On one of those flights, the training officer, thinking I was a pilot, offered me a chance to fly the plane. But not being a pilot, I declined. I kind of wish I had accepted.

After attending these schools, I was deemed qualified to be an airborne navigator/air controller, and was ready to be assigned to duty with the naval operating forces. But inconsistent with the earlier assurances I had received about a shore based AEW unit, I was ordered to report to VC-11, at the Naval Air Station [NAS] on North Island. VC-11 was a rather large group, that gave specialized AEW/ASW training to pilots, air crew, and enlisted men for deployment aboard aircraft carriers. These carriers were stationed at bases on the west coast and made periodic cruises to the Pacific far east. There were also east coast carriers, making Atlantic cruises, but they were in a different command structure. Typically, after this specialized AEW/ASW training, so-called splinter groups, comprised of 10 to 15 officers and enlisted men, were drawn from the main VC-11 squadron and assigned to each carrier as a part of a much larger carrier air group [CAG]. The main mission of the CAG was to furnish airborne protection for a carrier task force, which consisted of the carrier, destroyers, and other support vessels. Most of the aircraft in the CAG were jet fighters. The VC-11 planes, I think each splinter group had 3 or 4 of them aboard each carrier, were single engine prop planes with the designation AD [an Attack plane manufactured by Douglas Aircraft].

These ADs had been modified to carry a long-range radar, which transmitted a stream of high-power microwave pulses—generated by a magnetron—from an antenna, hanging inside a radome below the plane’s fuselage. After the pulse transmission, the antenna then fed a microwave receiver to detect the power reflected back from various targets—planes or ships, say. The antenna rotated continuously scanning the full 360 degrees in azimuth. The time delay between transmission and reception gave the target range, and the antenna azimuth angle gave the target direction. This information was displayed on what was called a plan position indicator [PPI], a round display with the transmitter at the center and a simulated beam, or radial line, radiating out from the center, and continuously sweeping through 360 degrees in sync with the antenna. In this way, the detected targets were displayed as blips on the PPI scope at the appropriate distance and direction. As targets moved, they could be tracked from sweep to sweep—that is, blip by blip. In this way, the target velocity and direction could be calculated.

Since our ADs operated off of aircraft carriers, they had tailhooks to enable carrier landings. Right after touchdown, tailhooks catch one of a number of cross deck pendants, heavy steel cables on the flight deck, which, using an energy absorbing hydraulic system, rapidly pulled the plane to a complete stop—as I recall, within about 100 feet or so.

And so, after about six months with the VC-11 unit at NAS, North Island, I embarked on a Pacific cruise in a VC-11 splinter group aboard the USS Boxer. We left North Island, as I recall, in June of ’55, and then arrived back there in December of ’55. While most of the cruise time was operations at sea, the carriers would also make shore stops, for maintenance or getting supplies—for example, at Hawaii, the Philippines, Guam, Japan, and Hong Kong. After another period with VC-11 back at North Island, I embarked on a second cruise of about the same duration, on the USS Lexington, arriving back at North Island in Dec of ’56. From my training at North Island and these two cruises, I have a couple hundred carrier landings, a dozen or so at night. As you may have deduced, the main purpose of a peace time Navy is training.

My active duty was to be up in August of ’57, so with very little chance of a further deployment, I finally saw the opportunity to get married and live ashore. So, Betsy and I were married in January of ’57, and lived in Coronado, which is a lovely small town adjacent to the North Island NAS. A lot of retired navy officers live there. In the course of the three years plus in the Navy, I’d gotten a promotion, so I left active duty that August as a lieutenant junior grade.

Zierler:

David, was your intention to enter into the business world, and that’s why you pursued a degree at the Harvard Business School?

Fryberger:

That’s right. But first, I should tell you an anecdote related to my choice of engineering as my major at Yale. At Yale one typically declared one’s major at the end of freshman year, and I was interested in both physics and engineering—electrical engineering [EE], in particular. And my question was: “Do I want to go into physics or into electrical engineering?” And, so, I went out to the Yale physics department, and met a graduate student there whose PhD thesis topic was cosmic ray physics. For his thesis experiment he’d sent his apparatus up in a balloon. To take data on cosmic rays, for some experiments it is better to be high in the atmosphere. It was probably an emulsion experiment but I don’t remember its details.

Anyway, when I got out there and started talking to him, it turned out that his balloon had burst, and his entire experimental apparatus had crashed to the ground. He’d lost all his apparatus and about two years of work. And, so, [laugh] I’m thinking to myself, “I don’t think I want to do this.” And, so, that was an element in my deciding to go into engineering instead of physics.

It’s also true that employment prospects were better if you were pursuing a career in engineering rather than physics. But part of my reasoning that I went to Harvard Business School [HBS] after my stint in the Navy was that if you had both an engineering degree and a business degree, then you had particularly good career prospects. Also, I should remark that one of my Yale roommates, who had been drafted into the Army, had, after his Army service, already gone to HBS and highly recommended it. So, that’s why I went to HBS. By the way, we’re still friends.

Zierler:

Was that a good experience at Harvard?

Fryberger:

Well, I won’t say it was bad, but I soon realized that business wasn’t something I wanted as a career. As you probably know, it’s common for young people starting out their careers to want to something for the betterment of society. And from my time at the HBS, the path to such a career was not obvious. As an example, there was a course called Business Responsibilities in American Society [BRAS], and it seemed to me that the main thrust of BRAS was teaching how to avoid antitrust litigation. That is, rather than being socially responsible, it was teaching how to get out of being socially responsible. I was certainly disappointed by that, and after a few months, I was ready to try a different path.

But Betsy was studying art history to get a master of arts degree [MA] from Harvard, and it didn’t seem right that I should cause her to pull out before finishing. So, in the spring of ’58, when she got her MA degree, and after I had successfully completed my first year of the two-year master of business administration [MBA] program, I wrote a letter to the business school saying I’d like to take a little time off. In response, they said, “Fine, come back and resume your studies any time in the next five years.” But, as things turned out, I let that offer lapse.

Zierler:

What did you want to do next, David?

Fryberger:

Actually, jobs were a bit scarce at that time. We were in the midst of what was called the Eisenhower recession. But, through some friends of my father, I found out that there was a job opening at a mining supply firm called Lakeshore, in Iron Mountain MI. I applied and was hired. So, off we went to the upper peninsula of Michigan. My job was selling electrical hardware, like high voltage insulators, street lights, and transformers. It was a reasonable match to my education and experience up to that time, but I soon realized that this job also didn’t really mesh with my interests. At that point I wanted to continue my studies in EE, and we both wanted to live in a more urban setting. So, we began thinking about the possibilities.

Now, as I said, Betsy’s family lived in Chicago, and we used to visit them in Chicago on the weekends. It was about a six-hour drive from Iron Mountain. So, for our next move, Chicago seemed like a good place to look. As far as my employment and my studies went, there seemed to be two salient possibilities. One was to join the staff of Armour Research Foundation [ARF] as a full-time employee, and study part time at the Illinois Institute of Technology [IIT], a university at 35th and State street on the south side of Chicago. ARF was literally next door to IIT, and encouraged its employees to take courses there. It would pay for the courses and give you time off, if the courses were given during the day. In fact, ARF’s name was later changed to IITRI [IIT Research Institute]. The other was to go full time to the University of Chicago [U of C] and hope for some financial support—they wouldn’t consider part time students. Another problem with the U of C option was that they didn’t have an EE department; I would need to switch to physics.

So, I opted to get a full-time job at ARF and pursue at IIT my studies in EE on a part time basis. We moved in May of ’59 to the south side of Chicago, finding an apartment on 57th street, quite near the main campus of the U of C. I got a masters in EE from IIT in early ’62.

By that point, I definitely wanted to continue my academic studies beyond the masters. One possibility was getting a PhD in EE at IIT. Another EE possibility was at Northwestern University, located in Evanston just north of Chicago. However, a colleague of mine, who had gotten a PhD from IIT and who had recently joined the faculty of Northwestern, told me that Northwestern’s EE department was not much better than IIT’s. So, I was motivated to consider more seriously my options in physics. As I mentioned earlier, one of the courses that I had taken at IIT was the quantum mechanics course that I never took at Yale. Also, during this period I had read the pair of physics books by D’Abro,[3, 4] and as a result, my interest in fundamental physics had grown considerably. For example, one of the things I wanted to know was, “What is the electron made of?” And by studying physics, I could get answers to my questions about the electron. Regrettably, it turns out that even to this day there is no accepted theory about the structure of the electron. It’s considered an elementary particle with no structure. Perhaps I can make some comments on this subject later in this interview.

Now, the U of C, conveniently enough, was just down the street from our south side apartment. And since we enjoyed living in Chicago, it was fairly easy to convince myself to go to the U. of C to study physics. We wouldn’t have to move. And after all, physics wasn’t all that different from EE—actually, not quite true, I found.

Zierler:

Plus, Chicago’s a pretty good program in physics.

Fryberger:

Oh, yes, excellent—both theoretical and experimental. And though Fermi had passed away before I arrived, he was much revered, and his spirit lived on. It was and it still is an excellent physics department.

So, I ended up in the physics department at the U of C, and we didn’t have to move. Betsy had gotten a good job in the Print Department of the Art Institute of Chicago. She could commute from the south side to downtown Chicago on the Illinois Central, and I could walk to classes. It was a pretty good situation for both of us. In addition—and this is an aside—after I was at the U of C for a couple of years and had passed the qualifying exam for the PhD program, the Hertz Foundation, which had recently been formed, was looking to start offering fellowships, and they had chosen five schools for starters. The U of C was one, MIT was one, Caltech was one, I think Stanford was one, and I don’t remember the fifth. Anyway, I was put forth as the U of C candidate, was interviewed, and was fortunate enough to be accepted. But an important aspect of the fellowship is that, along with books and tuition, it had a $5,000 tax-free stipend. As a result, I was actually financially better off at the U of C with my $5,000 tax-free fellowship than I was working full time as an engineer at IIT. By now, such fellowships are no longer tax-free.

Let’s see. I told you I got a masters at IIT, didn’t I?

Zierler:

Yeah.

Fryberger:

In the course of my studies at the U of C, I got a masters in physics—in ’64.

Zierler:

David, when did you first meet Val Telegdi?

Fryberger:

Well, after I had my masters in physics, it was time for me to decide on a specific study program at the U of C. And while I was very interested in theoretical physics, I felt that my mathematical skills were probably not quite good enough for that route. So, I opted to seek a PhD in experimental particle physics. Wisely so, I would say in retrospect. And I felt the best professor in experimental particle physics at the U of C was Val Telegdi. So, I asked him if I could join his group as a doctoral student, and he took me on.

Zierler:

What was Val working on at the point you connected with him?

Fryberger:

Well, he’d been doing various muon experiments—muon X-rays and muon decay. Richard Ehrlich, Dick Powers, and Bruce Sherwood were already Telegdi’s students, and with my arrival we were four doctoral students, and as I recall no post docs—at least at that time. Ehrlich and Powers were working on muon X-rays. Negative muons were stopped in various targets in which then supplanted the atomic electrons. In this process, the atomic muonic transitions emitted X-rays, the energy of which gave interesting information about the capturing nucleus. And Bruce was intending to study the muon decay spectrum, using positive muons. In physics language one writes: μ+ -> e+ + ve + vμ. In particular, he would measure the electron energy, which in muon decay ran from zero to one half of the energy of the muon mass.

Zierler:

What did you want to do at that point? What were you interested in?

Fryberger:

Well, I was just interested in studying physics and getting a degree. Oh, and by the way, after I got there, my question about what electrons are made of—what I came to realize was that nobody at the U of C could tell me either. [laugh]

Zierler:

David, what was Telegdi like as a person?

Fryberger:

Well, he was very smart, and he knew a lot of physics. But sometimes he could be very aggressive, and, as a consequence, he made a lot of enemies. In fact [laugh], I used to joke that he was so smart and aggressive that he could win arguments even when he was wrong. I’m not talking about arguments about fundamental physics—he was solid on topics of fundamental physics—but rather more mundane things like, why isn’t this piece of apparatus working? He would put forth a hypothesis and then ask, “What else could it be?” And if you couldn’t think quickly enough, the argument was over, and he had won.

Zierler:

[laugh]

Fryberger:

But he always treated me well. I think he respected my technical abilities. He used to call me commander. As a bit of background here, after I arrived in Chicago, I began to fulfill my Navy obligation for inactive duty training—as a weekend warrior, several two-week training cruises, and various correspondence courses. I even translated articles from French to English—my two years of French at Exeter and one at Yale gave me enough proficiency. And if I really got stuck on an arcane idiom, I had friends whose native language was French. Usually, after leaving active duty, most people in the reserve just blew off that post active duty training—without consequence. Anyway, those who participated got points toward satisfactory years of service. And if you qualified for 20 years of satisfactory service, you would retire with a small pension, which I now receive based upon my 21 years of satisfactory service. By the time I entered the U of C, as a result of this inactive duty training, I had been promoted to Lieutenant Commander. And, as I said, it amused him to call me commander as an ad hoc title.

Zierler:

David, would you say that your background in electrical engineering was really useful?

Fryberger:

Absolutely.

Zierler:

In what ways?

Fryberger:

Well, I can name a couple of things where—because of my electrical engineering background and my years as a do-it-yourself designer/builder of various electronic circuits—I was able to make some important contributions to our wire spark chamber spectrometer experiments.

But, first, as background, I’d like to mention that Mike Neumann and others at the Institute for Computer Research at the U of C had been developing wire spark chambers with a ferrite core readout.[5] These spark chambers consisted of two parallel wire planes of closely spaced wires—the spacing between wires was on the order of a millimeter, and that between the wire planes was about a quarter of an inch—to determine the location of charged particle tracks that passed through them. The sparks were in an especially pure neon-helium gas mixture. Spark chambers were used in pairs to determine both the X and the Y location of each track. To oversimplify, when a coincidence in the scintillator counters detected a particle, this caused a fast risetime pulse of high voltage—5000 Volts, say—to energize the spark chambers, and sparks would jump between the wire planes at the track location. The spark currents in the struck wires would then pass to ground through the write wire threading the ferrite cores—mounted directly on the spark chamber, one core on each wire—and flip the magnetic field circulating in that core. Such cores were readily available because they used in the computer memories of that time. Subsequent to the spark event, the individual cores would be read out, just as in computer memories—using read and sense wires, also threading each core. The readout process, which identified the relevant wires by the re-flipping of the magnetic circulation in the struck cores—the re-flipping gave a signal in each appropriate core sense wire, which went into the data stream— and, in the end, this readout process would leave all cores uniformly polarized, awaiting the next event. Jurgen Bounin, a very fine electrical engineer from Switzerland, developed our core readout system, which was initially used to feed our spectrometer data into a tape recorder and later to the MANIAC III computer system developed by the Institute for Computer Research at the U of C. I believe that the Telegdi group was the first one to actually use such spark chambers for physics experiments. A number of years later, when Betsy and I were visiting Washington, we saw on display at the Smithsonian my thesis spark chamber assembly. Telegdi had never mentioned to me that he had donated it to the Smithsonian. Nor were any student names on the accompanying label, which made the “first experimental use” assertion.

Now, as I said, the testing of the wire spark chambers and readout system was done with several spark chambers, lined up a few feet apart, in a particle beam from the U of C cyclotron, that had done duty producing particle beams for numerous experiments over many years. With this test layout, we were able to test all aspects of the operation of the chambers, the readout system, etc. But the design of our muon decay experiments, to measure the electron energy spectrum from such decays, required a magnetic field. And Telegdi, who had a good eye for repurposing unused equipment realized that a magnet—the so-called coffin magnet—used by Professor S. C. Wright for earlier experiments, would be the perfect magnet for our muon decay spectrometer. The coffin magnet enclosed an interior volume of about 2x2x8ft3 —which is why it was called the coffin magnet— of very uniform vertical magnetic field. Our entire wire spark chamber spectrometer assembly would simply be placed inside be placed inside the coffin magnet volume enabling the experimenter to directly measure the curvature of the decay electron’s track. In the uniform field, the electron track would be a perfect helix about a vertical axis, projecting as a segment of a circle onto the horizontal plane, with a uniform vertical rise along the helical trajectory. The radius of the circle and the amount of vertical rise were both measured quite accurately using the wire spark chambers—four for the radius and three for the rise. The geometry of the chambers was such that the radius of track helix could be measured in the range of about 30 to 40 inches. Which part of the electron energy spectrum this would correspond to would depend upon the field setting of the magnet. Bruce’s experiment measured the ρ parameter of the isotropic decay spectrum and mine measured the δ parameter of the asymmetric spectrum—which was possible because the muon, after stopping in the target, but before it decayed, precessed in the magnetic field of the coffin magnet. The values of ρ and δ gave important information about the weak interaction, which governed the decay process. If you’re interested in the details, you can consult Bruce’s thesis[6] or my thesis.[7] They also contain numerous relevant references for the theory, as well as other related experiments.

Anyway, a serious problem arose because we couldn’t place the cores on the chambers inside the coffin magnet. The spectrometer magnetic field would saturate the cores, preventing them from functioning properly. Consequently, we located them in a rack outside the magnet with about 4 meters of wire between them and the spark chambers. On our very first test run with the spectrometer inside the magnet but using the remote core readout, we found that the spark chambers appeared to be working just fine but the setting of the cores and their readout was very problematical. Sometimes the cores would set, sometimes not, meaning that the accuracy of our track measurements was severely compromised. It was essential that we get the remote readout system working with the spark chambers inside the magnet as well as they did when located outside the magnet.

Now I remember being at the lab one night trying to figure out what was wrong, and I realized that the distributed inductance of the several meters of wire between the spark chambers inside the magnet and the cores in the rack outside the magnet, in conjunction with the various capacitances in the circuitry, made an LC resonant loop, which stored energy from the spark discharge. And then, as the oscillating energy in this LC circuit died down after the spark, the core could end up being set in either direction, with roughly equal probability. Using a lower voltage to generate the spark wouldn’t help because the circuit would continue to oscillate after the spark well below the threshold required to set the cores in the first place.

With this understanding in hand, the solution was straightforward—it’s easy when you know how. Put a capacitor on each wire to collect the charge associated with the spark, and put a series resistor in each wire to damp out the problematic oscillations. This also had the advantage of slowing the effective spark discharge time from 10 or 20 ns to one on the order of a μs, which was suitably matched to the core function. The rating of the capacitor wouldn’t have to be five thousand volts—the spark driving voltage; it would just have to be large enough, such that when it captured all of the charge associated with the spark on that particular wire, its voltage would remain below its rating [E = Q/C]. As I recall, we used ceramic capacitors of 0.01 μF with about a 1 kV rating. And a standard 40 Ohm carbon resistor of low wattage supplied the requisite damping—giving a time constant of RC = 0.4 μs. I was somewhat daunted by proposing a solution requiring the mounting of 256 capacitors and resistors on each chamber, but Telegdi realized that it made sense and said, “Good, let’s do it.” And when this fix was implemented, our magnetic wire spark chamber spectrometer with remote ferrite core readout worked just fine.

Another instance in which my EE background was useful relates to the control of the magnetic field in the coffin magnet. For the purposes of our experiment, an accuracy of 10-4 for the magnitude of the magnetic field was more than adequate for our experiment. And setting the current in the coffin magnet to that accuracy was fairly easy, but keeping it there was a problem. The problem was that the coffin magnet current was furnished by a large motor-generator [MG] set, which had a tendency to drift as the main power line voltage drifted, and possibly for other reasons. So, attention had to be paid on an ongoing basis—either by the experimenters, i. e., us, or the cyclotron operator—to keep the field at a steady at the desired value. We monitored the field using a commercial NMR, which furnished—with orders of magnitude to spare—the requisite accuracy. But for us the need for continuing monitoring was a distraction. When our experiment ran, it ran around the clock, and the graduate students took shifts to cover the 24 hours per day. And as I said, there were only four of us. Actually, fewer than four. Bruce, his thesis complete, was planning to leave for Cal Tech. And Richard and Dick needed to spend time on their own thesis experiments. It was clear that automation would be a great benefit.

Today, of course, such a problem would easily be solved by computer control—which wasn’t available in the mid ’60s. However, since I had taken a course in servo-mechanisms at Yale, I knew the basics of feedback loops—and Nyquist plots to evaluate stability of those loops. I also knew about establishing a reference, and then comparing the quantity to be controlled to this reference, developing a small error voltage ε , which quantified the sign and the magnitude of the error. ε = 0 meant that the controlled quantity was in proper alignment with the reference. Now our water-based NMR had a variable frequency, which would relate to the measured field. The ratio is 42.577 MHz/T for protons. In essence, the frequency was the servo reference, proportional to the desired B field. In our NMR, a frequency appropriate to the desired field was generated locally by an oscillator, which was fed into a coil wrapped around a water sample. The axis of this coil was perpendicular to the B field being measured. The proton resonance was detected by a sensing coil, perpendicular to the oscillator coil as well as to the B field. The oscillator coil and the sensing coil were thus coupled through the precessing protons in the water sample. But rather than varying the frequency, our NMR had a small coil with its axis parallel to the B field being measured, scanning the B field at the NMR sensor—with a 60 Hz sine wave—above and then below the desired resonance setting. When the local B field and the fixed reference frequency were in resonance, the signal in the sensing coil dipped due to energy transfer to the precessing protons—twice per cycle. When these dips were centered in the display, then the reference frequency and the B field matched. It is important to observe that at the point of a proper match—i. e., ε = 0—the signal dip was symmetric about the central matching point, while the 60 Hz scanning signal was antisymmetric about that same point. Thus, we see that the product of these two waveforms, when averaged, could furnish the requisite ε function.

Now, I happened to know about a vacuum tube used for modulation and demodulation, among other things. In a circuit of suitable design, this tube could, in effect, multiply two analogue waveforms. As I recall, it was called a beam deflection tube, but I don’t remember the tube designation. Anyway, it resembled combination a standard tetrode and an old cathode ray tube [CRT] used early TV sets or oscilloscopes. The tetrode part had a cathode, a control grid, and a screen grid, which then directed the electron beam between two deflection electrodes to a pair of plates. The electrostatic transverse beam deflection geometry operated in the same way as that in a CRT. In this waveform multiplication application, one waveform—the signal with the resonant dip—was fed to the control grid and the other—the 60 Hz sweep waveform—was fed to the deflecting electrodes, which directed the electron beam back and forth between the two plates. You can see that the voltage between these two plates would contain a signal component proportional to the product of the two input waveforms—and it would be null when ε = 0. Then, I amplified this product waveform using a 6L6—a so-called beam power tube—used to feed the speakers in high-power audio amplifiers. Thus, I had an effective dc amplifier, which derived and amplified the original ε signal. With an appropriate voltage offset, the plate circuit of the 6L6 would furnish a suitable MG control voltage. This 6L6 control voltage replaced the old open-loop MG control voltage, which had been derived from a potentiometer in the cyclotron control room. That is, we had an effective servo control of the B field. And I got it all to work. This was the final step in automating our mu decay experiment. The Ne-He gas flow to the chambers and the core readout system already operated reliably without attention. I remember I’d leave the lab at night, say, at 9 or 10 o’clock. The chambers were sparking away, and the NMR was centered. Nobody stayed there at night except the cyclotron operator, who was in his own control room. He could call if anything went wrong, but we never had any such calls. And then coming back to the lab in the morning, the first thing I’d look at was the screen of our NMR magnetometer, and it was always centered. And you could hear the sparks going. Everything was fine. It had run all night without the need for experimenters to man the night shifts. We could run 24 hrs/day. So, I think those two things, and possibly others, impressed Telegdi. I believe he felt that I was a significant asset to his experimental program.

Zierler:

David, beyond Telegdi, do you remember who else was on your thesis committee?

Fryberger:

No, I don’t. But I’ve got an amusing story about my thesis defense. Since I was in the Navy, Telegdi asked me to wear my formal whites—navy whites—to my thesis defense. And I said, “No, no, I’m not going to do that.” [laugh] And he said, “Well, it’s a formal occasion, and for such an occasion you should wear your white dress uniform. Of course, I still had my navy whites—white hat, shirt, coat, trousers, and shoes, complete with my Lieutenant Commander shoulder boards. And then he said, “Well, if you wear your whites, I’ll wear my tuxedo.” [laugh] At that point, I realized that he really wanted me to do this, and I gave in.

I remember the two of us wandering around the Fermi Institute that morning, he in his tux, and me in my formal navy white dress uniform. And so I defended my thesis to my committee in my navy whites. Of course, I knew the experimental part of the thesis extremely well, and it was a good experiment. And I also knew enough about the theory of muon decay that didn’t expect any difficulty there either. So, I wasn’t worried about passing my thesis defense. And looking back on it, I think that maybe making it a formal occasion was a good idea. Val said that’s what they did in Europe where he came from. But back to your question, I don’t remember who else was on the committee. But no one remarked about Val’s tux or my whites.

Zierler:

When you were all set at Chicago, what opportunities were available to you next?

Fryberger:

Well, when I got my degree, Telegdi wanted me to stay on as a postdoc. In fact, he offered me a Fermi Fellowship to stay on. But I thought it would be better if I moved away from Telegdi’s sphere. He was very good at keeping people busy, and I thought he was exploiting me. He really didn’t care how many long hours I put in. In his view, it was all for a good cause. And, so, I thought it would be a better career path for me to leave the U of C. I should also say here that there was another reason I didn’t really explore the postdoc opportunity at the U of C. They had a policy that U of C students with PhDs, who become U of C postdocs, are not then directly promoted to U of C faculty positions. After the postdoc position, you’re expected to go away and do something important, and, if you did do something important, then you could come back to become a U of C faculty member. So, I didn’t have a clear faculty path at the University of Chicago. And I didn’t want to join a faculty somewhere else because of my time in the Navy, my time at the business school, and my time at Lakeshore, I was six years older than my contemporaries who were then getting PhDs. As a married man, having to worry about not getting tenure and having to restart a career was too big a concern. So, that’s why I decided not to go into teaching.

As for the other opportunities, the laboratory in western Illinois—which is now called Fermilab—was just getting started, but had not yet been fully approved. SLAC had gotten started. This was in ’67, and SLAC already had its first beam. And another possibility was Livermore Lab. They had a pretty good physics group there too. So, I explored those three possibilities—all at national laboratories. I should add here that Betsy had a good job offer in hand from R. E. Lewis, a highly regarded print and drawing dealer in San Francisco.

Zierler:

Now, was there a job you applied to at SLAC or did they recruit you?

Fryberger:

I should say here that Telegdi thought that it was incumbent upon thesis advisors to see to it that their PhD students ended up with a good position. And in keeping with his desire that his students be well placed, Telegdi contacted the SLAC Director W. K. H. Panofsky—who was affectionately known by all as Pief. I assume that he said to Pief, “I’ve got a good PhD graduate, have you any job openings,” or something to that effect. And, so, I was invited to SLAC to give a talk about my thesis work, and also to talk to a number of senior members of the faculty and management. While I was on the west coast, I also talked to people at Livermore about openings in the physics group there. And Telegdi had talked to Clem Heusch at UC Santa Cruz, who expressed interest to my joining his group there. In the end, of the several offers I received, I decided that the SLAC offer was the best one for me.

Zierler:

Which was what? What group did you join.

Fryberger:

The job that I got was as an Engineering Physicist. It was a staff position, not a faculty slot. SLAC had faculty in experimental as well as theoretical physics. Telegdi tried to talk me out of taking the job. He said, “Why don’t you stay here at Chicago, and they’ll make you an even better offer next year.” Well, I don’t know if that was true about SLAC, but that was his argument. I should remark here that the staff engineering physicist offer that I got had an important “sweetener” because Pief felt that someone with a PhD joining SLAC in a support role should also have a significant physics motivation. As a consequence, in my offer letter it stated that while I was being hired in as a staff member, I could spend as much as half my time on physics, which I think he envisioned would be joining one of the physics experiments as a collaborator. And I, as well as a number of other staff members, did just that.

Anyway, my SLAC job was in the Research Area Department [RAD] of the Technical Division [TD]. RAD was headed by Ed Seppi, who came from Caltech, and the TD was headed by Dick Neal. The Accelerator Department [AD], which was also under the jurisdiction of the TD, furnished an electron beam of energies up to 20 GeV, or more, at 360 pulses/s. The pulse durations could be varied, but they were nominally 1.6 μs. The full SLAC beam could carry almost a MW of power, which entailed significant radiation safety questions.

The responsibilities of RAD included setting up the experiments, supplying them with utilities [e. g., electricity and water], and supporting them during their operation. RAD then took the experiments down when they were completed. The set-up role included building beam lines, magnets, collimators, slits, and beam dumps, as well as designing and stacking shielding. Since the beam carried so much power, generally the collimators, slits, and beam dumps had to be water cooled. Dieter Walz was our leading collimator, slit, and dump engineer.

RAD’s operations group was responsible to take the accelerator beam—which was prepared by the AD’s operations group— and to steer it through the beam switch yard [BSY] on a pulse by pulse basis to the various experiments, as required. Typically, we ran two, sometimes three, or even four experiments simultaneously. It was good to have more than one experiment running, because if one experiment was having problems and needed some down time, its beam pulse allocation could be steered to another experiment.

One of my first responsibilities was to be made a scheduling assistant to Gerry Fischer, who was the Program Coordinator for the lab. He was also Secretary of the Program Advisory Committee [PAC]. Gerry worked directly for Pief in these roles. With respect to Gerry, as Program Coordinator, it was thought that someone in RAD, who would be familiar with the progress in the setting up and operation of the various experiments, would be in a good position to help him with his coordination responsibilities—hence, my scheduling position as Gerry’s assistant.

Zierler:

David, when you first got there, did SLAC feel like it was mostly built up at that point, or it was still being built up?

Fryberger:

It was still being built up. But some beams had been running for about a year. Martin Perl had just finished an experiment looking for a heavy lepton; Richard Taylor was doing electron scattering experiments in End Station A [ESA]; and Burt Richter and Dave Ritson had also done a number of experiments in ESA. ESA had three magnetic spectrometers—the 1.6, the 8, and the 20 GeV—which were used to measure the angle and momentum of particles scattered from a target, usually hydrogen, sitting on what was called the pivot. The three ESA spectrometers all looked directly at the target on the pivot. The beam entered ESA from the BSY, passed through the target, and then exited ESA into a berm behind ESA, where its energy was absorbed in a large water-cooled beam dump, called Beam Dump East.

So, when I arrived, SLAC already had an experimental program going. SLAC was operating on a 14-day schedule. The beam would start turning on on a Monday, run through the week, the weekend, and then through the next week, shutting off the following Friday—12 days of beam operation. The subsequent weekend the beam was off and the time would be used for maintenance and repair. The next 14-day schedule would start again on Monday, or the following Monday, if more maintenance and repair work was required.

As it turned out, so much time at the beginning of each 12-day beam operations run was spent getting the accelerator beam turned on and stabilized, that the amount of time left for the experimental program was significantly curtailed. It was soon decided that to have fewer, but much longer, runs—several months each—of continuous 24-hour a day operation would be much more efficient. As RAD’s scheduling officer, I was a part of the decision to make this rather significant change to SLAC’s accelerator scheduling philosophy.

Zierler:

David, what were you—what would you say were some of the major research questions that prompted this project?

Fryberger:

You mean SLAC itself?

Zierler:

No, what you were specifically working on.

Fryberger:

Well, at the beginning, beyond the scheduling responsibilities that I just described, I was given the responsibility for developing the BSY computer control system—it was an early SDS computer— for as much of RAD’s equipment—magnets, vacuum pumps, safety interlocks, and the like—as was feasible. Sam Howry had written the original SDS operating program, which used what he called a chore wheel to allocate the computer’s CPU cycles to the various computer subroutines. I think that it’s fair to say that Sam’s chore wheel was a precursor to today’s computer operating systems.

Zierler:

David, tell me about the patent developed for the touch panel, and some of the long reach of that technology, even beyond its design for SLAC.

Fryberger:

Well, the motivation for that development comes with an interesting story. As I said before, the AD’s operations group, headed by Vernon Price, prepared the beam, and then RAD’s operations group, headed by John Harris, steered the beam through the BSY on a pulse-by-pulse basis to the experiments, as appropriate. The AD’s control room, or central control room [CCR], was located near the end of the accelerator, about a quarter of a mile from the BSY. RAD’s control room was in the Data Assembly Building [DAB], sitting more or less on top of the BSY. RAD’s view—that is, Ed’s, John’s, and my view—was that this arrangement was very inefficient; the two operations groups were not coordinating very well, leading to significant inefficiencies. And, so, we made a proposal to Panofsky that these two operations groups should be combined into one operations group, and that RAD would run it. The implementation of our proposal meant moving all of the accelerator controls from the CCR down to the DAB, which would be a monumental task—entailing a large budget and probably a significant accelerator down time— unless we were smart about it.

So, I conceived of the idea of an image displayed by a computer-controlled CRT. Into this image, we would incorporate simulated buttons, and the buttons would represent whatever we wished to control using our computer. To implement operator control of the computer, and hence all of our equipment, the original idea was to propagate acoustic beams of surface waves, also known as Rayleigh waves, directly on the face of the CRT glass—I was already familiar with Rayleigh waves, because of some of my earlier work at IITRI, another case of my engineering background being useful. Specifically, we would have a set of n vertical X beams and another set of m horizontal Y beams, and the simulated buttons would be located at the n x m intersection points of the X and Y beams. The operator, then, by touching the CRT display at one of the intersection points would absorb energy from one X beam and one Y beam, telling the computer which button was being “pressed.” Our first prototype had n = 10 and m = 13. Images of all of the requisite panels, along with all relevant ancillary information, would, of course, be stored in the computer memory. The acoustic frequency I chose was 8.5 MHz, which had a wavelength short enough so the propagating beams could remain physically distinct. Our initial public report of this work, giving important technical details, was a talk in 1971.[8]

But we quickly realized that it would be much more practical to use a flat glass plate the same size as the CRT face but mounted directly in front of the CRT. So, that was the motivation for the invention of the touch panel, and our prototype actually worked, except for one minor, and unanticipated, difficulty. One operator loved to play the guitar, and he had big calluses on his fingers [laugh]. And when he touched the button, it didn’t sufficiently absorb the Rayleigh waves, and so he had difficulty operating this kind of touch panel. As a supplement, Paul Sandland, who had worked with me on the original Rayleigh wave device, developed a second approach. In place of the acoustic beams, he had wires, X wires and Y wires, that were routed across the glass, with about a millimeter of vertical separation. And when the operator touched an X-Y intersection point on this panel, he’d push the two wires into contact, completing an X-Y circuit. Since both types of these touch panels were used for our computer control room for some years, I assume that some solution was found for the “guitar fingers” problem.

By the way, it wasn’t that I lost interest in this question, as it turned out, the computer control of the beams was moved out of our jurisdiction. Recall that we in RAD had proposed a consolidation of the two beam control operations groups into RAD. However, Dick Neal, the head of the TD, had a different view. In effect he said, “Yes, the groups should be unified, but I should run the new group.” To make a long story short, Pief, the ultimate decider, bought Dick’s view. Consequently, Dick took over our operations group rather than our taking over his operations group. But there was more to it than that. The name of our department was changed from RAD to EFD [Experimental Facilities Department] and we were transferred from the TD to the Research Division [RD], headed by Joe Ballam. Our organizational responsibilities were diminished, but, as before, we were still responsible for the same care and feeding of the experiments in the Research Yard. I’ve come to realize that if we had gotten what we had asked for, I probably would have become much more deeply involved in laboratory management responsibilities and would have had much less time to pursue my physics interests—a much different career path. Beware what you ask for.

Another consequence of the lab’s decision to reorganize RAD into EFD, reducing the size of our department, was that Ed Seppi decided to leave SLAC. Being a senior engineering physicist in RAD, I was approached by upper management about heading up the new department, EFD. I responded that I would do it out of loyalty to SLAC, but I actually wanted to have time to finish up some physics ideas that I was working on. So, if they could find someone else, that would be fine with me. I believe that they thought that I wasn’t enthusiastic enough, and a physicist I had recently hired, Lew Keller, was tapped for the job. Though I was in EFD for many years, Lew, as department head, was content to have me serve as a staff engineering physicist in EFD, and also continue as Gerry Fischer’s—and Pief’s—scheduling assistant.

The physics that I had in mind was that I was working with Buck Rogers [actually, Arthur Rogers], another experimental physicist at SLAC, on a baryon-antibaryon [BB] model for meson mass and structure that he had developed. I was particularly interested in this model because he had postulated a magnetic binding force in the BB structure of the hadronic mesons. It was this work with Buck that initiated my work in particle structure and, in particular, gave me the notion that I might actually be able to contribute something in this area. It was also fortuitous that I then found myself with some time to actually pursue this interest. And I felt that it was consistent with my offer letter.

But back to your question about the touch panel patent. SLAC applied for a patent, on our behalf. Now patent lawyers are quite smart, looking at things from a much broader viewpoint. They specified that the patent was for the idea of computer control using a touch panel, and not the technology of its implementation. Thus, our patent would cover both of our touch panel implementations, as well as any that might be developed in the future. Though there were similar ideas at that time, I believe that ours was the first one actually reduced to practice. In those days, the rights to a patent for any invention at a national lab had to be turned over to the sponsoring agency, which for us was the Atomic Energy Commission [AEC]. Now Ralph Johnson, who did the computer work for me, and I were joint holders of the patent. I had included his name on the patent, though I was later informed that the tradition for patent holders was not as inclusive as on scientific physics papers. Anyway, that really doesn’t matter, because we were both obligated to turn over our rights to the patent to the AEC for $1 and other considerations. And, so, while I was hoping to get a check from the government for $1, which I would hang up in my bathroom as a nice memento, it didn’t happen that way.

Zierler:

[laugh]

Fryberger:

When the man from the AEC came down from Berkeley to effect this transfer, he said, “Well, I don’t have checks for you but I picked up a couple dollars from the coffee fund. [laugh] Here they are.” And he gave me a dollar and he gave Ralph a dollar. That was kind of insulting. Our patent rights were gone, and I didn’t even get my check to put up in the bathroom. As it has it has turned out, touch panels are now ubiquitous—cash registers, iPhones, iPads, to name a few. And if I had been able to monetize my patent at the rate of even a penny for every touch panel that was built, [laugh] I’d be a wealthy man. But it’s also true that after 50 years at SLAC, my TIAA/CREF retirement with grandfathered health insurance from Stanford is quite enough. It seems to me that for some people, the concept of enough has gotten lost.

Zierler:

[laugh] David, how did you get involved with Mark I at SPEAR?

Fryberger:

To answer that question I should go back to a few years earlier. Before I actually came to SLAC, Telegdi had said to me, “When you go to SLAC, you should go join a man that’s a better experimenter than I am” —Yes, he actually said that—and he mentioned Mel Schwartz, who was the leader of Group G at SLAC. At that time, SLAC had seven experimental groups—from A to G. Mel and Stan Wojcicki, also a member of Group G, were on the faculty at Stanford, as was Dave Ritson, who was head of Group F. Mel was always expansive, and when I approached him, he enthusiastically invited me to join Group G. And my joining a SLAC experimental group was consistent with my job offer letter from SLAC. So I was able to devote a considerable amount of time to their experimental program. They were doing K decay experiments at the time, and my name is on a number of their papers.

After Group G finished its series of kaon decay experiments, they decided to look for rare decays, which to me was much less interesting. When I told Telegdi of my quandary, he said, “Why don’t you join Richter’s group?” Burt, who was head of Group C, was in the throes of designing and building the SPEAR storage ring and the Mark I detector. The Mark I was called a 4π detector, where 4π signifies the total solid angle about the interaction point in the center of the detector. That is, the Mark I was designed to detect all of the particles emanating from the e+e- collisions in the detector, except for neutrinos, which had an exceedingly small cross section, and particles exiting into the two small pieces of solid angle along the storage ring beam line. It consisted of a solenoidal magnet, whose axis was along the beam line, surrounded by layers of particle detection and tracking components. The 4π detector was Richter’s conception, and it is now the standard design for general-purpose detectors for storage rings. So, it was a significant innovation. Burt was glad to welcome me, for a couple of reasons, I think: someone else would be paying my salary, and I had considerable experience with spark chambers, which he envisioned as the main particle tracking device for the detector. And, so, that’s how I got involved with Richter. And, as with the Schwartz group, such an activity was consistent with my job offer letter.

Zierler:

David, what was Richter’s leadership style like?

Fryberger:

Richter’s leadership style was pretty good, just not as good as Pief’s. Burt was very able in physics and technology, smart, and he’d run things with a firm hand. But, unlike many talented physicists, he was not abrasive. I enjoyed working with him. He would listen to the arguments of others who disagreed with him. He wasn’t warm but he was fair, and a very good physicist.

Zierler:

In what ways? In what ways did he have good sensibilities or intuitions as a physicist?

Fryberger:

Many ways. For example, he fully participated in the concepts as well as the technical details while we were building the Mark I detector. We were in frequent substantive conversations with him. Later, when the SLC [Stanford Linear Collider] was being built, he’d go to the 8 o’clock meetings, and I'm told—the 8 o’clock meeting was generally too early for me— he would fully engage in the technical conversations and the decisions that took place.

Yes, his intuitions as a physicist served us all well. But he also relied on relevant data. In this regard, one thing that I remember was that he wanted to copy the Schwartz diode detector wire spark chamber readout system, which Dan Porat in the Schwartz group had perfected when I was working with them. When Burt consulted me, as a local expert on that topic, I disagreed, saying, “No, I think we should use magnetostrictive readout.” I felt the magnetostriction readout system was considerably simpler and more reliable than the diode technique. With respect to simplicity, one used one magnetostrictive wire to read out an entire plane of a spark chamber, whereas with the diode system one needed a diode on each wire, just like we needed an RC circuit on each wire of the U of C coffin magnet spark chambers. The question was, how well would magnetostriction work inside the Mark I solenoid? For proper operation, the magnetostrictive wire needed a small longitudinal magnetic bias field, and this bias field was much smaller than the contemplated B field for the Mark I. The idea was to see if, by proper orientation of the magnetostriction wire, one could obtain a suitable bias field from the Mark I field itself. To test this idea, Richter sent Harvey Lynch, who had prior experience with magnetostrictive readout, and me to Berkeley with a small spark chamber, which we placed in the field of a suitable dipole magnet. To make a long story short, it worked, and the Mark I detector used magnetostrictive readout for its spark chambers.

Zierler:

David, from your vantage point, of course, it’s so hard to see these things as they’re unfolding in real time. But at what point during what became known as the November Revolution did things start to really feel revolutionary?

Fryberger:

I have my own recollections, but I’ve also read some of the relevant AIP interview summaries. Vera Lüth gives a very good summary of that period, and Schwitters does too. But I think one aspect that wasn’t mentioned, but which deserves note, was that during our prior data run in the summer of ’74, there was a labor strike at SLAC, and the Mark I physicists were tasked with setting of the storage ring beam energy. Generally, this wouldn’t make any difference because the precise beam energy wasn’t a crucial aspect for the data obtained. But for the psi, this assumption wasn’t true. As has been stated, we missed the peak on our scan because the peak was much narrower than anything anyone had expected, and it fell between the selected data point energies for our scan. But on the tail of the psi, the cross section drops rapidly from the peak. And at our energy setting just above the peak, we had several data sets that were statistically incompatible. And while we didn’t know it at the time, the reason was that this data point was on the very steep part of the tail of the psi peak, where small errors in the setting of the beam energy translated into large changes in data rates. I think one can make the case that if the trained machine operators had set the beam energies for that earlier run, events might have evolved very differently.

Anyway, we had the conundrum of the statistically incompatible data sets, and there were many serious discussions about what to do about it. Our fall run was coming and a group of us were arguing that we should first look into this statistical incompatibility rather than proceed to an exploration of higher center of mass energies. I remember meeting with Schwitters, Breidenbach, Perl, Lynch, Rudy Larsen, and others. We had formed a sort of cabal in Rudy’s office. Initially, Burt said, “Well, that’s probably a background, and pursuing it will waste a lot of beam time. Forget it.” He really wanted to get to our exploration of higher energies.

But we finally persuaded Burt that we should scan more closely above 3 GeV. He gave us a weekend for our search. On Saturday, when we undertook this new scan, we had some early indications of anomalies. But by Sunday, it was clear that we had a major discovery on our hands. We observed a rate increase of more than two orders of magnitude. And it showed a definite peak, at about 3105 GeV center of mass energy—our original number was slightly high due to an incorrect calibration of the SPEAR beam energy. There was no way that this peak could be dismissed as just some kind of background. So, I would say that the November Revolution began on Sunday 10 November 1974. After the so-called November revolution, Schwitters, who ended up as being a spokesperson for the Mark I experiment, began his psi talk saying we had a collaboration of three groups at SLAC and two—or was it three—groups from Berkeley. Now, the three groups at SLAC were Richter’s Group C, Perl’s Group E, and me [laugh]. I was in EFD at the time, and I was the only collaborator in the third group. Though Roy never explicitly identified the third group, I enjoyed being the entirety of one of the three SLAC groups that was collaborating on the Mark I experiment.

Zierler:

Where were you for Sam Ting’s involvement in all of this?

Fryberger:

Let me give you a bit of background as part of my answer to this question. As I mentioned before, I was also the scheduling officer for SLAC, working in conjunction with Gerry Fisher, who was the SLAC Program Coordinator, as well as the Secretary of the PAC. In these roles, we both effective worked for Pief. The PAC members were chosen by Pief from the physics community at large, some from SLAC, but most from other institutions—even foreign institutions. They had fixed terms, three years as I recall, so that we would have members retiring each year and the same number of new members joining each year. When we had enough proposals for new experiments at SLAC, the experimental spokesmen would be asked to make presentations to the PAC, which meant that the PAC met at SLAC several times a year. At these meetings, after hearing the presentations, and after receiving the advice of the members of the PAC, Pief would approve or not approve the specific experiments. Or sometimes the experimenters would be asked to give further information on some areas of concern to be discussed at a later PAC meeting. And Gerry, as Secretary, would both schedule these meetings as well as write up the minutes, which included Pief’s decisions. It was Pief’s practice to make his decisions at the end of the meeting instead of adjourning to render decisions at some later date, as is often done by other decision makers.

Now, it was quite a coincidence that we had a PAC meeting scheduled at SLAC to begin on Monday the 11th of November, and also that Sam Ting was a member of our PAC. And, in this regard, I have an interesting story that I haven’t seen mentioned elsewhere. It is relevant as to when Ting first found out that he was about to be scooped by SLAC’s most recent data. Ted Kycia from Brookhaven National Laboratory [BNL], was also a member of our PAC. And Sam and Ted flew on the same plane together on Sunday coming out our PAC meeting. As the story goes as they were seated together on the plane, and Ted asked Sam if he had heard about SLAC’s recent SPEAR data. And, so, Sam, after a short discussion, realized that the e+e- mass peak that he’d been sitting on for maybe a year was the same physics. Now Sam’s experimental result was not so clear cut as ours. And Sam, being a careful experimenter, had wanted to make sure that his peak wasn’t some kind of background before he made any claims. Maybe it was a computer-induced peak? But, after this airborne conversation with Ted, his concerns quickly evaporated, and that very night he was announcing to his many friends around the world that he had found an e+e- mass peak at about 3 GeV/c2.

Sam’s initial take on the situation was that we had already known about his result, and that’s why we went there to look. But being a member of the Mark I team, I know that’s not true. Even Richter had discouraged us from following up on our earlier ambiguous data. So, Ting and Richter discovered the same particle. Sam called it the “J” because—at least as I understand it—the Chinese character that represents “Ting” looks like a “J”. And we called it the Ψ.

Zierler:

[laugh]

Fryberger:

So now it’s the J/Ψ. Both teams got to name it, but the discoveries were separate, independent, and simultaneous. Although it’s also true that he got his data before we did, only when Sam was on the way to SLAC was he fully convinced that he had actually made a discovery. So, I think that it’s fair to say that it’s a joint discovery. And I believe that Ting now acknowledges that these discoveries were separate and independent.

Zierler:

What did you do after the November Revolution, and, more generally, how was SLAC changed as a result of the November Revolution?

Fryberger:

Well, that’s a pretty big question. I’m sure I can’t answer all of it. But SLAC became the center of the high-energy physics universe. People that I’d known in grad school were calling me from all over the world. “Hey, what’s going on? What’s happening today?” Everybody wanted to know what we were doing.

It was obviously very important physics and new physics. There were a number of physical hypotheses about the nature of the J/Ψ. The charm-anticharm bound state, or charmonium, was just one of a number of possible explanations—color was another. But charmonium was the one that won out in the end. That energy region of the e+e- cross section turned out to be a gold mine. It wasn’t long before the tau was discovered.

The level of excitement was up, and there was a sense of electricity in the air. But as far as I know, it didn’t change anything organizationally. But one unfortunate side effect was that it made major new physics discoveries, versus workman-like studies, into a major motivation to do physics. Physicists often seemed more interested in getting the Nobel prize rather than doing physics research.

Before leaving this topic, one thing I should mention is that the Mark I was a storage ring experiment. This is relevant to SLAC’s operation because a storage ring just needed a short period for filling and then it would take data until it needed a refill—a qualitatively different requirement from our typical electron beam experiments. As a scheduler, I realized that if we ran the machine at 180, instead of 360 pps, we could essentially double the amount of calendar time that we delivered beams to the experimental program. That is, this change would mean that we would spend the same amount of money for electricity—which was a major budget expense for the lab—but beam-on time would last, say, eight months instead of four months. The upshot of such a change was that all of the other experiments would get the same number of pulses, but that SPEAR would get twice the calendar time of running—that is, twice the amount of data taking time. I got a lot of resistance from Dick Neal about changing. “Oh, we can’t change pulse rate. Too much will go wrong. You know, everything’s in balance now.” But Gerry and Pief saw the benefit from my proposed change, and the machine repetition rate was reduced to 180 pps. Also, it turned out that that none of Dick’s fears were realized. In short, we doubled the beam time for SPEAR at no extra budget costs to the lab.

So, that’s one change that happened. While, technically speaking, this change happened before the November Revolution, one might even make the case that it made it possible for SLAC to be such an important player in the November Revolution—an interesting factoid. I think the accelerator runs at 60 pps now—even lower than the 360 that it started with. And it works just fine.

Zierler:

When did you get involved with the search for magnetic monopoles at PEP?

Fryberger:

To give some background for the answer to this question, I should first mention that Buford Price at UC Berkeley had pioneered the development of a technique using etchable plastics sheets, such as CR-39 and Lexan, to detect the passage of particles of large charge, either electric or magnetic. After exposure to particle flux, particle detection was initiated by etching the sheets with a sodium hydroxide solution, which, due to radiation damage, preferentially etched away the plastic material along the particle trajectory. By controlling the amount of etching time, holes would be etched through the plastic at the locations of the particle passage. And the size of these holes would be a function of the charge on the particle and the etching time. And it was easy to locate these holes for detailed examination. An etched plastic sheet and a sheet of blueprint paper would be placed over an ammonia bath. The ammonia vapor passing through the holes in the plastic would turn the blueprint paper blue at the location of the holes. An entire sheet of plastic could then be read out in a glance, unlike photographic emulsions for which one had to laboriously search for tracks using a microscope. A major advantage for this technique was that singly charged particles would not create enough damage along their track to yield holes in the plastic. In 1975, Price and collaborators had announced a Dirac monopole candidate found in data taken in a balloon experiment. This result became quite controversial, and was later retracted when Louis Alvarez pointed out that such a track could have resulted from a suitably high Z nucleus suffering spallation interactions as it passed through the apparatus. Buford was quite taken aback by the severity of Alvarez’s criticisms.

In thinking about Price’s plastic etching experiments for monopole searches, I realized that this technique would be a perfect application for a storage ring experiment like at SLAC’s newly proposed Positron Electron Project [PEP], which was scheduled to start operation in the near future—it started operation in 1980. Pair produced Dirac monopoles would make detectable tracks in the plastic, but the copious background of singly charged particles would not. It would be possible to leave such plastics exposed at the interaction point [IP] for months at a time and then read out the total integrated result only once. In order to estimate the monopole pair production rate, the control room data would enable a sufficiently accurate integrated luminosity. That is, the experiment would not need to construct a luminosity monitor—another simplification, in cost as well as in operation.

And, so, I talked to Buford. I suggested he come to SLAC with a Lexan detector, and that we collaborate on a monopole search at PEP. The proposed experiment would be performed at Interaction Region 10 [IR10], which was too small for any of the larger contemplated PEP experiments, but quite adequate for a small Lexan monopole detector. My contribution would be that I would oversee the design and construction of the beam pipe configuration at the IP and also a mechanism to withdraw the detector package away from the IP during filling, and then reinsert it when the fill was complete and the data taking would commence.

Zierler:

[laugh] David, this was with a partnership from Berkeley Lab, or that was a separate project?

Fryberger:

No, it was just Buford and his Berkeley group and me. After overcoming an initial reluctance to getting back in the monopole search business, Buford agreed to collaborate with me. Our proposal was PEP-2, the second one submitted to SLAC. We were co-spokesmen. Two papers were published on the data from PEP-2. And I like to claim that’s the most cost-effective physics program that SLAC has done because, while the other experiments produced many more papers, they cost millions of dollars. On the other hand, our experiment came to less than $10,000. His student, Kay Kinoshita, has gone on to perform numerous similar monopole searches at other storage rings.

Zierler:

David, moving into the 1980s, what were some of the major projects you were involved with then?

Fryberger:

Well, after Mark I, my next involvement with a major lab project was the inclusion of my name on the SLD proposal, mainly because I had earlier been a collaborator on the Mark I experiment. Proponents often list physicists on proposals in the hopes that later they will actually participate. Of course, the SLD at SLAC was more than a decade later than the Mark I; it was a proposal to take data on the Stanford Linear Collider [SLC], the construction of which was completed in 1987. And the SLD was to be installed in the SLC Collider Hall after the Mark II experiment completed its run in that location. Anyway, to answer your question, as the SLD was just beginning, I participated in some of the early physics discussions. But Helen Quinn, who was a member of SLAC’s EPAC at that time, perceived that I had a possible conflict of interest. By then, I was Secretary of the EPAC, and since the SLD as a SLAC experiment would eventually come before the EPAC for laboratory approval, these two responsibilities of mine could be construed as a de facto conflict of interest. I’m not sure it was Helen’s intent, but she actually did me a favor by giving me a good reason to withdraw gracefully from the SLD collaboration. For me, it was an easy decision. Dropping out of the SLD collaboration would give me more time to continue doing my own work, which, as I have told you, I felt was consistent with the fifty-fifty stipulation in my original job offer letter. And being Secretary to the EPAC was in good service to the laboratory.

Zierler:

What does EPAC mean?

Fryberger:

Experimental Program Advisory Committee. In an earlier incarnation it was called the Program Advisory Committee [PAC]. It’s essentially the same committee with a different name. Its main function was to advise the Director of SLAC on the proposals for beam time at SLAC—first advising Panofsky and then later Richter when he became Director.

Zierler:

David, what was the physics on your own that you wanted to work on at that point?

Fryberger:

I just wanted to continue the work on electromagnetism and magnetic monopoles that I had embarked upon years prior starting with Buck Rogers. But, in this regard, I should say that my decision to devote the bulk of my available personal time to my own work versus programmatic laboratory experiments or projects actually came about at a specific time shortly after the Ψ discovery—specifically in a meeting with Pief, probably about the lab operating schedule and the Mark I operation. Anyway, at the end of our meeting, Pief told me what a wonderful job he thought Roy Schwitters had done on the Mark I experiment. I was profoundly disappointed that Pief seemed to be quite unaware of the contributions that I and my other colleagues had made to that effort. So, I thought, “if I’m not going to get recognition for my work on these lab experiments, I should devote much more time to my own physics projects and ideas.” For example, when the Mark II detector was proposed, I was not a collaborator.

Somewhat later, I published a short paper[9] arguing that two highly charged magnetic monopoles would collapse to a minimum energy state with the possibility of forming a point-like spin ½ object. An obvious possibility for the scale of this point-like fermion is the Planck length. Then, shortly after that, I published my vorton paper,[10] that detailed a static solution to the symmetrized set of Maxwell’s equations as a configuration of electromagnetic sources and fields having an electromagnetic charge of Qv is approximately equal to 25.83 e, without any singularities and independent of the scale of the vorton. I also note that the electromagnetic charge distribution is spherically symmetric about the origin. This configuration is best described in a toroidal coordinate system which is scaled by a ring of radius a, lying in the x-y plane. QV is a large charge, but not equal to the Dirac charge QD is approximately equal to 68.5 e. The vorton configuration has two types of angular momenta—one associated with the usual circulation around the z-axis, and the other with a circulation about the toroidal ring of radius a, both of which are quantized in units of the reduced Planck’s constant , assumed to be plus or minus 1 in the ground state. As a result of these two types of angular momenta being nonzero, the vorton configuration is characterized by a topological or Hopf charge QH equals plus or minus 1, which renders it absolutely stable—except against annihilation.

Now the concept of a topological charge is a rather arcane concept, but it’s important, so I think it would be useful to make a few remarks here. When considering a configuration like the vorton, the topological charge QH is a property of the entire configuration, not any one part or region. And simply deforming the configuration one cannot eliminate its topological charge. A closed loop of string, like a rubber band, can be used as a simple illustration. If, before you join the ends of a length of string to form the loop, you tie a knot in it—an overhand knot, for example—afterwards, no matter how you deform the loop of string you can’t untie the knot without cutting the string. So, the simple loop and the loop with the knot carry different values for topological charge—in this case the overhand knot. By extension, one can also imagine loops with different numbers of knots or different kinds of knots. A Möbius strip with different numbers of twists would be another example.

In the context of a fully symmetrized electromagnetism, in which electricity and magnetism are treated equally, the electric and magnetic components of QV are given by the sine and cosine of an angle called the dyality angle ΘD, which can be any angle between 0 and 360 —that is, from 0 to 2π radians. So, the vorton is truly a general electromagnetic object. And in my view its features are ideal for the fundamental constituent of matter. Furthermore, I argue that symmetrized electromagnetism, described by Maxwell’s equations that are invariant under rotations of ΘD, is the fundamental physics interaction. I should mention here that the word “dyality” was coined by Han and Biedenharn.[11] Their set of dyality invariant symmetrized Maxwell’s equations, had magnetic charge and source terms, and also included an associated magnetic potential Mμ and magnetoelectric field tensor Gμv, as analogues to their electromagnetic counterparts Aμ and Fμv. Following their work,[12] and using space-time algebra, I was able to construct a Lagrangian from which one can derive not only the set of symmetrized Maxwell’s equations, but also the symmetrized Lorentz force equation. By the way, I should note here that the introduction of the magnetic potential Mμ eliminates the troublesome Dirac string that accompanies magnetic monopoles in the Dirac theory.

The full resolution of the discrepancy between QV and QD will require a better understanding of the renormalization of electric charge and the generation of mass. In this regard, I had already published two papers on the electromagnetic generation of leptonic mass.[13, 14] These papers don’t resolve this discrepancy, but I believe they offer a conceptual path to a resolution, which someday I hope I will have time to explore. I should mention here that Landau, who also studied this problem, thought that gravitation might play a role in quenching QED. Finally, this series of papers culminated with a paper on the structure of point-like fermions comprised of a spin ½ magnetically bound state of two vortons.[15] This bound vorton pair should visualized as a very small object with a physical extension in space—point-like, but not a point. Hence, for these objects, it is appropriate to describe their angular momentum using the symmetric top angular momentum functions , which can accommodate half-integral as well as integral angular momentum. Consistent with the elementary fermions that they are meant to represent—electron, proton, etc.— j = ½ is assigned. m = ± ½ is the projection of j on the z-axis, completing the usual picture of a fermion. And, m' = ±½ is the projection of j on the figure axis of the top. Hence, m' = ±½ can be viewed as an internal quantum number. In this role, I argued that m' is the source of isospin ½ observed in the leptonic and hadronic isospin pairs (e, ve; μ, vμ; p, n; etc.) of the Standard Model. When j is integral and m' is zero, the D functions become the well-known spherical harmonics used to described the orbital angular momentum of the electrons in atoms. In earlier versions of this magnetically bound vorton pair, as others had done before me, I proposed that one could just insert l = ½ into the Ylm, and go from there. But as Sid Drell and others pointed out to me, this leads to mathematical inconsistencies. I later came across the D functions, which do not have these inconsistencies and are thus clearly a much better description of the intrinsic angular momentum of spin ½ fermions. As far as I know, no other model or theory designates a source for elementary particle isospin. The fermions, in this model, are four component Dirac spinors, which means that the neutrinos, as isospin partners to the charged leptons, are also four component Dirac spinors. Now that the neutrinos, through the neutrino oscillation data, are known to have a small mass, this description makes a much more consistent picture, an idea already suggested by Pontecorvo. In addition, I claim that the number of states—which derive from different internal orientations of two types of intrinsic vorton angular momenta available as a feature of this magnetically bound vorton pair model—is consistent with a four generation Standard Model. The fourth generation is probably too heavy to be seen yet. And, through a dyality angle shift of ±π/2, it has the right number of available states for a straightforward extension beyond the Standard Model—into a magnetic sector, if you will.

This seems like a good time to make note of another decision point for me. Much of the theoretical work I just described was worked out with numerous consultations with Stan Brodsky and BJ Bjorken in the SLAC theory group. Their comments considerably clarified my thinking and significantly improved my published papers. Anyway, after one of our final discussions, even though he was quite familiar with the ideas that I was proposing, relevant to this model Stan said to me, “I don’t know, it just doesn’t feel right to me.” Needless to say, I was discouraged, even dismayed, by his remark. He seemed oblivious to the many good features of the magnetically bound vorton pair model. At that point, I concluded that the best path forward for me as an experimentalist was to try to find a monopole, hopefully a vorton.

Zierler:

Now, with magnetic monopoles, were you following Blas Cabrera’s work at all?

Fryberger:

In 1982 Blas Cabrera published his monopole candidate, which he had detected in a 4-turn superconducting coil. Magnetic monopoles generate a step in current when they thread such a coil. This paper generated even more interest in magnetic monopoles than did the 1975 paper by Price. But I want to emphasize Blas’s restraint. He never made any claim of a discovery, even though his signal was quite striking. While the signal size was exactly what one would expect for a Dirac monopole, Blas always described his event as a candidate. Blas’s restraint turned out to be quite prudent. After many years and many much larger experiments, the total accumulation of the time x area product for all of the subsequent experiments far exceeded that of the original Cabrera experiment, and no confirming events have been detected.

Now, related to this topic, though somewhat before the Cabrera event, SLAC, at my behest, had applied to obtain one of the surplus superconducting spin tipping magnets from Argonne National Laboratory. Our planned use for this magnet was to explore the detection of magnetic monopoles, in particular in their extraction from materials and their subsequent acceleration into a detector. For detection, we planned to use an electron multiplier tube. Our request was approved, and once SLAC received this surplus Argonne magnet, Steve St. Lorant’s group at SLAC, which had a significant cryogenic capability, converted the magnet from a cold bore to a warm bore, which was a much better configuration for our various monopole searches. The internal coil support system also needed some rework. I think that it had suffered some damage in shipment. The work that we did with this magnet was described in a plenary talk that I gave in San Diego[16] and a subsequent paper in Review of Scientific Instruments.[17] Of course, using a magnetic field to accelerate putative monopoles for the purposes of detection was not a new idea, but our large integrated B dl would extend our detection capability down to an extremely low magnetic charge. As an interesting reference point, I note that our 1.8 m of a 50 kG field would accelerate a Dirac monopole to an energy of ~200 GeV. Unfortunately, none of our work with this magnet resulted in any monopole events. However, our magnet did turn out to be useful to other groups at SLAC for testing the effects of large magnetic fields on special purpose rare earth magnets—used in compact permanent magnet quadrupoles, for example.

Zierler:

What extent were you involved with SLD after the initial work?

Fryberger:

Not at all, other than the support function I had with the EFD Cryogenics group. We furnished SLD with pure high-pressure helium gas, that was then liquified at the Collider Hall to be used in the SLD superconducting magnets. The helium boil-off was then returned to EFD Cryo for compression, purification, and reuse. In that way, I was part of the SLAC support staff, but I wasn’t part of the experimental physics aspects of the program.

Zierler:

And then what was the next major project after SLD?

Fryberger:

For the lab or for me?

Zierler:

For you.

Fryberger:

Well, [laugh] actually, somewhat before that time, I’d seen a paper in Nature,[18] that reported on some experimental work to explore the possibility that ball lightning [BL] might be associated with antimatter annihilation. Two gamma rays detectors were used: 20cc of Ge(Li) and 900cc of NaI(Tl). In particular, the authors, Ashby and Whitehead, were searching for an excess rate of the 511 keV positron annihilation line in coincidence with thunderstorm activity. Over a period of 12 months, they had been looking for high rates of gamma ray singles in these detectors, hopefully to be seen in coincidence with thunderstorm activity. They reported four such bursts of high rate events [A, B, C, and D] in both crystals. None of these four events had any obvious environmental explanation. But most interesting for me, they reported a pulse height analysis for event D, which had ~7700 counts in a peak quite consistent with the 511 line of positron annihilation. The problem was that while there was a thunderstorm outside the lab at the time of event D, no observation of BL had been made, because no one had actually been outside looking. It was a tantalizing event, but inconclusive. So, you look at this result, and ask, “Where did it come from?” Well, you don’t really know for sure. There was a thunderstorm outside, but no BL observation to go with it. It seemed possible, even probable, to me that it was due to gammas from the annihilation of positrons from a BL. But maybe it wasn’t. I should add here that at the time, I already had formulated the elements of a BL model based upon a specific configuration of many vortons produced by the lightning discharge, which, among other things, would lead to positron production. That BL formulation certainly added interest for me.

So, intrigued with the possibility that BL might produce positrons, and hence positron annihilation radiation, I decided to try to reproduce the Ashby and Whitehead experiment. To this end, I obtained two water damaged NaI(Tl) crystals, which had been discarded from a SLAC experiment. After a straightforward repair of these crystals—by simply removing the damaged NaI(Tl) material and then resealing in a smaller container—I had two good sized NaI(Tl) crystals for the small cost of a repair. Such crystals were fine for me—just somewhat reduced in volume. However, they couldn’t be used in the original experiment because they were now too short to match with the other crystals. So, with these two crystals, two phototubes, readily available electronics from the lab, and some assistance from members of Steve St. Lorant’s group, I had a functioning gamma ray detector—complete with an automatic computer data taking system.

But, of course, I couldn’t do the experiment at SLAC—on average, I’d say here in the Bay Area, we get less than one thunderstorm/year. After considering several other possibilities, I concluded that an excellent location to do this experiment would be at Langmuir Laboratory for Atmospheric Physics atop of Mt. Baldy in New Mexico. In their literature, they anticipated about 50 thunderstorms/summer. Assuming ~20 lightning flashes/storm, I had the possibility of observing perhaps as many as 1000 nearby lightning strikes in a couple of months. Being devoted to the study of lightning, Langmuir had the necessary technical infrastructure for such an endeavor, as well as living quarters on the mountaintop for experimenters. Relevant to the experiment itself, they had a good-sized underground Faraday cage, called a KIVA—named after a native Indian religious enclosure. The KIVA had a supply of electricity, which, for electrical isolation, passed through a couple of isolating transformers and other filtration. Thus, during a storm, the experimental apparatus and the experimenters would be safe inside. The KIVA could even take a direct lightning strike with impunity for the equipment and experimenters inside! For the experiment, I had built what I called a top hat electrical shield that mated to a flange in the flat plate roof of the KIVA. Thus, electrically this top hat was fully integrated into the KIVA as a Faraday cage. For the experiment, the NaI crystals were located inside the top hat where they were above the ground level and could detect gammas in the full 2? solid angle above the KIVA. And the top hat material was thin enough so that the 511 gammas could easily penetrate it to reach the NaI crystals inside.

That summer, during our 55 days of operation, we experienced only 29 nearby storms, and most of the associated lightning activity was too distant from the KIVA to be useful. In fact, from these storms, I estimated that only a dozen lightning strikes were near enough to be of interest—considerably fewer than my initial optimistic number of 1000. And associated with these lightning strikes there was no detectable 511 activity in the apparatus. Thus, there was no confirmation of the 511 BL hypothesis. However, due to the paucity of strikes, there wasn’t a strong negative result either. After all, BL sightings are far fewer than lightning strike sightings.

I presented this data at a BL symposium in Pasadena.[19] But my attendance at that symposium was important for another reason. At the symposium, I met Erling Strand, one of the other speakers, who is Norwegian and on the faculty of Østfold college. I don’t know if you know much about the Hessdalen Phenomenon—

Zierler:

I don’t.

Fryberger:

Well, before I went to the Pasadena symposium, I didn’t either. Anyway, as some background, Hessdalen is a small rural valley in the middle part of Norway, south-east of Trondheim—about 200 people live there. It takes about five or six hours to drive from Oslo to Hessdalen. In the early ’80s, at Hessdalen they were seeing strange lights in the sky, mostly at night but sometimes during the day—maybe 15 or 20 times a week. People in the field sometimes refer to such a localized activity over an extended period of time as a flap. Often, they were quite bright, and of different colors and also of different shapes. Sometimes they moved rapidly, sometimes very slowly—and sometimes had durations of over an hour. Though some might refer to these lights as UFOs, following Paul Deveraux,[20] I think that “earth lights” [EL] is a much more appropriate name—not UFOs. UFOs are something else, often connoting alien craft, the discussion of which I will gladly leave to others. In my view, EL are a natural plasma phenomenon, describable by a proper physics analysis. And I assert that it’s new physics, to be sure—like BL. In fact, I claim that EL and BL are different manifestations of the same physics—both involving a kind of vorton plasma. The EL are like a big brother to BL.

Anyway, back to the Hessdalen Phenomenon. Erling told me that as he was hearing more and more about these Hessdalen lights, he was thinking that someone should go investigate them. But the weeks went by, and the months went by, and nobody was investigating them. So, he finally concluded that, “I’m the one that must go and investigate.” So, he got a radar—a surplus army radar. He got magnetometers, photographic equipment, radio interference equipment, lasers, Geiger counters, and more. Then, in addition to some colleagues from Østfold, he got a lot of volunteers from Hessdalen to man his operation. With this instrumentation and manpower, the Hessdalen Project took data for about 5 weeks in Jan. and Feb. of 1984. In their Technical Report, they list 188 sightings, of which they assigned 135 to a known source, and 53 to the Hessdalen Phenomenon. That’s more than 5 EL sightings/week! It’s quite a good report—especially considering the shoestring budget that Erling had to work with. This, and related, information is easily accessible via a Google search on Hessdalen.

Shortly after the Pasadena symposium, Erling put together a workshop in Hessdalen, specifically on the unidentified light phenomena being seen there. And he invited me to give a talk. By then, I’d further developed a specific physics description of BL, and, as I said, I thought EL would also be properly described in terms of this same vorton plasma description. So, at Erling’s workshop I gave a talk on the vorton model for BL[21].

Zierler:

David, with all of this research on lightning, what were you most curious about? What were some of the main research questions that were driving this work?

Fryberger:

For me, it was ball lightning. Some time ago, M. A. Uman wrote a book on lightning in which he included an appendix on BL. From a theoretical point of view, salient questions on the nature of BL include: 1) where does the BL energy come from, 2) what force locally contains or localizes this energetic object, 3) how does one explain its mobility, and 4) what accounts its often long lifetimes? Though there are more recent books on BL, I would say that the best book on the observational aspects of BL to date is the one by Singer.[22] Lightning as a physical phenomenon is reasonably well understood. Simply put, lightning is large spark discharge from cloud to ground, or cloud to cloud—but let’s set aside the cloud-to-cloud discharge for the purposes of this discussion.

Now, BL, which is presumably generated with some probability by the lightning discharge, is not understood at all. Or more precisely, I should say that while there are many proposed explanations for BL, there is no generally accepted explanation for it. The vorton BL model envisions a crucial possibility for new physics to take place in and around the lightning discharge. Thousands of Amperes—comprised of both electrons and ions—flow for a short duration in a channel on the order of a centimeter in diameter, and, in accord with the equations of electricity and magnetism [E&M], this current generates a substantial cylindrical magnetic B field circulating around the current channel. After the current and B fields crest and begin to diminish is when the new physics processes can start to take place. Like the B field around any diminishing electrical current, as dictated by standard E&M theory, some of the field and its energy will collapse back into current source, or in the case of high frequencies, some will be radiated away, forming what is called the radiation field—radio antennas are a prime example of this radiation. But when current in the lightning discharge begins to diminish, unlike a diminishing current in a metallic conductor, a major aspect of this process will be the recombination of the electrons and ions that just before were flowing in the channel as electric current. Brought on by this abrupt disappearance of the current source of the circulating magnetic field, I postulate that a new phenomenon that I call “orphaned” fields takes place. As a result of this recombination process, I suggest that some fraction—not necessarily very large—of the energy in the magnetic field circulating around the current channel is converted into numerous pairs of overlapping electric vortons. These vortons have to be produced in pairs in order to conserve electric and Hopf charge. The scale a of these vortons, being a free parameter for vortons, will tend to match the radius of the current channel. In this way, some fraction of the energy in the B fields that was associated with the lightning current will then become associated with the toroidal circulation of the newly created vorton pairs. This field energy conversion, or transfer, is maximum when these pairs are oriented with their z-axes along the direction of current flow in the lightning discharge—the internal B fields of the toroidal circulation will then add, forming a reasonable match to the original cylindrical B field pattern of the lightning stroke, while the B fields associated with the vorton current circulation around the z-axis will subtract, and therefore cancel. Since the “mass” of the vortons is all electromagnetic, the conservation of energy is automatic. By the way, the individual vorton mass goes like a-1, so these vortons are extremely light.

So now, after the lightning flash, even with a low conversion efficiency, you’ve got an enormous number of vorton pairs per meter of lightning discharge channel, which, if the conditions are right, can collect and form a stable—actually metastable—BL. In the vorton model, the stable BL state, as described in my Hessdalen paper,[21] is a physical object comprised of a localized cloud of perhaps many trillions of vortons—that were originally generated by the original lightning discharge—in a state of coherent dyality rotation. By coherent dyality rotation, I mean that all of the vortons have the same dyality angle ΘD, and hence the same amount of electric and magnetic charge, coherently rotating in the electromagnetic plane with the same diangular frequency ωD = 2πf = D/dt. The units of ω are radians/s and of frequency f are cycles/s or Hz. This rotation goes like plus, north, minus, south, etc.—or the reverse. In this model, this dyality rotation would be the reason that BL is associated with significant electric and magnetic effects.

Spin-up is the name that I use to label this process of collection and localization of these many individual vortons into the localized macroscopic BL state of coherent dyality rotation. At present, unfortunately, spin-up is a weak point in the narrative for the vorton model for BL. It is straightforward to imagine that the very large electric potentials subsequent to lightning flashes could exert in a coherent fashion a suitable impulse of torque on the individual dyality angles of all of these vortons. And in a fraction of a second, under the right conditions, the individual dyality rotations could become coherent, and the vortons would collect into a localized cloud—observed BL typically ranges in size from about 1 cm to about 1 m. I see no in principle physics barrier to such a spin-up process. However, a proper computer simulation, though straightforward, is formidable because of the very large number of constituent vortons estimated to be in a BL. It’s easy to see that a BL with NV constituent vortons, the simulation would have to track on the order of pairwise interactions. Now, I estimated[21] that NV = 1.6×1011±1, which means that there wouldn’t be a computer with a large enough memory and a fast enough cycle time to do the simulation in full detail. It’s clear that to obtain a credible approximation, one would still have to be clever. While there is no reliable data for the spin-up time for a BL, there is some for cavity lights [CL]. In this case, data indicates[24] that, in the mean, CL spin-up is accomplished in about three video frames, or ~0.1 s. This is an enormous amount of time for any typical plasma physics process. Facing these problems, it’s not surprising that we have left this problem to be addressed at a later time.

Rather than go into other technical details here, I suggest that anyone interested should read my Hessdalen paper[21] to see how the vorton BL model can furnish answers to most of the outstanding questions about these phenomena. Also, some predictions were made. While there’s a lot more work to do, I feel that there are paths, both experimental and theoretical, to a much fuller understanding of ball lightning.

Zierler:

David, kind of a SLAC-wide question, what changes, if at all, did you detect when Richter succeeded Panofsky as director?

Fryberger:

As a preamble to the answer to this question, I would like to say that soon after I arrived at SLAC, I said to myself that national labs are great places to work. But sometime after that I realized that, no, it was SLAC that was a great place to work, and that it was Pief’s leadership that made it so. Pief valued everyone at the lab—from the senior physicists to the groundskeepers. Under Pief, we all had a sense of common purpose—under Burt, less so. But I should say that Burt was a good director. When Burt came in as director, he had in his mind making a linear e+e? collider, which came to be called the SLC [Stanford Linear Collider], and so he really devoted most of SLAC’s energy and budget to the SLC. That was a major change. But I think it was a reasonable change because we’d sort of mined out much of the experimental work that could be done using fixed targets and lower energy storage rings. And, so, that was a significant shift in focus for SLAC. From the point of view of physics history at that time, we became in a race with LEP at CERN, which was a high energy e+e- storage ring. Our common purpose was in general to explore higher energies, and in particular to find the Z0. We claimed to have won the race because we got the first few Z0s, but LEP with its much higher luminosity soon swamped us in the total number of Z0s produced. However, I should observe here that at the present time the notion of a much higher energy linear e+e? collider is under serious consideration by the high energy physics community. So, Burt’s legacy lives on.

Zierler:

David, in what ways were you paying attention to the rise and then the fall of the SSC in Texas?

Fryberger:

I would say that I was an interested bystander. I think you got a pretty good description from Roy Schwitters on that one. In my view, the main driver of the fall of the SSC was political. In simplistic terms, I attribute it to political budget grandstanding by members of the US Congress. At the same time, it was unfortunate that there was an increase in the estimated cost of construction, which helped enable the grandstanding. But I didn’t think the SSC would suffer that fate because it was sited in Texas. My reading of the politics was clearly in error.

A number of SLAC people—Roy Schwitters, Harvey Lynch, Rainer Pitthan, and Fred Gilman, to name a few—went there for its construction and presumed future operation. Roy and Fred stayed too long and weren’t able to return to SLAC. But a number of people did come back.

Zierler:

As we’re moving into the 1990s, when did you start thinking about retiring?

Fryberger:

Well, I was not unhappy with my situation. I had permanent employment with a paycheck from SLAC. I was doing my SLAC work as head of EFD’s Cryo group, Secretary of the EPAC, and Chairman of the Safety Overview Committee [SOC]. But with all of this, I still had some time to think about fundamental physics. And as we have discussed, over the years I had written maybe a dozen papers that considerably enhanced at least my own understanding of some basic physics questions. So, things were good for me and I wasn’t really thinking of retiring.

But then an opportunity came up for me to retire. It included a special arrangement for me to continue pursuing my own physics ideas and interests fulltime at SLAC, but without a SLAC paycheck—for budgetary reasons SLAC liked to reduce its headcount, especially in slots occupied by senior employees. To me the paycheck didn’t matter nearly as much as the chance to spend full time pursuing my physics interests—and leading the EFD Cryo group was actually taking more of my time than I would have liked. Burt orchestrated this arrangement, part of which was my emeritus designation. As a result, I could still have a SLAC office, some laboratory space in the Research Yard, a SLAC computer account, a Stanford email account, and a free A parking sticker on the Stanford campus. To further enable me to pursue my physics interests within the SLAC family, I joined Bob Siemann’s group, Advanced Accelerator Research Department [AARD], as an unpaid consultant. For budgetary support, Burt talked John into letting me spend some AARD money on travel, equipment, etc.—up to perhaps $10K a year, which is in the noise for a large group such as Bob’s. In return, I would go to AARD group meetings, consult with them on technical problems, and give them talks on the physics that I was doing.

So, timed to be after an upcoming EPAC meeting, I retired in October of 1998, after more than 31 years at SLAC. One of my first retirement projects was to try to make BL in the laboratory. Dieter Walz had actually built a prototype device that I had designed for that purpose. And I had even gotten a preliminary approval from the SOC. I add that for that particular part of the SOC meeting, to avoid a possible conflict of interests, I had Gary Warren, head of ES&H [Environment, Safety and Health] replace me as the Chairman of the SOC.

But I never got to my experimentation with my prototype BL generator because I soon found out about a recent superconducting accelerator cavity experiment done at Jefferson Laboratory [JLAB] located in Newport News VA. It was the topic of a short talk at the Proceedings of the Particle Accelerator Physics Conference in New York in 1999, a conference that wasn’t even on my radar as a source for new information about fundamental physics. The authors, Jean Delayen and John Mammosser, had looked inside a powered JLAB superconducting RF cavity—actually two different cavity configurations, a single cell and a five cell. And they had observed mysterious—quite baffling, actually—light phenomena. Perhaps the most baffling was the fact that small lights were observed that seemed to move around freely in the vacuum space but were unattached to the cavity walls—behavior never before seen in RF cavities. But this behavior very much resembled the observed behavior of ball lightning, which, of course, made learning more about their experiment of serious interest for me.

My finding out about these cavity light phenomena was quite fortuitous. I don’t know from your interviews how much accident has played in the career path of others, but in this particular case, it certainly did for me. I found out from John Weisend—an American physicist employed in Germany at the Hamburg laboratory called DESY—who had recently been hired to join EFD. He was just tidying up some loose ends at DESY in preparation for his joining SLAC, when John Mammosser—on a European trip to evaluate some possible vendors—stopped in Hamburg to visit DESY. While there, Mammosser gave a talk on their recent cavity lights experiment, and, fortuitously for me, John Weisend happened to go to the talk.

Briefly, the motivation for the JLAB cavity lights experiment was to try to improve their understanding of the sources of the quenching problems of superconducting RF [SCRF] cavities. The SCRF JLAB cavities are made of high purity niobium and run at a temperature of 2K for optimum operation. 1.5 GHz is the working frequency of the JLAB accelerator. Quenching of SCRF cavities is attributed to two major causes: 1) runaway field emission from imperfections or contamination on the interior surface of the cavity and 2) resistive heating due to RF currents in the niobium. Unlike with direct currents, superconductors have some electrical resistance when conducting RF currents. And if the heating is too great at some location, the superconductor at that location will become a normal conductor. Once there’s a patch of normal conductor in the cavity, it quickly spreads and the cavity quenches, completely losing its power to accelerate electrons. Though the cavities generally recover quickly, clearly such behavior is most undesirable.

Jean and John had mounted a small CCD video camera looking through a view port into an operating cavity, but, of course, without a beam. It’s challenging because such cameras won’t operate at 2K. The camera temperature problem was solved by enclosing it in a vacuum-insulated container, but heating the camera inside up to room temperature. Because of the vacuum insulation for the camera, a negligible amount of its heat is transferred to the liquid helium bath, which surrounded both the cavity and the camera housing. The camera’s output was a standard black and white video at the NTSC 30 Hz frame rate that one could view on the commercial TV monitors in common use at that time—before flat screens. Since each frame consisted of two fields, video data was produced at a rate of 60 fields/s and each field was a time exposure for one 60th of a second. Thus, a field-by-field analysis of the video data would furnish a complete record of the motion of these small light emitting objects. These luminous objects moved freely in the cavity vacuum away from the cavity walls, often in elliptical orbits about the cavity axis—and, on occasion, they would actually bounce off of the walls, and then continue on their way. One got the distinct impression that the they were obeying Newton’s laws of motion under the influence of the EM fields in the cavity. The pressure inside a SCRF cavity is extremely low. Though one can’t actually measure it directly, due to cryopumping at 2K one expects it to be on the order of 10-10 Torr, or better—unless there is a leak.

After hearing the Mammosser talk, John Weisend said to himself, “Fryberger would be interested in this.” John emailed me even before he left DESY. So, I consider John’s going to that talk a lucky accident for me because otherwise I probably never would have found about what we now refer to as cavity lights [CL]. After receiving John’s email I called Mammosser, and was invited to come to JLAB to give a talk on ball lightning—that was a time when labs and lab scientists had much more freedom to define the details of their physics program. Getting together after my talk, Jean, John, and I agreed to collaborate on further experimental runs at JLAB. The cavity light phenomena exhibited behavior that was quite similar to that of ball lightning—bright flashes of light, often followed by small luminous objects [MLOs, as we later called them] moving inside the volume of the cavity, but not in contact with the walls—often in well-formed elliptical orbits. Recall that BL seems to form as the result of a lightning flash. I thought—and I still think—that it’s the same physics as BL. As a result, CL has turned out to be a major focus of my post retirement physics activity.

I collaborated as an individual from SLAC with significant contributions from my SLAC colleagues Perry Anthony, John Weisend, and Zen Szalata. Bill Goree, from 2G Enterprises in Pacific Grove CA, helped us with efforts to make a SQUID magnetometer—although we ultimately ended up using a commercial fluxgate magnetometer. As it turned out, John Mammosser did most of the work at the JLAB end, with some support from the JLAB technical staff. He had a dedicated 1.5 GHz single cell cavity, and a stand to support the cavity with its attached camera enclosure, its instrumentation, RF coaxes, etc. The stand, cavity, and instrumentation were lowered into a test Dewar, which was sealed, pumped, and purged before filling with liquid helium. As I recall, there were eight or nine such Dewars available at the JLAB cavity test facility. Also, it had its own dedicated helium liquefier.

After filling the Dewar with liquid helium at 4K, the helium pressure in the Dewar is pumped down to around 20 Torr, lowering the cavity temperature to 2 K. After reaching 2 K, we had about an hour to do an experimental run before too much helium boiled away. Our collaboration did eight runs after the original two, with our tenth, and final, run in August of 2006.

I should mention here that in an effort to try to understand the orbiting MLO phenomenon, I developed what I called a Small Particle Model,[23] which showed that a small conducting sphere, with a mass obeying Newton’s equations of motion, and electromagnetically interacting with the cavity electric and magnetic fields—via the standard Lorentz force equation—would find itself in a cylindrically symmetric harmonic oscillator potential well, which has, as solutions, elliptical orbits centered on the cavity axis. Using this model, it is straightforward to derive the orbiting frequency in terms the maximum accelerating electric field in the center of the cavity, the resonant frequency of the cavity, and—interestingly enough—the mass density of the sphere. The radius of the sphere doesn’t enter into the final formula for the orbital frequency because the force constant and the mass of the sphere are both proportional the cube of the sphere radius, and they enter into the harmonic oscillator force equation as a quotient. Thus, inverting this relationship, one derives the mass density as a function of the cavity parameters and the inverse square of the observed orbital frequency. I considered this a significant first step because there were numerous examples of elliptical orbits already in the first two runs. There was a practical question, however. Since the range of observed orbital frequencies were from about 5 to 80 Hz, a ratio of 16, this implied that the range of mass densities of the orbiting spheres was 162 = 256. What material could offer such a range? This was compounded further by the deduction that the lightest material—in the 80 Hz orbit—was estimated to be about 1.66x10-3 g/cm3, roughly the density of air at STP, seriously complicating the question of what an MLO could possibly be made of. In addition to this question about the MLO density, it was clear that the model itself was incomplete: the force calculation that led to a stable radial motion at the same time led to an unstable axial motion. A possible answer was found later. As more complete simulations indicated, this axial stability problem could be solved through the introduction of magnetic charge, an essential feature of the ball lightning model.[21]

It took me a couple of years to write up experimental cavity lights results. And I want to say here that my good friend and colleague Mike Sullivan made significant contributions to the analysis—especially with regard to the video data in runs 7 and 10, which exhibited quite astounding, yet different, MLO behavior. An analysis and discussion of the data from the first nine runs has been published.[24]

Major features of Run 7 were the fact that MLOs could form what might be called stable, long-lived macro-molecules comprised of as many as 7 visible well separated individual MLOs. Such molecules were centered on the axis near or on the cavity equator, and well away from the cavity walls. The MLOs were at rest or moved slowly clockwise or counter-clockwise, rotating as a quasi-rigid but stable entity, lasting many minutes. Sometimes, some kind of a glitch in the cavity would cause these MLOs to vibrate with respect to each other—at around 10 Hz. This motion was damped out after a few cycles. The amplitude of the relative motions was on the order of, but less than, the MLO separation distances, which to me indicated that their relative motion could be analyzed in terms of a set damped harmonic oscillator potential wells between pairs of MLOs. Or perhaps a general potential function of all of the MLO coordinates could be used. Now, both BL and EL have been observed exhibiting similar macro-molecular formations, arguing that the they are governed by the same physics, even though they are in significantly different environments. For BL, I refer to the book by Singer,[22] and for EL just Google 'Kenneth Arnold' to find out about his celebrated 1947 sighting of 9 flying saucers travelling in linear formation at very high speed. Deveraux[8] also has some EL examples. I may be out on a limb here, but I claim that we are seeing strong, even compelling, evidence of new physics.

Zierler:

Now, David, what do you mean “new physics”? In terms of beyond the Standard Model?

Fryberger:

What I mean by new physics are phenomena that established physics can’t explain—often features that are put in by assumption. So, the new physics that I am talking about is in terms of the Standard Model and also beyond the Standard Model—absolutely. As far as the Standard Model is concerned, the magnetically bound vorton pair model for point-like fermions explains many features of the Standard Model that are presently put in ad hoc. The physical scale here is assumed to be very small—quite possibly at or near the Planck length. As far as beyond the Standard Model, there are two categories here. The most direct is an extension of the Standard Model into a magnetic sector, which I mentioned before. The other “beyond the Standard Model” has to do with macroscopic vorton physics. First there was ball lightning, sightings of which go back for centuries. But no one could understand it or explain it in terms of established physics. After the denial phase subsided, the explanation part was put in the category as too complicated for present understanding—but presumably, given enough work, understandable at some future time. Then there were earth lights, which also have been observed for centuries. Again, after the denial phase subsided, no one could understand it or explain it. Similarly, any explanation in terms of established physics was put in the same category—assumed to be possibly explicable, but too complicated for now. You already know my view of the alien space craft explanation. And now we have cavity lights, which exhibit many of the same inexplicable behavior as that of ball lightning and earth lights. For me, while their physics explanation is indeed complicated, all three, CL/BL/EL, are understandable in terms of new physics, as my analyses and simulations have indicated. I interpret these three phenomena as different manifestations of the vorton ball lightning model that I presented at the Hessdalen Workshop[21] over 25 years ago.

And if this is true, that means that vortons exist in nature as fundamental electromagnetic entities, obeying the set of symmetrized Maxwell’s equations, and the symmetrized Lorentz force equation, with the same specified magnitude of electromagnetic charge. And each vorton has its own dyality angle as a continuous degree of freedom. Vortons also possess two types of intrinsic angular momenta, which—as I have already described—lead to the individual vortons carrying a topological charge, rendering the vorton configuration stable against decay. While the size and mass of each vorton will depend upon the circumstances of its creation, they still all have the same configuration—and magnitude—of electromagnetic charge, internal angular momenta, etc. That is, at the fundamental level, there is only one kind of vorton with the several universal features that I just enumerated.

Following this line of argument, it seems to me to be quite natural and appropriate to imagine that vortons could be used as the single fundamental entity in the structure of elementary fermions—an idea that I developed and published over 35 years ago.[15] Think of it—a single fundamental particle and a single fundamental interaction. As such, I believe that this model could furnish a firm foundation for QED, the weak interaction, QCD, etc. to come into being as emergent physics. As I said earlier, the vorton model for point-like fermions has the right number of independent stable states to accommodate a four generation Standard Model, but the fourth generation is probably too heavy to have been seen yet. The Standard Model fermions—which I think of as comprising an electric sector—are at their fundamental level magnetically bound vorton pairs carrying an intrinsic orbital spin ½, with an integral unit of electric charge, 0 or ±1, which, after renormalization, equals the electronic charge. By the way, since in the vorton model the fundamental leptonic and hadronic fermions have the same underlying structure, it is not surprising that they would also have the same magnitude of electric charge—furnishing a natural answer to a significant mystery of physics.

As I mentioned, this vorton particle model has a natural extension to include a magnetic sector, formulated as electrically bound vorton pairs in an orbital spin ½ state. I refer to these fundamental spin ½ magnetic fermions as magneticons. And their basic charge magnitude on these magnetic fermions would be the same as that of the electron—except magnetic. I presume that the particles of this predicted magnetic sector would also have four generations of particles as counterparts to those in the electric sector. Of course, their masses would be greater, even much greater, than their electric sector counterparts. In addition, I have formulated an analysis that extends QED into what might be called QEMD,[25] the quantum dynamics of electric and magnetic charges, which also contains the cross interactions of electric with magnetic charges. In this formulation, once above production threshold, the pair production cross section of magneticons in a e+e– collider would be the same as the muon pair production cross section.[25] By analogy, above threshold, the electric and magnetic fermions should also have the same Drell-Yan production cross section. It follows that since muon pairs are produced copiously, then magneticon pairs—above threshold—should also be produced copiously. And the major Large Hadron Collider [LHC] detectors, ATLAS and CMD, should be able to detect some of the low-lying magnetic states, if they would just look.[26] If magneticons are light enough, they could even be seen at the BelleII detector at the e+e– ring at KEK in Japan.[27]

Zierler:

Now, if it’s physics beyond the Standard Model, of course, this would generate a lot of excitement, easily as much as the g-2 muon anomaly that’s happening at Fermilab. What is your sense of why vortons are not more well-known at this point?

Fryberger:

Well, there are only two of us—Mike Sullivan and me—talking about vortons, and, in effect, no one is listening—by listening, I mean thoughtfully listening. I include Mike here because after he became involved in the analysis of the video data obtained from the JLAB experimental runs—in particular Runs 7 and 10—he was able to see first-hand that indeed this data provided a serious challenge to established physics. Thus, as a result of the MLO data, as well as more recent corroborating data, his view that the vorton model constitutes a valid working hypothesis for physics at the fundamental level was strongly reinforced.

In the past fifteen years, we have given a number of talks to the AARD group, as well experimental seminars to the lab as a whole. In addition, I have given some talks to groups at some other institutions, for example, JLAB, BNL, OLD Dominion in Norfolk VA, and the Navy Postgraduate School in Monterey CA. Mike, who because of his expertise in understanding of backgrounds in particle detectors, sits on many review committees. These committees meet at numerous laboratories all over the world, and when there, Mike regales his colleagues, when time permits, with vorton ideas and analyses. And beyond that, we have talked to as many of our local colleagues as would give us some time. While some of them have listened to what we had to say, and have given useful suggestions about one or another aspect of our analysis—or given us useful references—no one has taken up our challenge to provide a coherent explanation based upon established physics. Perhaps in the back of their minds is the notion that if they had the time and the interest they could surely come up with one. But for them, it isn’t a priority at the moment.

Let me add here an anecdote about Pief. Well before I retired, he asked me if I had any books about ball lightning so that he might better understand it. I loaned him my copy of Singer and another book. The books were back in my in-box in a couple of days, too quickly for a thorough reading, I thought. A few days later, when I happened to meet him in the hallway, I asked him if he had actually read them. He replied, “Yes.” I then said, “I think it’s new physics,” to which he replied, “No, it’s complicated physics.” I wish that I had been able to show him then our JLAB MLO data, but that wasn’t available until many years later.

Now when we first came to SLAC—I as a newly minted PhD, and Mike as young PhD student—there was no generally accepted paradigm to organize and explain the vast, and growing, array of particles that one now finds listed in the Particle Physics Booklet, published every two years by the Particle Data Group. The quark model was new, and a number of other particle models were also being introduced and discussed. At that time experimentalists and experimental groups were willing to entertain unproven hypotheses, and even to search for supporting evidence—especially if the search wasn’t too expensive or time consuming. Many of those searches were carried out at SLAC.

But, why haven’t the seeds of these vorton ideas grown in the soil of the physics community—why isn’t there a school of thought based on vortons? I think that perhaps the idea of the vorton, as a new generalized electromagnetic particle with unusual and unfamiliar features, is just too far off the beaten path. Why go there? Certainly, as the way things stand today, a young physicist won’t find a promising career so far off the beaten path. And an older established physicist will already have a well-developed career path—a perfect catch-22 situation.

Now, as a more general answer to your question, the fact that there is the beaten path which serves as a stern guide to physics research can be viewed as a problem related to the present sociology of today’s physics. While I, being an experimental physicist, have little in the way of credentials to mount a serious criticism of this aspect of the string theory program, I can cite some who do. For example, Lee Smolin[28] sees the string theorists in a state of groupthink, which tends to freeze out competing ideas. Similarly, Roger Penrose[29] describes it as a bandwagon effect, which sweeps in new members to join the bandwagon, lest they be left behind.

These effects also influence the experimental community, especially those doing experiments at large facilities such as the LHC. We have approached friends and colleagues who are members of ATLAS, and they are often quite interested as individuals. But the experimental program isn’t guided by the ideas of individual members—even senior members. Rather the program is steered by the wisdom of program committees, which usually seek input from or have a significant membership from the theoretical community, members who just might be afflicted with Smolin’s string theory groupthink. Relevant to this point, I note that Peter Woit in his book, Not Even Wrong, devotes an entire chapter to “The Only Game in Town.”

There is an additional practical problem with detecting pair produced magneticons in these 4? magnetic detectors. Magneticons with magnetic charge 1e—the same charge as that of the electron, but magnetic—move differently in the magnetic field of the detector. They make parabolas instead of helices. So, you also have to have a tracking program that tracks parabolas in addition to helices. Hence, magneticon tracks won’t just fall into your lap. While it’s not an enormous amount of work to code in magnetic tracking, it does take some dedicated effort. But the threshold to expend that effort—to enable what I consider a fairly straightforward extension of scope—seems to be too high for those groups to overcome. As I see it, the will to do the coding is the problem. No new apparatus would be required—although new trigger criteria would have to be developed. If they were actually motivated to do the additional coding and triggers, it certainly wouldn’t be a major project.

Now, back to your original question—or rather to the preamble to your question—you mentioned that the experimental and theoretical g-2 discrepancy revealed by recent measurements at Fermilab is a possible indication of physics beyond the Standard Model. I agree, and on that topic, it’s appropriate that I mention here that in my paper about magneticons,[25] I showed that if you add leptonic magneticon loops into the higher order correction terms to the muon magnetic moment, this discrepancy will be diminished, though not by very much. However, it seems to me that the additional corrections associated with the hadronic part of the magnetic sector just possibly might be large enough to eliminate the present discrepancy. I argue that this possibility should motivate the experimental community to actually search for magneticons, and that the major detectors at the LHC would be a good place to start. Finding a magneticon, presumably, would initiate a more general theoretical interest.

Zierler:

David, of course, so much of the frustration around not seeing physics beyond the Standard Model is that we don’t have the experimental or observational facilities to do so. Are you saying that the vorton can be observed with the instrumentation currently available to us?

Fryberger:

My answer to that question is a definite yes. In fact, as I’ve already described to you, my interpretation of the data we obtained at JLAB is that we have already seen manifestations of the vorton, and vorton physics—in the form of the orbiting MLOs. The most unusual orbit is the 40 s 40 Hz precessing elliptical orbit in Run 10. As it precessed, this orbital ellipse exhibited a slow rocking motion of its major axis through about ±35 degrees with a rocking period of ~5s. As the orbiting ellipse rocked to its extreme it one direction it would become a line passing through the axis of the cavity, which carries no orbital angular momentum. Then, after the rocking reversed direction, with the orbital ellipse opening up again—but remaining centered on the cavity axis—we observed that the direction of orbital revolution had also changed sign too. At the other extreme, the orbital trace would again become a line, which then began to precess back in the other direction. So, the revolution of the MLO about the cavity axis and the precession of the major axis of the orbit was observed to always be in the same sense. The reversal of orbital direction—twice during each rocking period—clearly doesn’t conserve the angular momentum of the MLO in orbit about the cavity axis. There must be some variable torque about the cavity axis exerted on the MLO as it orbits. And I claim that established E&M—that is the standard Lorentz force—coupled with Newtonian mechanics, offers no mechanism for such a torque. Consequently, it follows that this rocking orbit presents a serious challenge for established physics—but much less so for the vorton BL model.

Let me review a few details here. Using realistic electromagnetic fields in the cavity and a simple electromagnetic model for the orbiting object—a small conducting sphere—I showed that the small sphere interacting with the cavity fields was, in effect, moving in a cylindrical potential well, which has elliptical orbits centered on the cavity axis as standard trajectory solutions.[23] But this success for established physics has some serious flaws. For one, this solution for stable elliptical orbits entailed an axial instability for the orbit. But the 40 s orbit was obviously axially stable. Another problem with the small particle model using established physics is that a precession is predicted—but at a much larger rate than that actually observed and well outside any possible errors in the input parameters. And finally, the rocking motion, with its periodic reversal of the orbital angular momentum can find no foundational basis unless the cylindrical symmetry of the cavity is somehow broken in a way that introduces some other kind of new physics. We measured the cavity at the equator, and it was well within tolerance to be described as cylindrically symmetric. Now, if one includes some Dirac monopoles—a couple hundred, say, as a component of the MLO—and includes magnetic charge in the Lorentz force equation, the axial stability and the rate of precession problems can be resolved—magnetic charge repels its image in a superconductor. But this is magnetic charge and new physics. However, the rocking motion is still a problem. Dirac monopoles can’t induce the rocking orbit.

Using the vorton model with standard EM fields—and applying the symmetrized Lorentz force equation, that is, one including magnetic charge—in the resonant 1.5 GHz mode appropriate to a JLAB cavity, and with an assumed number of vortons to be NV = 6 x 106, I successfully simulated the 40 Hz orbit in size, shape, and frequency—including its rocking motion. These cavity EM fields are cylindrically symmetric, as is the cavity itself. But in this simulation, the cylindrical symmetry of the problem is broken by assuming that a small transverse component of the earth’s field leaked into the cavity interior. Since niobium is a type II superconductor, one expects that the Meissner effect is not perfect, and that some fraction of the earth’s field will penetrate the cavity walls. While the magnitude of the interior magnetic field was assumed—you can’t measure it—it was gratifying to find that the vector direction of the measured transverse earth’s field outside the cavity was the same as the mean of the observed rocking axis of the 40 Hz orbit, as required for a consistent simulation. While the effects of this transverse field are too small to meaningfully interact with the electrical charges and currents in the MLO, the internal transverse dc B field exerts a transverse force on the magnetic charge of the orbiting MLO. This results in a varying torque about the axis of the cavity—the rotating dyality angle of the MLO, which for this simulation I assumed to be in synchronism with the orbital revolution, comes into play here. The moon exhibits a similar synchronism of two otherwise independent rotational frequencies. Just to be clear, I repeat that because the Dirac monopoles have a fixed magnetic charge, they cannot induce the rocking motion of the orbit—even with a transverse magnetic field.

Now, the simulation solved all of these problems—orbit size, shape, rocking precession, and axial stability—without the Dirac monopoles. I only added vortons in coherent dyality rotation, features already postulated in the original BL model.[20] At the same time, I assert that a successful simulation of all of the observed features of that orbit cannot be made using established physics only. With respect to earth lights, I would say the same thing—we have seen vortons and vorton physics—except that unless there would be another EL flap like that in the ’80s at Hessdalen, it wouldn’t be very productive to try to mount an experimental program. On the other hand, such flaps do happen from time to time, so it might be a good idea to prepare a mobile set of instruments that could, in a timely way, go the EL flap region to gather data, like Strand did at Hessdalen.

While we’re talking about the Hessdalen data, there’s a very interesting aspect of that data that I haven’t mentioned yet, and this would be a good time to tell you about it. I think it deserves serious attention. As I told you Strand had obtained an X-band radar with a 360 degree sweep—at 25 rpm—as one piece of his instrumentation. The radar output was displayed on the usual PPI scope, which enables the operator to track moving targets, sweep by sweep, as they move along outside. They reported 36 radar recordings. Now, in general, the EL gave very large radar returns—some were accompanied with a simultaneous visual sighting and some not. But what I want to bring to your attention is there were several occasions in which a sweep did not give a radar return in its expected location even though there was a continuous simultaneous visual sighting of the EL. For example, from sweep to sweep the radar would give a return every other time. But the visual observer continued to follow the EL target as it moved along outside without any evident change. And this happened more than once. My interpretation of these events is that they are not due to an atmospheric condition or a radar malfunction, but rather to the nature of the EL themselves. In the vorton model, an EL is a localized collection of a very large number of vortons in a state of coherent dyality rotation. The physics of this coherent dyality rotation is what gives rise to the force that contains the EL as a discrete localized electromagnetic object—I might add here that it also furnishes the binding force for the macro-molecules. Yes, it’s new physics. As I described it before, in the course of one cycle of dyality rotation, the electromagnetic charge of the EL would go plus, north, minus, south, or the reverse. A reasonable estimate of a typical EL dyality frequency is given by a 10 second time exposure photograph in which an EL exhibited a 7±2 Hz oscillation, or perturbation, along its trajectory. In the vorton model, this trajectory oscillation would be caused by the force associated with the oscillating magnetic charge of the EL interacting with the local earth’s field—down when north and up when south, etc. Hessdalen is quite near the magnetic pole, which, counterintuitively, actually is characterized by a south polarity. By definition, when we refer to the north pole of a magnet, we really are saying the north seeking pole.

To continue, the explanation in the ball lightning model for the missing reflections is that this null radar return occurs at the four specific dyality angles in one dyality cycle at which the magnitudes of the electric and the magnetic charges are equal, that is when ΘD = ±45 degrees and ±135 degrees. If you think about the process of electromagnetic reflection in an electric conductor—and also at the same time its analogue in a magnetic conductor—you should be able to convince yourself that the reflected wave off of the magnetic charge component will just cancel the reflected wave off of the electric charge component. The missing radar blips, then, are just due to the particular angle of the EL dyality rotation when the radar sweep goes by. It would be a good exercise for a student to demonstrate this. A prediction of this analysis is that if one were to lock a tracking radar onto an EL, and observe its radar return in what is called an A-scope, then one would observe a radar return signal with its magnitude oscillating at four times the EL’s dyality frequency, that is, a radar return having four maxima and four minima in one full dyality cycle. To quote my late thesis advisor, Val Telegdi, “What else could it be?”

Zierler:

[laugh]

Fryberger:

[laugh] As I said before, he won more than one argument with that question, but I don’t know if it will be persuasive in this case. Good A-scope data would be better, I think.

Zierler:

David, did you coin the word “vorton”?

Fryberger:

Yes.

Zierler:

Why? Why that word?

Fryberger:

Well, as I have described to you, after my work with Buck Rogers, my motivation was to try to find an object which was characterized by magnetic charge, a monopole, if you will, but not necessarily Dirac’s monopole. Considering that symmetry properties were deemed very important in physics theories, I wanted to get as much symmetry into this monopole as seemed reasonable. Following this goal, I had constructed a static semiclassical solution to a set of symmetrized Maxwell’s equations, which is characterized by a number of specific features, none of which are assigned ad hoc, but rather are derived from the structure of the configuration itself.[10] The fact that Maxwell’s equations are invariant under the operators of the conformal group was a strong enabling factor in arriving at this maximumly symmetric form for this electromagnetic structure, which turned out to be best described in the toroidal coordinate system—scaled in size by a ring of radius a in the x-y plane. In its ground state, this configuration carries a specific magnitude of the total electromagnetic charge QV, which is approximately equal to 28.53 e. The electromagnetic charge density is spherically symmetric, continuous in 3-dimensional space, and without singularities. The exact equation for the magnitude of the electromagnetic charge is given by Qv2 / ℏc = 2π √3/5, which is about 4.8. As part of the structure of this configuration, it also carries two different kinds of internal angular momenta, one around the z-axis, and one around the ring of radius a. For the ground state, each of these angular momenta is semiclassically quantized independently to be plus or minus one unit of ℏ. This configuration also has a topological, or Hopf, charge QH = ±1, depending upon the relative sense of the two types of angular momenta. The mass of this object goes like 1/a, but QV is independent of a, and also of the dyality angle ΘD. Therefore, the charge QV is allocated to be electric and/or magnetic by ΘD —specifically QV sin ΘD and QV cos ΘD. The question was what to call this configuration, or object, of dual electric and magnetic charge.

Now, Schwinger had proposed a model for hadrons based upon a particle he called a dyon—a name he invented—which are objects carrying both electric and magnetic charge. It was published in the journal Science in 1969. He proposed that the hadronic fermions—for example, the proton and the neutron—were composite particles made up of three magnetically bound dyons, which themselves also had fractional electric charge, 2/3 and -1/3 of the unit e. I note that he managed to write this whole paper without ever mentioning the word “quark,” which I thought was quite an achievement. Although he did offer his own argument for arriving at these electric charge assignments.

Zierler:

[laugh]

Fryberger:

To be clear, quarks were certainly well-known by then. Gell-Mann and Zweig had published their papers in early 1964—Gell-Mann about quarks, and Zweig about aces, deuces, and treys. But, of course, electrically charged quarks or aces, deuces and treys wouldn’t properly characterize my electromagnetic object, which could carry both electric and magnetic charge. So, I was thinking that I might call it another kind of dyon—but not Schwinger’s dyon. Now at about that time visiting SLAC there was a recent PhD theoretical physicist from MIT named Philip Yock. And Phil was also familiar with the building of particle models—he was working on his own model, involving what he called sub-nucleons. Anyway, he said, “Don’t call it a dyon. Think of something else.” So, as I considered the major features of my solution to Maxwell’s equations, I realized that one could characterize the internal rotations of the electromagnetic charge around the z-axis and around the ring of radius a, as vortices. The topological charge associated with these two types of rotations is what gives it stability. And being semiclassical, I had, in effect, a quantized vortex, or a vorton for short. In physics, the beginning of the word describes an important aspect of the object, and the suffix “on” is often tacked on to connote a particle—such as electron, proton, or neutron. At the time I knew of no other usage for the word vorton, so that was my choice. These days it is also used as a label for a hypothetical cosmological string loop stabilized by an angular momentum along its length. To the best of my knowledge, I used it first, but at present the cosmological vorton is far better known.

As I have said, the electromagnetic charge on this object is large: . For the bound vorton pair structure to model point-like fermions, it’s important that this quantity is greater that 1.

Zierler:

David, why is that significant that it’s greater than 1?

Fryberger:

To understand my argument, let’s first consider the hydrogen atom, an electron and a proton bound by their equal and opposite electric charges of magnitude e. This electric coupling, or binding, is characterized by the fine structure constant , which is quite small, about 1/137. The size of the hydrogen atom, called the Bohr radius, depends upon a and the masses of the electron and the proton—mostly that of the electron. Now the Bohr radius can be estimated by a straightforward minimum energy argument, where there two terms to the energy of the hydrogen atom: the Coulomb binding energy, which is negative [V = -e2/r], and the Heisenberg localization energy, which is positive. This localization energy is a quantum effect deriving from the Heisenberg uncertainty principle, which states that if you localize an object to within a distance, Δx, it will acquire a certain amount of momentum Δp. And the smaller Δx, larger must be Δp. In mathematical terms, to within a constant of order unity, the Heisenberg uncertainty principle can be written as ΔxΔp is greater than or equal to ℏ=h/2π, where h is Planck’s constant. For the hydrogen atom, which is nonrelativistic, the kinetic energy T is given by T = p2/2m. Now, using the Heisenberg uncertainty principle to use Δx instead of Δp in the equation for T, the size of the hydrogen atom at the minimum of V + T is just the Bohr radius, showing that a very simple, but straightforward, approach using quantum mechanics can tell you the size of an electromagnetically bound quantum object, that is, the hydrogen atom. Taking this same approach to magnetically bound vorton pairs, when you substitute the vorton charge QV for the electron charge e in the formula for ?, you find that as you consider larger and larger QV, the size of the bound pair will shrink, and at a certain point the Coulomb binding force will totally overcome the Heisenberg localization force, and the magnetically bound object will collapse. By the way, using a relativistic expression for T doesn’t help either. This threshold of collapse occurs when exceeds 1.[9] So, the bound vorton pair will collapse to some small limiting size. I, as well as a number of physicists, think that a good candidate for that small size limit is the Planck length, and that gravitation will somehow enter the picture. Of course, there needs to be a lot of work to properly understand the details at this very small scale—more new physics.

Zierler:

What do you think it’s going to take for research on vortons to catch on, and, best-case scenario, what might we learn in physics as a result of increased study of this?

Fryberger:

Well [laugh], I don’t know what it’ll take for the vorton concept to catch on. My SLAC post-retirement support for my own physics interests was terminated in 2006 when it was deemed by upper management that Richter’s special arrangement had gone on long enough. He had retired in 1999, so it’s true that I had a pretty good go at it. Actually, I should note that there was also an upside to this loss of SLAC support. It gave me a chance to change my focus from taking cavity lights data to analyzing and writing up the data that we had already accumulated.[24] By the time I had gotten to Run 7, while discussing with Mike (over lunch) the data and what I thought its implications were, Mike offered to help—an offer that I couldn’t refuse. So, he began analyzing the video frames of Run 7 and then Run 10. And as I have related to you, the analyzed results were spectacular to say the least—clearly a strong indication to us that we were seeing new physics. Since then, Mike and I have given a number of talks in a number of venues, both on vortons and on magneticons. It’s true that there is interest, but it seems to die out rather than grow. To make an analogy to a concept we’ve heard a lot about lately, our vorton virus seems to be encountering some kind of herd immunity.

In 2008 the lab started an LDRD [Laboratory Directed Research and Development] program. I didn’t submit a cavity lights proposal that first year because I wanted enough time to put together the best possible case. I enlisted a number of collaborators, and we submitted a proposal in 2009, but it was turned down—so many other worthy proposals and so little money. In 2010, I thought that there was a much better chance of approval because the lab explicitly included “discovery potential” as an important criterion for the evaluation of proposals. So again, we proposed that we continue the earlier JLAB work. I thought to myself, surely a proposal to further investigate the very baffling cavity lights phenomenon and the prospect of new physics would have significant discovery potential. The analyzed videos of the JLAB data, in particular the MLO orbits, would surely be convincing. But the time for the presentations to the LDRD Committee was limited to 15 minutes for both the talk and questions. And I hadn’t understood the difficulty of convincing professional physicists in 15 minutes that something that wasn’t in their field and that they had never heard of could possibly be new physics. After all, I had been pondering ball lightning physics for decades, and it had taken me some time to come to see that it was likely that there was new physics involved. On the other hand, as far as the committee members were concerned, it was much safer for them to stay closer to the beaten path. In addition, it was clear that the lab’s understanding the meaning of “discovery potential” and my understanding of “discovery potential” were quite different. It really is a catch-22 situation. No one will venture to look unless there is better evidence, and no one will find better evidence unless they venture to look. I haven’t submitted an LDRD proposal since.

With respect to the second part of your question, there are really two major areas to this new physics, the more fundamental electromagnetic vorton of large electromagnetic charge—about 25.83 e—and the composite spin ½ fermions, whose members comprise both an electric sector—called the Standard Model—and a predicted magnetic sector, made possible by a rotation in dyality angle of the bound vorton pair by plus or minus π/2. And the most accessible member of this magnetic sector would be the lightest magneticon having a unit of magnetic charge of 1e. Finding the magneticon, then, would open a door to an extension of the Standard Model into a magnetic sector. In my view, what physicists are calling the dark sector is really due to the misunderstood manifestations of the magnetic sector particles—in particular magnetic hydrogen, my favorite candidate for the dark matter particle. And at a high energy collider like the LHC—even at its present energy—there could be a significant program to find the presumed many other particles populating the magnetic sector. Such search possibilities would be even more bountiful at a high energy e+e– collider like LEP. e+e– collisions have far less background than hadronic collisions. It’s too bad they didn’t look down to magnetic charge 1e when they had a chance.

In addition, it’s conceivable that if enough magneticons could be suitably collected, one could make a colliding ring with counter circulating north and south magneticons. Also, don’t forget, that unlike the muon, the lowest lying magneticon should be stable. And since acceleration of magneticons with solenoids is so powerful, these rings could easily operate up to extremely high energies. And the much higher mass of the magneticon would greatly reduce the problem of synchrotron radiation. Mike has done some thinking about possible designs for magneticon storage rings.

As far as vorton physics is concerned, there are two vastly different scales here—first, the extremely small, perhaps on the order of the Planck length. As I have described, in this role, vortons combine in pairs to form the elementary spin ½ fermions, both those of an electric sector and those of a predicted magnetic sector. And second, there are the macroscopically sized vortons, which, through a continuous dyality rotation, give rise to the CL/BL/EL physics. To me, it seems most likely that a better understanding of the cavity lights phenomena could lead to useful improvements in superconducting RF cavity design and operation—and, more generally, to a better understanding of ball lightning and of earth lights.

With regard to ball lightning, I should say that there is considerable evidence that they release large amounts of energy.[22] In my Hessdalen paper,[21] I postulated that catalyzed nucleon decay is the source of this energy. The nucleons that would decay in this putative process would be ordinary components of the nuclei of the local air molecules, probably actually inside the volume of the ball. So, if vorton research could give us enough understanding of ball lightning physics, it’s conceivable to me that one could build a ball lightning machine generating enormous amounts of energy. Per nucleon of the source fuel, the fission process releases about 0.7 MeV; the fusion process about 6 MeV; and for nucleon decay we would get almost the full nucleon mass/energy or about 900 MeV. And there’s an unlimited and essentially free fuel supply. Another benefit is that catalyzed nuclear decay should be much freer of unwanted radiation backgrounds. It’s interesting, though—I wasn’t even able to get Burt to take this idea seriously. And he wrote the book, Beyond Smoke and Mirrors, about sustainable energy sources and climate change. Also, on one occasion Mike and I presented these ideas to a couple of fairly senior, but non SLAC, scientists, and they pointed out a problem that hadn’t occurred to us: this whole idea would get enormous push-back from the fossil fuel interests. We hadn’t thought about this aspect of a possible ball lightning power generator. A viable vorton BL energy source, using free fuel, would do serious damage to their business plans—unless they could co-opt it.

Zierler:

David, in your capacity as emeritus at SLAC, do you have opportunity to suggest thesis topics to graduate students? It seems like this would a pretty exciting one for a young particle physicist to take on.

Fryberger:

I agree that such topics would be exciting. But I don’t have a faculty position. I could possibly try to align myself with some faculty member at Stanford. I expect that I couldn’t be a sponsor, but perhaps I could be an advisor of some sort. But I haven’t tried that. Right now, I think its more fruitful for me to continue working on the implications of vorton physics in other areas. There certainly is plenty to work on there. Speaking of students, I should add here that a couple of years ago Mike was in contact with a PhD student who planned to make a magneticon search his doctoral thesis topic at the BelleII detector at the KEK lab in Japan. His advisor, Toru Iijima, is at the Kobayashi-Maskawa Institute in Japan. Mike and I both thought that a magneticon search would be a good thesis topic, even if no magneticons were found. Null results are important. And, if nothing else, one can set limits. Also, it might get the ball rolling, and he could go to other labs with greater energy capability. But as I understand it, he has since left the field. It would be interesting to find out why.

Zierler:

What are some of the responses that you get explaining why they might not be interested?

Fryberger:

Well, in this regard, there are two categories of colleagues—experimentalists and theoreticians. In conversations with our experimental colleagues, we find that our ideas aren’t rejected outright. Our colleagues just don’t rise to our challenge to provide viable explanations for the phenomena we see—in particular, the cavity light orbits of Runs 7 and 10. I don’t think the problem is that they aren’t smart and knowledgeable physicists, it’s that there aren’t any easy explanations based upon established physics. It’s complicated. And for them, this physics is too far off the beaten path for them to spend any time seriously thinking about it. As I said before, I think that in the back of their minds is the idea that if they had the time and the interest, they would be able to figure it out. But there is no real need for them to do that.

I shouldn’t try to generalize about the theoretical community. There are many, many areas of interest. But for those who are trying to understand physics at the most fundamental level, I doubt that they think that the phenomena of CL/BL/EL are in an area that has anything to do with physics at the most fundamental level—plasma physics, maybe. And, as I discussed before, the thinking about physics at the most fundamental level is dominated by string theory. “It’s the only game in town.” And it’s also true that the vorton model certainly appears to be incompatible with string theory—another reason to disregard it.

Now, the two theoretical physicists I’ve spent the most time with are Stan Brodsky and bj Bjorken—they aren’t string theorists. As I told you earlier, they were most helpful to me, decades ago, when I was developing my ideas that magnetically bound vortons could be the underlying structure for the point-like spin ½ fermions—and also about questions concerning the infinities in the self-mass integrals of QED. Renormalization was spectacularly successful, but that very success has enabled all of the questions about these infinite integrals to be, as Feynman once said, swept under the rug. Also, at that time, there were a lot of people trying to make models for the substructure of what were then called elementary particles. It may be true that I’m stuck in the past, but I haven’t seen any reason to abandon my efforts. In fact, as I have indicated to you, I think that there’s a lot evidence to support my models. But back to Stan and bj. Even now, whenever I ask them for a few moments, they are happy to give me some of their time. And I think that it’s fair to say that they are even interested in my ideas and questions. They’re always polite, but in the end, while they are open minded, I think they don’t really believe in my basic premise about vortons. And being near the end of their active careers, they would rather work on their own ideas than those of someone else. I can’t fault them for that, I’m the same way.

But I think that there is another reason that I can’t connect with members of the theoretical physics community. It’s that our mode of thinking differs significantly. In their daily work, they find solutions to specific equations—Schrödinger’s equation, the Dirac equation, the Yang-Mills equation, etc. And of course, the mathematics of these solutions—in particular for string theory—is generally well beyond the capabilities of even talented physicists not in the field. In fact, it could be said that many think that if there will be another major breakthrough in physics, it will derive from the finding of a new equation to work on. For example, I cite Michio Kaku’s most recent book, The God Equation.

In contrast, I sometimes think of myself as something like a stone mason, looking at details to try to see how various things might fit nicely together. I’m starting from the bottom—at the very foundation—in thinking about how the elementary particles are put together, as opposed to solutions to some equation. My mode of thinking and analysis involves the visualizing of objects, and using intuition and some relevant mathematics. And, as I have told you, the basis of my mathematics is the symmetrized set of Maxwell’s equations, which have been around for over a century. To be sure, my intuition is guided by equations and relevant experimental data, but intuition is still a very important component. My aim is to put together a consistent model that has good explanatory power to describe the experimental data. And with respect to the macroscopic vortons, I claim some success. On the other hand, Woit, for example, elaborates at some length that string theory has yet to meet any experimental data test.

Zierler:

But isn’t that the beauty of data and experiments? It doesn’t matter what the theories are? It doesn’t matter what the theorists believe?

Fryberger:

You’re right, in principle it shouldn’t matter what the theorists believe. But, as I’ve said, as a practical matter, with these experimental mega-projects, it does matter what the theoretical community thinks. Their ideas influence the thinking of the program committees, and the program committees exercise a lot of control over the experimental programs. One certainly doesn’t want research money to be spent frivolously. With regard to 1e magneticons, we need some good data. As I’ve said, the problem here is that we don’t have any data because no one has yet looked—because of tracking, 1e magneticon data won’t just fall into your lap. And that’s why I think there should actually be a dedicated search for the magneticon at LHC, or at a future e+e– collider, if one is ever built. As I remarked earlier, as far as magneticon searches are concerned, this is very much like a catch-22 situation. On the other hand, with regard to searches for supersymmetric partners, that’s mainstream physics—even without supporting data.

Zierler:

And just to be clear, you’re envisioning finding the vorton at energies that are currently available. It’s not like supersymmetry or new particles that would only be found theoretically at higher energies than are currently available.

Fryberger:

I should say again that in my view the vorton and important aspects of vorton physics have already been seen—in particular in Runs 7 and 10 in our cavity lights experiments at JLAB.[24] And if circumstances were right, that experimental program could be restarted—at JLAB or at SNS [Spallation Neutron Source in Oak Ridge TN]. John Mammosser, our collaborator on the JLAB work, is now at SNS. We haven’t seen him in over a year now because of COVID, but he would be key if the program were to be at SNS. I know that he would like to continue the cavity lights experiment. He has a cavity, a stand, vacuum pumps, and some other instrumentation. But right now, it’s like it was at JLAB—evenings and weekends only, and don’t spend any money. Cavity lights research isn’t part of the mission statement.

With regard to a magneticon search, magneticons could be seen up to whatever the limit of the LHC is. You know, we’re talking about many hundreds of GeV. So, building a new machine is not required. The search just has to be in a different place—in this case, a search for a particle of magnetic charge 1e. I wonder if you’ve read Sabine Hossenfelder’s book called Lost in Math. Have you seen it?

Zierler:

Yes, I have.

Fryberger:

Well, she interviewed a lot of people as material for her book, and she argues that it’s the math—or rather, a focus on some version of mathematical beauty—that is leading the physics community astray. And a number of the people she talked to said that the experiments were just looking in the wrong place. That’s my contention too. But I have a specific suggestion to address this problem. They should look in the magnetic sector. And there’s plenty of beam energy at the LHC to do it—no new machines are required.

Zierler:

David, just to bring out conversation up to the present, what are some other issues in physics that are interesting to you currently?

Fryberger:

Well, right now, dark matter. In September of 2009 there was a Dark Forces Workshop at SLAC, and a physicist named Neil Weiner was there—he’s on the faculty of New York University. And he gave a talk on inelastic dark matter scattering, or iDM. His talk opened my eyes to a significant possibility for an application of vorton physics and dyality symmetry. I don’t know if you’re familiar with his ideas.

Zierler:

I’m not.

Fryberger:

Let me give you some background. The problem was that in 2000 an Italian group called DAMA/LIBRA was getting an annual modulation signal in their data—at the 4 sigma [4σ] level—presumably from dark matter scattering events in their detector, which is sodium iodide doped with thallium—NaI(Tl). The annual modulation was attributed to the variations in the galactic velocity of the earth associated with the earth’s orbit around the sun. In a Gaussian distribution, 4σ means that the signal size is four standard deviations away from a null result, a very unlikely occurrence [~10-4, if there are no systematic effects]. And at the same time, a competing experiment CDMS [Cryogenic Dark Matter Search] was seeing no events—a result in serious conflict with the 4σ DAMA/LIBRA result. The point of the Weiner talk was that if for some reason the dark matter scattering off of the detector nuclei could only take place by iDM scattering, then the scattering kinematics would be significantly altered. The scattering event would take place by the dark matter particle χ scattering into a nearby state χ', heavier by an energy increment δE, where δE would be much smaller than mχ, the dark matter particle mass—forgive me for using mass and energy interchangeably. The amount of energy δE would have to be available in the center of mass for the scattering event, or there would be no scattering event. Detectors containing heavier scattering nuclei would be favored over those with light nuclei because the available center of mass scattering energy would be larger for heavier nuclei. And if mχ and δE were in a suitable region in parameter space, then scattering in the detector with the lighter nucleus could be altogether forbidden—purely by the kinematics of the problem. Since the iodine nucleus is much heavier than that of germanium—the heaviest nucleus in the CDMS detector—then these results would no longer be in conflict. Problem solved.

By 2019 the DAMA/LIBRA collaboration had gotten considerably more data, and they now report that their annual modulation signal is at the 12.9σ level. This clearly isn’t a statistical fluctuation. But nobody believes them because, by now, there have been a series of experiments [by the XENON1T collaboration] using large quantities of xenon as the detector material, and they see no events above the estimated backgrounds. And their reported sensitivities are on the order of five orders of magnitude below that implied by the DAMA/LIBRA results. And the original iDM argument that iodine nucleus is heavy won’t work because the xenon nucleus is even heavier than that of iodine. To answer this objection, Weiner and his collaborators have proposed that perhaps the scattering in the NaI(Tl) in DAMA/LIBRA is off of the thallium doping material—the thallium nucleus is heavier than the xenon nucleus. But they cautioned that this idea brought an additional problem: the doping level of thallium in the NaI(Tl) is at the 10-3 level significantly increasing the bridge that the iDM argument must explain. Also, for the iDM explanation to work, requisite the χ-Tl cross section appears to be unreasonably large.

And, so, it’s a conundrum that isn’t resolved. The DAMA/LIBRA result is certainly not a statistical fluctuation. It’s a background nobody’s thought of, or a systematic nobody’s thought of, or it’s an actual dark matter signal. But because of the xenon data with a sensitivity many orders of magnitude below that implied by the DAMA/LIBRA result, the credence given to the DAMA/LIBRA result is diminishing by the year. In review papers, the DAMA/LIBRA result is often not even mentioned—even though no one has been able to find a systematic or background effect as an alternative explanation. So far, all alternative explanations are not viable.

Anyway, I’m still taking the iDM model of Weiner and collaborators seriously, and am investigating whether or not atoms of magnetic hydrogen [mH] might be a suitable candidate for dark matter particles. mH would be a basic feature of the proposed magnetic sector, copiously produced in the early stages of the Big Bang—just like ordinary hydrogen. Its physics should follow that of hydrogen, so in that sense, it’s not a complicated object, but rather a composite object whose atomic physics is quite well understood. In the mass range under contemplation, the ionization energy is in the hundreds of keV—it is essentially an inert object in the present state of our universe. I should remark here that mH in its ground state is without net electric or magnetic charge, as well as without an electric or magnetic moment. Its only electromagnetic feature is its magnetic polarizability, which is easy to estimate using ordinary hydrogen as its electric counterpart. It’s a dark particle because it’s heavy, not because it’s not electromagnetic. The energy splitting δE for the iDM model would be the energy splitting between the 1S atomic ground state and the 2S excitation, easily estimated after one has assumed the masses of the constituent magnetic electron and the magnetic proton. It’s a perfectly natural candidate for the iDM object. And the possible range of these assumed masses would enable this model to cover a very large region of parameter space.

However, in view of the xenon results, it’s clear that in applying the iDM model to mH as a dark matter candidate, the scattering in the NaI(Tl) detector of DAMA/LIBRA can only be off of the thallium doping material. Of course, this requirement puts an additional burden on the iDM model as a viable concept. And since there are so many orders of magnitude between the estimated event rates of these two experiments, the suppression of scattering of mH off of the xenon nucleus must be kinematically excluded—fully. If this is the case, in applying the iDM concept, it’s easy to convince oneself that this requisite full kinematic separation of xenon iDM scattering relative to that for thallium can only be found near the edge of the galactic halo velocity distribution. Which, of course, means that assumptions about the halo velocity distributions are also in the mix. Now with suitable halo assumptions and suitable mH mass assumptions, it’s clearly possible to fully exclude and iDM scattering involving the xenon nucleus. The question is, with these so-called suitable assumptions, is there a large enough iDM cross section off of the thallium nucleus to be consistent with the reported DAMA/LIBRA annual modulation signal? I believe that the answer to this question is yes.

And, so, that’s my present interest. I wrote up all of this iDM analysis, proposing magnetic hydrogen as a dark matter candidate. I sent this paper to the Cornell archives, but they said it was too long, and wouldn’t post it. [laugh]

Zierler:

[laugh]

Fryberger:

It’s true that it probably is too long. But it covered a new idea and I thought shortening it would be difficult. So, they said, “Well, send it to a journal.” But I don’t know if a journal would publish it in its present form, it is pretty long. Furthermore, I now think that it is in need of some revisions, which I haven’t had a chance to complete yet. Also, I would like to consider whether or not the recent XENON1T electron events could be due to the scattering of the 3S1 nuclear spin state of the mH atom off of the atomic electrons of the xenon atoms. The amplitude for the 3S1 state is initially null, but it will build up as the mH propagates through the earth. This process of 3S1 amplitude buildup could lead to an important sidereal diurnal modulation in these events. So right now, I’m not ready to send it to a journal. But magnetic hydrogen is still my working hypothesis for a dark matter candidate.

Zierler:

David, I’ll just note editorially that there’s a real similarity between these dual interests with the vorton and dark matter in that the search for physics beyond the Standard Model is the highest item of priority in physics, and dark matter is probably the one mystery that more people in physics are involved with than anything else. And, yet, on both of them, you hold views that are perhaps out of the box or even iconoclastic. Would you say that’s a fair way of describing your approach—

Fryberger:

Yes, but I need to include a short narrative. When I started this journey, some 40 years ago, I was very much in the box, so to speak. The possible symmetrization of Maxwell’s equations had been suggested well over 70 years prior. And even the notion of magnetic charge was in the box. Dirac had suggested the possibility of a magnetic monopole in 1931, and active searches had been and were being mounted—though none have been successful. So, with the motivations that I have described to you, I was able to construct what I thought could be an important step forward—a static monopole solution to the symmetrized set of Maxwell’s equations with a number of interesting features. But the reviewer at the Phys. Rev. thought that my result was so trivial, it wasn’t worth publishing. He didn’t say it was wrong, or that I had made some questionable assumptions, he just didn’t think that it was of sufficient interest to warrant publication. Which is why I published it in the Hadronic Journal. Of course, the reviewer at the Phys. Rev. was just doing his duty to see to it that only worthy papers were published. And I haven’t sent a paper to the Phys. Rev. since.

Combining vortons in pairs to make the point-like spin ½ fermions was a new model, but it was consistent with other models of that time. Using the D functions to describe the angular momentum of the pair not only put the description of the l = ½ orbital momentum on a sound mathematical basis, it also enabled me to identify the source of isospin in these spin ½ fermions—a unique new achievement. And, as I as have already described to you, there are significant other new features, as well.

I would argue that my model for ball lightning is the same thing. I used the dyality symmetry of Maxwell’s equations in a new way, but compatible with the basic principles of physics. These results are both new physics, in that they have never been done before, but I didn’t really invent anything to arrive at this place. I was like a stone mason fitting already known pieces into place as best I could. But, as you can see, thinking independently and following my own ideas for assembling these pieces, I ended up out of the box and well off of the beaten path. I’m not exactly iconoclastic, but where I have presently arrived seems to be incompatible with establishment thinking.

Zierler:

What you’re indicating of course is that science is very much a human endeavor, and that even if the data is promising, if the politics aren’t right, the science doesn’t go through.

Fryberger:

You’re absolutely right. I think that says it as well as I could, and that’s what I’m up against.

Zierler:

David, for the last part of our talk, I’d like to ask one broadly retrospective question, and then we’ll end looking to the future. So, a theme of your career has been this duality at SLAC where you have your SLAC projects and your SLAC career, but you also maintain a—I’m not sure if private or personal is the right word, but a research agenda that’s very much your own that’s not wedded to the mission of SLAC. In what ways has that duality served you well, and what have been some of the limitations both scientifically and sociologically that you’ve experienced over the course of your career as a result of this duality?

Fryberger:

Well, that’s a long question. But yes, being at SLAC but not in the physics part of SLAC was good for me. I earned my paycheck through the regular work that I did for the lab. That was my SLAC career, and it earned me emeritus status upon retirement. My own physics narrative, as I just described, would not have earned me a paycheck at SLAC, nor, probably not at an academic institution either, I would venture to say. But being at SLAC was crucial to my physics journey. I could talk at considerable length to theorists like Bjorken and Brodsky, who were quite helpful to me. In fact, I’m eternally grateful to the pair of them for working with me on these ideas. They even read some of my papers. [laugh] I mean, I recently gave Brodsky the magneticon paper, and when he finished it, he even made some typo corrections. And then he added, “Well, it could be right. It doesn’t violate any principle I know of.” But it appeared to me that he had not really given serious credence to the ideas.

So, yes, I think the duality of being at SLAC in a staff support position kept me close enough to the physics where I could know what’s going on. And I could audit courses at Stanford. For example, I audited a full year of Steve Weinberg’s course on quantum field theory. I knew many people in the field, and I could turn to them for discussions about my own physics ideas, and yet I had a salary that didn’t depend upon my physics ideas. At the same time, I was fortunate enough to have enough spare time to think about my physics ideas. As I mentioned, I’ve written maybe a dozen papers. Of course, Brodsky’s written over 700 papers, but he’s exceptional. I understand that there’s a unit called the Brodsky, which quantifies the number papers written in a year. In this regard, most physicists I know are definitely sub-Brodsky.

Zierler:

David, last question looking to the future, it might be a tough one, but what are you most optimistic about: advances on the vorton, or advances on your ideas, your unique ideas about what dark matter might actually be?

Fryberger:

I would say that advances in either are possible. But I think I have to hope that someone opts to pick up the experimental end of it. And the experimental end of it for vortons would be the cavity lights. Actually, we’ve got real cavity lights data, if we can get anyone to actually look at it. Perhaps John and Mike and I could’ve gotten something together in this past year, but that was put on hold because of COVID. When COVID is finally past, we’ll try to mount some sort of experimental program then. Yes, I think a cavity light program is probably the more likely. On the other hand, it’s hard to be optimistic that our colleagues working at ATLAS or CMS will actually look for the magneticon because they’ve got committees to deal with. But, either one could take off, if for some reason someone picked it up and said, “Let’s do this,” I mean, it could take off right there.

As far as the dark matter is concerned, as time passes, more data will keep coming in, and maybe some of it will point more directly at the iDM model. Or, perhaps my next effort at iDM predictions will find support or prove to be more persuasive. I’ll have to wait and see.

Zierler:

Well, it’s exciting to think about, and maybe in our own small way, publication of this interview will help get the word out to the right people.

Fryberger:

Well, I hope so. By the way, before we close, I should remark that this journey has led me to an understanding of what the electron is made of, how it is constructed—a motivating question that I had decades ago as I was starting out. So, I certainly invite anybody that reads this and is interested to ask me questions. I’ll be glad to entertain them.

Zierler:

There you go. I love it. David, it’s been a great pleasure spending this time with you. I’m so glad we were able to spend this time, and to hear all of your perspective and insight over the course of your career. So, thank you so much.

Fryberger:

Well, you didn’t hear all of it, but that’s probably enough for now.

 

[1] W. H. Timbie, Elements of Electricity, 3rd Ed. (John Wiley & Sons, Inc., New York, 1937).

[2] W. L. Everitt (Ed), Fundamentals of Radio (Prentice Hall, Inc., New York, 1942).

[3] A. D’Abro, The Evolution of Scientific Thought from Newton to Einstein (Dover, 1950).

[4] A. D’Abro, The Rise of the New Physics (Dover, 1951).

[5] Michael J. Neumann and Herrick Sherrard, IEEE Trans. Nucl. Sci., 9 (3), 259 (1962).

[6] Bruce Arne Sherwood, Phys. Rev., 156 (5), 1475 (1967).

[7] David Fryberger, Phys. Rev., 166 (5), 1379 (1968).

[8] D. Fryberger and R. Johnson, Talk at Part. Acc. Conf., Chicago IL (1971), SLAC-PUB-876.

[9] D. Fryberger, Nuovo Cimento Lett., 28, 313 (1980).

[10] D. Fryberger, Hadronic Jour., 4, 1844 (1981).

[11] Y. Han and L. C. Biedenharn, Nuovo Cimento, 2A, 544 (1971).

[12] David Fryberger, Found. Phys., 19 (2), 125 (1989).

[13] David Fryberger, Phys Rev., D20, 952 (1979).

[14] David Fryberger, Phys Rev., D24, 979 (1981).

[15] David Fryberger, Found. Phys., 13, 1059 (1983).

[16] David Fryberger, Applied Superconductivity Conference, Sept. 1984, San Diego, CA.

[17] D. Fryberger et al., Rev. Sci. Instrum., 57 (10), 2577 (1986).

[18] D. E. T. F. Ashby and C. Whitehead, Nature, 230, 180 (1971).

[19] David Fryberger, SLAC-PUB -5980, 3rd Int. Symp. on BL, Pasadena, CA (Jul 1992).

[20] Paul Deveraux, Earth Lights Revelation (Blanford Press, London, 1989).

[21] David Fryberger, SLAC-PUB- 6473, First Int. Hessdalen Workshop (Mar. 1994).

[22] Stanley Singer, The Nature of Ball Lightning (Plenum Press, New York—London, 1971).

[23] David Fryberger, Nucl. Inst. and Methods in Phys. Res., A 459, 29 (2001).

[24] P. L. Anthony et al., Nucl. Inst. and Methods in Phys. Res., A 612, 1 (2009).

[25] David Fryberger, arXiv: 1807.09606 (July 2018).

[26] Michael K. Sullivan and David Fryberger, arXiv: 1511.02200 (Nov. 2015).

[27] Michael K. Sullivan and David Fryberger, arXiv: 1707.05295 (July 2017).

[28] Lee Smolin, The Trouble with Physics (Houghton Mifflin, Boston, New York, 2006).

[29] Roger Penrose, The Road to Reality (Knopf, New York, 2005), p. 1018.