From Lone Genius to Wisdom of the Crowd: Hyperauthorship in High-Energy and Astrophysics
How many scientists can wield one pen?
After the success of the Manhattan Project, which employed at least 130,000 people across more than 30 institutions in the United States, United Kingdom, and Canada, collaboration on a massive scale emerged as a necessary and defining feature of modern science. Following the Second World War, major scientific undertakings—such as landing on the moon, constructing particle colliders, and establishing several national laboratories—required enormous levels of funding, labor, and interdisciplinary organization. Physicists were also becoming increasingly specialized during these years. David Kaiser has pointed out that Physics Abstracts initially had eight categories for submissions in 1930 (general physics; meteorology, geophysics and astrophysics; light; radioactivity; heat; sound; electricity and magnetism; and chemical physics and electrochemistry); in 1955, nuclear physics was added with six subfields, which had grown to thirty-five subcategories by 1965. By that time, solid state physics alone had thirty-eight subfields. Because of physicists’ increasing specialization, as well as the United States government’s efforts to fund large scientific projects, the sheer scale of scientific collaboration—and, consequently, levels of co-authorship on publications—increased dramatically. Alvin Weinberg, a nuclear physicist and administrator at Oak Ridge National Laboratory during the Manhattan Project, voiced his hesitation about the United States’ heavy post-war investment in the “spectaculars” of what he termed “Big Science”:
When history looks at the 20th century, she will see science and technology as its theme; she will find in the monuments of Big Science—the huge rockets, the high-energy accelerators, the high-flux research reactors—symbols of our time just as surely as she finds in Notre Dame a symbol of the Middle Ages. ... We build our monuments in the name of scientific truth, they built theirs in the name of religious truth; we use our Big Science to add to our country's prestige, they used their churches for their cities' prestige.
Big Science is an inevitable stage in the development of science and, for better or for worse, it is here to stay. What we must do is learn to live with Big Science. We must make Big Science flourish without, at the same time, allowing it to trample Little Science—that is, we must nurture small-scale excellence as carefully as we lavish gifts on large-scale spectaculars.
The term hyperauthorship, credited to information scientist Blaise Cronin in 2001, refers to massive co-authorship levels, often on the order of hundreds or even thousands of authors. Although a relatively recent phenomenon in the physical sciences, its seeds were planted decades ago, as the United States invested more heavily in large-scale science collaborations and scientific institutions following the Second World War.
“To understand the way nature works requires collaborative action as much as individual thought. Physics as we know it today is a product of the mass mobilization of materials and social resources on an unprecedented scale.”
Collaboration and Big Science
In 2012, the ATLAS Collaboration at CERN (the European Organization for Nuclear Research) published experimental results that verified the theoretical Higgs boson, a central and—up until then—missing piece of the Standard Model. The paper, titled “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” is 38 pages long: 24 pages of analysis, with 14 pages dedicated solely to the author list. A second paper, published in 2015 and titled “Combined Measurement of the Higgs Boson Mass in pp Collisions at √s = 7 and 8 TeV with the ATLAS and CMS Experiments,” was credited jointly to the ATLAS and CMS Collaborations for their shared measurement of the mass of the newly discovered Higgs boson. That paper listed 5,154 individual contributors, a record-breaking number in the history of scientific publishing.
A similar trend holds in astrophysics. In 2016, scientists at LIGO (the Laser Interferometer Gravitational-Wave Observatory) published experimental results that verified gravitational waves, a phenomenon Albert Einstein had predicted a century prior. The 2017 paper, titled “GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral,” includes an author list of over one thousand individual contributors representing 162 institutions, who collectively make up the LIGO and Virgo Collaborations.
It is undoubtedly exciting to see evidence of successful collaborative efforts on such a large scale. Questions in high-energy physics and astrophysics demand it, as they rely on massive particle accelerators and high-precision telescopes to probe fundamental questions about our universe. Additionally, it is deeply inspiring that Big Science projects bring together scientists from many different countries, cultures, and backgrounds motivated to discover something new about our world together; these collaborations are a testament to our very human concern for understanding and mapping our shared cosmological history. However, publications with such lengthy author lists bring about new issues concerning authorship, accountability, and credit in science. How can we attribute credit to one author amongst several thousand? To what extent is the entire collaboration held accountable for potential mistakes of one or few? How does hyperauthorship complicate an academic system based on an individual researcher’s success?
Issues of Individual Accountability
With dozens, hundreds, and even thousands of credited authors on a single scientific paper, what kinds of issues can arise? One certainly arises from the fact that authors are listed alphabetically by last name. In the case of the ATLAS and CMS papers, Dr. Georges Aad—a relatively junior physicist at the time—was the lead author (until Dr. M. Aaboud displaced him at the top of the list in 2016). The Higgs discovery paper is therefore credited to “G. Aad, et al.,” with the full list of authors appearing after the data and analysis. Alphabetical listing is standard practice in high-energy physics, but not necessarily in other sciences; in medicine, for example, the principal investigator (PI) on an experiment or study is listed first. The alphabetization standard means that the order of the authors does not necessarily align with how much labor or credit can be attributed to each one; the lead author is simply the lucky one whose last name comes first alphabetically.
We associate authorship with accountability for the contents of a publication. (In fact, Cronin refers to this assumption as part of the “standard model” of scholarly publishing.) In the context of large-scale collaborations, who, then, is to be held accountable if there is a mistake? This issue is connected to the fact that hyperauthorship makes it nearly impossible to assign credit to the individuals involved in the work, which is especially difficult for early-career researchers, who must make a case for their individual influence while competing for a very limited number of available academic positions. Hyperauthorship can therefore make it seem as though they are less able to make an impact on their own. Cronin writes:
The potential subversion of the historical conception of authorship…by contemporary scientific practice has created a number of ethical and procedural difficulties for those, such as university promotion and tenure committees, responsible for evaluating the nature of individual contributions and also for those concerned with quality assurance in complex, multi-institutional research projects.
Therefore, in an academic reward system structured in part by both the quantity and quality of publications, hyperauthorship presents entirely new challenges.
Metrics for Recognition?
Members of the high-energy physics community use several metrics that aim to clarify these issues. Particle physicists consult author profiles on INSPIRE-HEP to get a sense of individual contributions. Several statistics are available there, including the number of papers indexed, number of citations, average citations per paper, and the h-index: a researcher has index h if h of their papers have each been cited at least h times. The h-index is currently the standard metric for “characterizing the scientific output of a researcher,” and a useful way to quantitatively gauge an individual researcher’s influence. The h-index, though, is not well suited to science on the scale of the Large Hadron Collider (LHC), and it does not distinguish between the practicalities of being a theoretical versus an experimental physicist. An experimental physicist involved in a large collaboration is likely to have a larger h-index than a theorist, in part because of their association with so many other collaborators, whereas a theorist typically publishes in small groups or alone. It is the possibility of direct association with an influential experimental discovery—such as the Higgs boson or gravitational waves—that can be, to some extent, advantageous to an experimentalist’s measured scientific output. It seems that, perhaps, recognition may not be quantifiable.
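Hirsch’s definition lends itself to a direct computation. As a minimal illustration (this sketch is not drawn from any of the works cited here), the h-index can be computed from a list of a researcher’s per-paper citation counts:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h of the papers
    have at least h citations each (Hirsch, 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have
# at least four citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

This makes the paragraph’s point concrete: the metric counts papers and citations only, so an experimentalist whose name appears on many highly cited collaboration papers accrues a large h regardless of their individual share of the work.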
Some journals, such as Nature Physics, have taken a qualitative approach. Since 2009, Nature has required that article submissions include an author-contribution statement providing a basic outline of who did what on the project or experiment. Although the intent is to clarify each contributor’s role in the published work, this method is still limited: it would be impractical to produce such a statement for a collaboration on the order of hundreds or thousands of experimentalists, especially a statement agreed upon by all contributing authors.
The ATLAS and CMS collaborations are made up of many scientists, but the 2013 Nobel Prize in Physics was awarded to only two people: François Englert and Peter Higgs. LIGO is a similar case—although the discovery of gravitational waves is credited to the thousands of authors listed on the paper, the 2017 Nobel Prize in Physics was awarded to three people: Kip Thorne, Rainer Weiss, and Barry Barish. Something clearly isn’t matching up. This numerical disparity implies that the Nobel Prize doesn’t accurately reflect the sheer scale of the labor necessary for making such discoveries. But it also suggests that, in a time of Big Science, we need to think more critically about our metrics for scientific recognition.
Authorship is inextricably linked to authority. Much of physics history is written with an underlying assumption of lone geniuses: heroic figures who could work individually and single-handedly make revolutionary contributions to their scientific communities. The Einsteins and Newtons were authoritative figures within physics and even in popular culture more broadly. The idea of an individual scientist—the “lone genius”—is changing, too. In 1967, particle physicist Alan M. Thorndike summarized this sentiment well:
“The experimenter, then, is not one person, but a composite. He might be three, more likely five or eight…. He may be spread around geographically, although more often than not, all of him will be at one or two institutions… He may be ephemeral, with a shifting and open-ended membership.…he is a social phenomenon… One thing, however, he certainly is not. He is not the traditional image of a cloistered scientist working in isolation at the laboratory bench.”
When considering co-authorship, and especially the more recent trend of hyperauthorship, we as historians of physics should continue to interrogate the shifting landscape of scholarly communication. With the ever-increasing scale of resources necessary to tackle high-energy and astrophysical questions, it could become all too easy to overlook labor critical to the process of scientific discovery (labor which, more often than not, is taken on by underpaid graduate students and postdoctoral fellows). Collaboration is indeed a necessary aspect of contemporary science. This is not to say that experimental knowledge in the age of Big Science can or should be traced back to each individual author or contributor in detail; but given that the success of individuals, research groups, and institutional departments depends on giving credit where credit is due, the effort to do so is sure to continue.
Castelvecchi, D. (2015). Physics paper sets record with more than 5,000 authors. Nature News.
Cronin, B. (2001). Hyperauthorship: A Postmodern Perversion or Evidence of a Structural Shift in Scholarly Communication Practices? Journal of the American Society for Information Science and Technology, 52(7), 558–569.
Galison, P. (2003). The Collective Author. In M. Biagioli (Ed.), Scientific Authorship: Credit and Intellectual Property in Science (pp. 325–355). New York, New York: Routledge.
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572.
Kaiser, D. (2012). Booms, Busts, and the World of Ideas: Enrollment Pressures and the Challenge of Specialization. Osiris, 27, 276–302.
Morus, I. R. (2005). When Physics Became King. Chicago, Illinois: University of Chicago Press.
Nye, M. J. (2019). Shifting Trends in Modern Physics, Nobel Recognition, and the Histories That We Write. Physics in Perspective, 21, 3–22.
Sharma, A., & D’Hondt, J. (2019). Standing out from the crowd. CERN Courier.
Weinberg, A. M. (1961). Impact of Large-Scale Science on the United States. Science, 134(3473), 161–164.
“What did you do?” Nature Physics, 5, 369 (2009).
 Kaiser, “Booms, Busts, and the World of Ideas,” 294-295.
 Ibid., 295.
 Weinberg, “Impact of Large-Scale Science on the United States,” 161.
 Ibid., 162.
 Cronin, “Hyperauthorship,” 558.
 Morus, When Physics Became King, 5.
 Cronin, “Hyperauthorship,” 559.
 Ibid., 561.
 See J.E. Hirsch, “An index to quantify an individual’s scientific research output.”
 See “What did you do?” from Nature Phys 5, 369 (2009).
 Mary Jo Nye, “Shifting Trends in Modern Physics,” 9.
 Ibid., 5.
 Quoted in Galison, “The Collective Author,” 329.
 Ibid., 353.