
Disposable Science

One of the great challenges facing science is the ever-increasing number of publications. More items mean greater difficulty keeping up with the literature and judging novelty and relevance. One of the dangers is “disposable science”: if an article is little cited in the first few years after publication, then, regardless of its content, it may never have a substantial impact. Readers’ tendency to cite already highly cited publications exacerbates the problem. Below* I discuss this issue and present possible ways to reduce the loss of meritorious articles.


My doctoral research was on the interactions between natural enemies (parasites and pathogens) of insects. I used mathematical models and conducted controlled laboratory experiments and a field experiment. Part of the dissertation involved a complete literature review, which necessitated regular visits to the local library to retrieve and read relevant publications. I was interested in how interactions between the natural enemies changed through time and the effect this had on the host insect population. I studied numerous facets of the three-way interaction with the objective of understanding what made each population tick.


These were the early days of personal computers and I didn’t have access to an electronic article database—I kept a card file. At the end of my dissertation, I counted the number of cards. They came to about 400. I had scoured the literature and, as far as I was concerned, I had found each and every published article or book chapter relevant to my area of study.


In the 30 years since my thesis, the total body of published articles has increased several-fold. And that is not all. The rate of publication itself has accelerated: the number of articles published per year grew at about 3.5 percent per annum over the first three centuries, and from 2001 to 2013 this increased to about 6 percent [1,2]. The total number of article records in the highly curated Web of Science is about 70 million [2].
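To get a feel for what these growth rates imply, here is a back-of-envelope sketch (my own illustration, not a calculation from the cited studies) of how quickly annual output doubles at a constant growth rate:

```python
import math

def doubling_time(rate):
    """Years for annual output to double at a constant growth rate."""
    return math.log(2) / math.log(1 + rate)

print(f"At 3.5% per year, output doubles every {doubling_time(0.035):.1f} years")
print(f"At 6.0% per year, output doubles every {doubling_time(0.06):.1f} years")
```

At 3.5 percent, the literature doubles roughly every 20 years; at 6 percent, roughly every 12. In other words, a researcher mid-career today faces an annual flow of papers about twice the size of the one they faced as a doctoral student.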


Science is cumulative and it accumulates [3].


A primary research article nowadays typically cites more than 50 papers, and it’s not uncommon to see over 100. To arrive at such numbers, the authors would likely have read more than 200 useful articles. Many of these would not have been cited, either because the authors judged it unnecessary or because of limits imposed by journals.


Due to the sheer volume of published science, the majority of the articles we now read simply cannot be cited. Worse, given time constraints, many potentially relevant articles are not even read.


Science has become disposable.


Some Papers Emerge, Others Don’t

More papers mean more knowledge and more understanding. But acquiring this knowledge assumes that we can actually process it and zero in on relevant findings. This is a considerable challenge even for the most avid reader, and it would be utterly impossible were it not for three phenomena:

1. New findings often languish and die. If a new paper does not garner attention in its first months or years of life, then it is unlikely to ever be recognized as a scientific advance. This does not mean that the paper remains uncited, but rather that there is no discernible catalytic effect resembling the black curve in the Figure.

2. New findings that do emerge often update or replace previous results, rendering the latter less necessary to include on a reading list or to cite in a paper.

3. Scientists can adjust their reading to be more discerning. Some may still digest increasing numbers of articles, but these represent an ever-decreasing proportion of what is actually being published.

A scientific paper is metaphorically akin to a living entity. It is born at publication; matures as it is read, discussed and cited; may or may not continue to grow, produce “offspring” studies and possibly become a classic [5]; and, except in rare circumstances (e.g. sleeping beauties, classics), eventually goes into decline.


Many papers however do not follow this path and are either rarely cited or never cited at all [6]. Estimates of purely uncited papers vary considerably (from a few percent to over 50 percent), depending on the data source, time period, journal of publication and discipline [7].


Why do some papers languish in obscurity? The simple answer is that not all science is created equal. Whether a paper emerges depends on a variety of factors.

Finding science and limits to citing. Some science languishes because it is difficult to discover, locate or access. Database incompleteness is one reason; paywalls are another. Most journals impose limits on the number of citations, meaning that some references may not “make the cut” because, for example, they are not as topical as other (more citable) references.

The science. Some papers do not emerge because the science is simply not very good, not timely or too specialized.

Tested science. You have the choice between two papers to cite. The first was published 3 years ago and has garnered 50 citations. The second was published last month and has not had time to be cited. Which would you cite? If you are working directly in the area, then you will make a judgment based on scientific quality and relevance to the claim to be supported. But if you are a bit outside of the area, you may be concerned that you cannot properly judge its contents. You are most likely to cite the time-tested paper.

Names. Names influence why some papers (and authors) are more cited than others. Names include (i) highly cited, authoritative authors; (ii) high-impact, prestigious journals; and (iii) citing articles that cite other highly cited, prestigious authors or journals. It is perfectly reasonable to expect that—on average—highly cited, authoritative, reputable authors or journals do publish more meritorious research. But averages belie underlying distributions. Some papers by unknown authors, published in little-known journals, and not well cited may be higher quality, more interesting and more important science than others authored by big names, published in major journals and highly cited.

Cite what is cited. Subtly different from tested science and name effects is citation based purely on existing citations. Now you are choosing between two papers published 3 years ago, where one has received 50 citations and the other only five. Unless you read the papers carefully to decide which to cite, you may just go with the majority vote. This leads to “preferential attachment” [8] and a skewing of citation distributions [9]. What is sinister is that many of the 50 citing articles may have cited the paper for the same reason! Indeed, this is also a reason why some papers never emerge: regardless of the merits of a paper, not receiving many citations in the first years of life can be taken to indicate that the study does not merit attention. The lack of citation is perpetuated by the lack of citation!
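The self-reinforcing dynamic described above can be illustrated with a toy simulation (my own sketch, with arbitrary parameters, not a model from the cited papers). Each new paper cites earlier papers with probability proportional to the citations they have already received:

```python
import random

def simulate_citations(n_papers=1000, cites_per_paper=5, seed=42):
    """Toy preferential-attachment model: each new paper cites earlier
    papers with probability proportional to citations already received
    (plus 1, so that uncited papers still have a chance to emerge)."""
    rng = random.Random(seed)
    counts = [0]  # citation count per paper, in order of publication
    for _ in range(1, n_papers):
        weights = [c + 1 for c in counts]
        chosen = rng.choices(range(len(counts)), weights=weights,
                             k=min(cites_per_paper, len(counts)))
        for target in chosen:
            counts[target] += 1
        counts.append(0)  # the new paper starts life uncited
    return counts

counts = sorted(simulate_citations(), reverse=True)
share = sum(counts[:100]) / sum(counts)
print(f"Top 10% of papers hold {share:.0%} of all citations")
```

Even in this crude model, a small head of papers accumulates a disproportionate share of citations while a long tail remains at or near zero, purely because early attention begets later attention.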


In sum, more than just content and relevance enter into citation decisions. Some papers get indefinitely pinned at few or no citations. Many of these would not emerge even if citation biases were to vanish, but some arguably would. Being passed over during the first crucial years can relegate a good paper to perpetual obscurity.


Possible Solutions

Selecting citations is often subjective, influenced by opinions and open to bias. Because there is so much citable work and no regulatory authority, we are largely free to cite as we wish. The only real checks on this are having a method, our own integrity, and corrections and suggestions from reviewers and editors.

Author integrity. Author integrity is reflected in the science and how it is written. The scientific standard extends to all aspects of the article, including accurate and appropriate citations. High citation standards require understanding the literature and consistently applying norms in citation. All too often, papers cited are not read carefully or are cited based on “the front page” rather than the content. Conscientious citing is important for primary research articles, and absolutely crucial for review and synthesis papers. The latter two are places where both specialists and non-specialists are particularly likely to consult the cited literature.

Journals have a role to play. Journals rarely have much to say about citation policy. They assume that author integrity guides all aspects of a study, including citations, and that the dedicated checks will come from reviewers and editors. Although it is not the journal’s place to suggest how to cite any more than how to do science, simply indicating in their “Advice to authors” and on the “Reviewer assessment form” that they view accurate and appropriate citations as part of high scientific standards could go some way toward increasing citation integrity. More engaged journals could provide references or links to what they view as good citation practice.

Reviewer checks. While reviewers are supposed to check the veracity of a manuscript’s analyses, most would consider careful citation checks to be too time-consuming and low priority. This is both because of the magnitude of the task and the fact that often there is no single “correct” citation. Some reviewers never comment on citations, whereas others only do so in key places. Sensitizing reviewers to the problems of citation bias would not only contribute to higher standards, but also influence their own citation behaviors.

Review papers. Reviews are an invaluable resource for keeping pace with the literature. Depending on the theme, tens, hundreds or even thousands of relevant papers may be published each year. A review both brings this literature into focus and puts it into perspective. The authors therefore have some responsibility to present an accurate account of the subject together with the key literature cited. The key here is “key”: the amount of relevant literature can be huge, and to keep the bibliography manageable, some level of selectivity is necessary. To accomplish this, most journals impose citation limits, obliging authors to think carefully about the papers cited. Typically, this could mean that some of the most innovative articles do not appear, either because the authors miss them, or because they have been generally neglected (under-cited) by the scientific community and are therefore given lower priority or possibly even viewed as somehow flawed. One way forward is for authors to focus their review on recent advances (and indeed many journals encourage or require this), leaving more room to explore and cite some of the gems that would otherwise be ignored.

Citation appendices. It is not uncommon for authors to run up against journal-imposed citation limits. Some journals are very strict on limits and make no exceptions; others will consider arguments for exceeding the limit, and even then, the extra margin is usually thin. One way to address this perennial problem is for journals to establish citation appendices. With a citation appendix, a journal can enforce a lower number of citations in the main manuscript, making authors think harder about what appears in the front matter, while also providing the flexibility to include more complete citation lists in electronic appendices for interested readers. Citation appendices would be no different from other supplementary information: they provide key background information and can be with or without commentary. With the advent of hyperlinks, a symbol could be included wherever additional citations are made; clicking on it would lead to the corresponding citations in the supplementary information.

______________________________________________________


*This post is based on a chapter from An Editor's Guide to Writing and Publishing Science (Oxford University Press). Cover image credit: Alex Cagan.


[1] Bornmann, L. & Mutz, R. 2015, Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology 66(11), pp. 2215-2222.


[2] Johnson, R. et al., 2018. The STM Report. An Overview of Scientific and Scholarly Publishing. 1968–2018. Association of Scientific, Technical and Medical Publishers, The Hague.


[3] Derek J. de Solla Price is often credited as the “father of scientometrics.” He wrote the seminal book Little Science, Big Science . . . and Beyond in 1963. Among his important insights was the prediction that scientific productivity would continue to grow exponentially, but there would be a slower increase in the number of quality researchers, meaning that an increasing proportion of science would be of low quality. He believed that science was unsustainable and a “scientific doomsday” was less than a century away.


[4] Rowlands, I., Clark, D., Jamali, H., et al., 2012. PEER D5.2 USAGE STUDY Descriptive statistics for the period March to August 2011. https://hal.inria.fr/hal-00738501


[5] Abt, H.A., 1998. Why some papers have long citation lifetimes. Nature, 395(6704), pp.756–757.


[6] The 80:20 law. Citation rates in the mid-twentieth century were skewed such that 20 percent of published work received 80 percent of the citations. This ratio has shifted to 40 percent producing 80 percent of citations in 2005, possibly due to increases in the representation of journals publishing highly cited work. Sugimoto, C. & Larivière, V. 2018. Measuring research: what everyone needs to know. Oxford University Press.


[7] Van Noorden, R., 2017. The science that’s never been cited. Nature, 552(7684), pp.162–164.


[8] The "Matthew effect", or accumulated advantage, is when one’s early accomplishments beget status, credit and further accomplishment (7). In other words, “the rich get richer and the poor get poorer.” Examples of the types of achievements exhibiting the Matthew effect include: publishing in high impact journals, being cited, science funding, invites to present talks and academic prizes. Merton, R.K., 1968. The Matthew effect in science. The reward and communication systems of science are considered. Science, 159(3810), pp.56–63.


[9] Wang, D., Song, C. & Barabási, A.-L., 2013. Quantifying long-term scientific impact. Science, 342(6154), pp.127–132.

