We sometimes underappreciate the extent to which science is a community endeavor. Yes, science is done by individuals, laboratories, and through multi-institution collaborations, but there is a less tangible ingredient: the scientific community. Through the community network, information is broadcast, corrected or rejected, and learned, based on accuracy and relevance.
Science evolves, both the knowledge itself and how scientists operate. Beyond calling out fraud, errors and the fringes of quality, scientific progress depends on open channels of communication and the willingness to correct and improve. Individuals and research teams are the units of selection, and it is through their networks of interactions that information percolates through the scientific community.
But there is no ultimate power that guides science, making its evolution clunky at best. True, committees, be they at academic journals, granting agencies or research institutes, can have a considerable effect on the science they oversee. Yet scientists have themselves devised an overarching mediator: consensus. Consensus is a collective agreement on knowns and unknowns. It can take the form of a review, synthesis, opinion or perspective piece [1], or a more straight-up "consensus statement". Consensus does not mean that everyone agrees, and some of the most powerful statements include points of disagreement.
The information problem
Published science can be challenging to unpack, even for scientists themselves. Scientists typically access the published literature through specialized platforms (databases such as the Web of Science, journal websites, Internet searches, or colleagues). Media, and social media in particular, have transformed this universe. In addition to pre-Internet media (which itself has diversified), we now have podcasts, news networks, blogs, Twitter, Facebook, Instagram, ... The explosion of such outlets means that the usual standards of scientific communication are sometimes compromised or even abandoned. As such, information channeled through the media can be communicated in a number of ways: verbatim, distilled, contextualized, and, unfortunately, either intentionally or inadvertently altered.
Social media provides a free channel for broadcasting and interaction. Announce your latest research, see what others are up to. We interact, we learn. However, this freedom naturally generates considerable quantities of information, and person-to-person variation in the information emitted and the reactions elicited. If I tweet much more than you then, all else being equal, I will get more followers. Viewers' time is limited, meaning that in a population of thousands or millions, everyone cannot possibly follow everyone else in a meaningful way [2]: indeed, the majority of accounts have few active followers [3]. I am not saying that one person influencing just a few is unimportant, but rather that a few have orders of magnitude greater influence than the average. Social media activity – as much as real expertise – can generate apparent "expertise" and, more disconcertingly, influence.
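The point that a few accounts wield orders of magnitude more influence than the average can be illustrated with a toy heavy-tailed (Zipf-like) follower distribution. All numbers here are hypothetical, chosen only to show the shape of the effect, not drawn from any real platform data:

```python
# Toy sketch: in a Zipf-like (heavy-tailed) follower distribution,
# a handful of accounts dwarf the average. Numbers are hypothetical.

N = 100_000  # accounts in the toy population

# Followers fall off as 1/rank (Zipf exponent s = 1), scaled so the
# top account has one million followers.
followers = [1_000_000 / rank for rank in range(1, N + 1)]

average = sum(followers) / N
top = followers[0]

print(f"average followers: {average:.0f}")
print(f"top account:       {top:.0f}")
print(f"ratio top/average: {top / average:.0f}x")
```

Under these assumed parameters, the top account ends up thousands of times more "visible" than the average one, even though the vast majority of accounts have only a handful of followers.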
Expertise
Expertise is a mainstay of science and contributes to rational solutions for practical problems. It has become ever more important since the advent of social media. In the pre-Internet era, scientists relied on specialist journals, popular science magazines, books and conferences, and there was far less cross-disciplinary communication than there is today [4]. In partial contrast to the experts themselves, the general public relies on various media outlets, such as TV, radio, newsprint, magazines, and non-fiction books.
But what is expertise? Does it take an expert to declare another as one? Clearly a conundrum, but here's a start. Whenever I think of attaining expertise, I conjure Simon and Chase's "10000 hours" [5]. The central idea is simple: if we plot the distribution of performance at a challenging task (in their article, playing chess) and arbitrarily say that the top X% of performers are "experts", then a certain number of hours of experience is required to attain this level. For chess expertise, these authors arrive at approximately 10000 hours. Assuming dedication of 4 hours a day, 5 days a week, we come to about 10 years of work to reach this echelon. There are many caveats; in particular, talented individuals may take 'only' a few thousand hours, whereas the less talented may need far more than 10000 (if they ever get there). The point is that dedicated work is necessary to understand and perform a difficult task, and this still does not resolve the issue of referring to someone as "an expert" [6].
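The back-of-the-envelope arithmetic above can be made explicit. A minimal sketch, assuming the 4 hours/day, 5 days/week regime from the text and roughly 50 working weeks per year (an assumption the text leaves implicit):

```python
# Back-of-the-envelope: years needed to log 10000 hours of practice.
# Assumes 4 h/day, 5 days/week, ~50 working weeks/year.

TARGET_HOURS = 10_000
HOURS_PER_DAY = 4
DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 50

hours_per_year = HOURS_PER_DAY * DAYS_PER_WEEK * WEEKS_PER_YEAR  # 1000 h/year
years_needed = TARGET_HOURS / hours_per_year

print(f"{years_needed:.0f} years")  # prints "10 years"
```

The same arithmetic shows why "talent" matters: halving the target to 5000 hours halves the time to five years, while a slower learner needing 15000 hours is looking at a decade and a half.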
COVID-19
This same basic principle applies to understanding the vexing challenges of the SARS-CoV-2 virus and the associated disease COVID-19. Clearly, no one could possibly have crammed the necessary hours – even if far fewer than 10000 – during the first months of the pandemic to claim to be an "expert" in the Simon and Chase sense. This is in large part because much of the virus's biology was unknown or based on preliminary findings. Bona fide budding experts said "we do not know" when they didn't know. Nevertheless, backgrounds in epidemiology, virology and public health jump-started COVID-19 expertise, and pre-COVID-19 experts of other infectious diseases morphed into COVID-19 mavens. Viewers – those outside the field, from the media to the general public and even some scientists themselves – had little to go on in judging expertise and authority, so they search for clues. In the Twittersphere these include a tweeter's photo and bio, home institution, number of followers, and the nature of their tweets and retweets. Disconcertingly, social media activity (and indeed media activity more generally) made some individuals look more expert or authoritative than they really were! More tweets, more retweets, more followers [7]. Information overload. Facts become fuzzy. Viewers get giddy. Some accept preliminary facts as "hard facts" more than they should.
To be clear, I am not saying that expert information is not factual. Facts come in all shapes and sizes, from rigorous and relevant to flimsy and futile. Determining factualness is an important challenge. Short of spending one's life perusing the technical literature, we rely on an expert's position (academic, official committee) and demeanor (poised, clear, scholarly). Even then, there is so much information to digest that we are obliged either to take the high road, sorting expert wheat [8] from less-than-expert chaff, or to take the low road and defer to the word of authorities.
A fake authority
Similar to expertise, there is no objective and official decree of authority. Authority is garnered through consensus and experience, but it can also be attributed by simply referring to someone as "an authority". Authority is not necessarily commensurate with expertise. A perfect example is Dr. Didier Raoult, a virologist who has emerged as an authority on COVID-19 in France – with little if any apparent expertise in epidemiology. What has helped make Dr. Raoult an "authority" is politicians, journalists, health experts and even scientists themselves referring to him as "an excellent scientist". Dr. Raoult is arguably an excellent virologist [9], but based on his repeated misunderstanding and even flouting of the scientific method, he is not an excellent scientist in my view, and in feigning to be one, he is an irresponsible communicator to the public [10]. Nevertheless, given his popularity in a segment of the French population, his pronouncements continue to fuel conspiracy theories, anti-system and populist dogma, and skepticism of science and of the French government's policies. He has arguably produced (much) more harm than good.
Raoult is a somewhat extreme case of how authority can harm (think of Donald Trump’s former Twitter account!), but he and many like him make the more general point that a plethora of voices – be they experts, authorities, both or neither – can confuse, and this is exacerbated by fake authorities and people in power. The information ecosystem does self-regulate to some extent (e.g., the sea of critiques aimed at Raoult), but because of its open nature, the ecosystem tolerates both a confusion of facts and the acceptance of fiction.
Issues
So, I see two issues that media and social media have brought to the fore generally and more specifically in the context of COVID-19.
1. Babble. The information coming from many independent sources [11] – a multitude of facts, not-quite-facts, unknowns and ways in which they are expressed – can be challenging to understand and apply. Worse, too much information can result in denial, cynicism and disengagement. To be clear, I am not suggesting that experts curtail their work and pronouncements. Experts are fundamental to scientific progress and to communicating with decision makers, journalists and the public. However, freedom of voice and the lack of oversight call for individual responsibility.
2. Responsibility. When newsprint or TV looks for expert opinion, where does it go? To people at think-tanks and universities. To experts recommended by other experts. A limited network that creates reader or viewer trust. As such, erudite individuals can gain spectacular audiences. Prime airtime on a major network, stream or podcast can touch millions of people. Even top social media accounts can be seen by many thousands or millions. Names become familiar, many get big, some get very big, and a select few become authorities.
But with influence comes responsibility, not only because saturating the information commons with ignorance can harm [12], but because choosing statements carefully can actually help. Unfortunately, some experts do not fully appreciate their influence, nor how the greater universe of pronouncements can overload the information commons. Perhaps they don’t believe their impact is all that great, or if they do, think that their activity is perfectly responsible.
These issues are akin to "The Tragedy of the Commons" [13]. In the present essay the tragedy is largely one of self-regarding agents collectively saturating the information commons: no higher source exists to oversee the quantity and accuracy of broadcasts. But what about bottom-up solutions, the obvious one being self-restraint? If there are too many experts talking, why not take it upon oneself to broadcast less and to be more attentive when one does? Am I really an expert on what I am about to say? What is the audience, and how many people are actually watching or listening? Should I not pronounce, will someone else – with less expertise, or who does not share my views – do so? Answering these questions depends on the would-be broadcast, on who is watching, and on the nebulous context of the information commons. A tweet today could be received quite differently than the same tweet tomorrow.
The point is that no single person feels responsible for the overall effect on the information commons – and indeed the (vast) majority of individuals have little or no effect. This is why the Tragedy occurs [14], and this is where expert consensus can help.
My Tweet
On May 21st, 2020, I tweeted the following message, drawing the attention of a number of leading researchers in COVID-19 epidemiology.
Again, in no way do I question the intentions and work of these and other experts and their roles in scientific progress and in communicating through the media. Instead, my point is that with the objective of greater coherence and authority, additional well-timed group statements can foster scientific progress and communication.
To be fair, it’s not as if experts are not thinking of this! Rather, few appear willing, or feel qualified, to organize expert COVID-19 consensus statements [15]. Experts may be shy, busy, have little common ground with colleagues, or simply not believe that consensus is possible. The latter is particularly important in the context of COVID-19: consensus in science usually takes years or decades... if it emerges at all. Given all the unknowns associated with COVID-19, does one risk being part of an enterprise that turns out to be so conservative as to lack teeth, or worse, risk signing a statement based on preliminary data that is subsequently shown to be erroneous or even misleading?
I do stand by my original tweet, but would add the caveat that choosing the right time for consensus statements is far from straightforward with COVID-19: new variants, the complex, multiscale nature of the pandemic, and of course the races to get populations vaccinated. If COVID-19 was complex last May, then, for want of a better term, it is currently vexing. As such, many experts are shying away from making forecasts beyond a couple of weeks.
Despite the challenges of consensus, certain initiatives have been taken. For example, the Biden administration's creation of the National Center for Epidemic Forecasting and Outbreak Analytics may be a reaction to arguments for such a center last year. Another type of push is op-eds signed by groups of experts, a recent example being Aiming for zero Covid-19: Europe needs to take action. And finally, there are expert surveys that poll views and explain consensus results on some of the basic unknowns surrounding COVID-19. What influence these and many other initiatives will have remains to be seen [16]. Nevertheless, I know of no international consortium, created by the experts themselves, that has produced consensus statements. Yes, expert groups are being assembled top-down by research institutes and governments, but bottom-up, individually initiated efforts [17] would have a different character, and therefore a different impact. Consensus starts with individuals.
***
Consensus statements do have their own challenges. What spectrum of experts and disciplines makes a consensus? At what scale (city, region, country, ...) is consensus possible and useful? Should such statements be published in major scientific journals and/or newsprint? How can consortia most effectively resolve the tensions in accurately speaking to academia and (possibly through journalists) the general public? Aren’t we just back to "square one" if a large number of consortia each publish their own statement?
These questions have no simple answers.
COVID-19 may not escape the rule that consensus statements actually require consensus among experts that such an endeavor is even possible and useful. This may be as challenging as predicting the future course of the pandemic itself.
.........................................
[1] Of course, each of these rubrics can be written by single authors or close collaborators, but I’m referring to independent researchers coming together to express commonalities, contrasts and differences, such as the product of a symposium or workshop.
[2] Reminiscent of the Dunbar Number.
[3] See, for example, https://doi.org/10.3390/e17085848
[4] Major journals like Science and Nature are notable exceptions, as are popular science magazines such as New Scientist and Scientific American.
[5] Simon HA & Chase WG. 1973. Skill in chess. American Scientist (61) 394-403. The idea was popularized by Malcolm Gladwell in Outliers: The Story of Success.
[6] As the erudite reader will see, whereas certain pre-defined norms are required for being deemed an ‘expert’ in chess, this is not the case for expertise in science, which has no simple, objective yardstick.
[7] More about types of Twitter networks at https://www.pewresearch.org/internet/2014/02/20/mapping-twitter-topic-networks-from-polarized-crowds-to-community-clusters/
[8] Numerous issues emerge when expertise is needed to generate facts and the facts must be communicated to a diverse audience. These include the ability of scientists to measure and analyze data, the transformation of results into preprints and articles in peer-reviewed journals, and the verbal or written communication of results to specialists, other scientists and the public.
[9] For readers in academia, his H-index (a measure of a researcher’s productivity and citation impact) as of this writing is 185. An H-index of approximately one’s age is considered excellent.
[10] For example, he has recently emphatically claimed that lockdowns do not have any effect on COVID-19 epidemiology. Article in French: https://www.laprovence.com/actu/en-direct/6253665/coronavirus-pour-didier-raoult-il-ny-a-pas-devidence-sur-lefficacite-dun-confinement.html
[11] Not only ground-level scientists and pundits, but journalists, politicians, ... and interviews with the general public.
[12] Mix into this the issue of social media hubris and spouting. I can tweet what I want, as much as I want, and whenever I want. If someone does not like it, they can unfollow or ignore me. Nevertheless, we cannot ‘unsee’ something we have seen. The lack of censorship or dedicated correction is certainly a good thing, but it does open the door to approximations, misunderstanding and knee-jerk reactions.
[13] I refer the reader to the original article, the basis of which will be discussed in future blog posts. Hardin, G. 1968. Science 162, 1243-48.
[14] But surely, if the panacea were restraint and this became generalized, then the risk is that the information commons would become dominated by greenhorn gobbledygook and machinator mischief. Another type of Tragedy!
[15] Although not the same as a consensus statement, many experts have indeed teamed up or been interviewed collectively by journalists. This contributes enormously to understanding and public trust. See, for example: https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/; https://www.gov.uk/government/organisations/scientific-advisory-group-for-emergencies; https://www.cogconsortium.uk/; https://www.quantamagazine.org/the-hard-lessons-of-modeling-the-coronavirus-pandemic-20210128/
[16] Any consensus reflects the scientific diversity and backgrounds of the experts involved.
[17] Stemming, for example, from thematic workshops, but also from a small number of colleagues who can each call on their respective expert networks.