
Peer review

Martijn Kemerink

The self-archived postprint version of this journal article is available at Linköping University Institutional Repository (DiVA):

http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162955

N.B.: When citing this work, cite the original publication.

Kemerink, M. (2019), Peer review, Legal History Review, 87(3), 291-298. https://doi.org/10.1163/15718190-00873P06

Original publication available at: https://doi.org/10.1163/15718190-00873P06

Copyright: Brill Academic Publishers (12 months), http://www.brill.com/


Peer review

Martijn Kemerink

Abstract:

Over the years, peer review has developed into one of the foundations of science as a means to provide feedback on scientific output in a relatively objective manner. While peer review is done with the common good in mind - specifically to provide a quality check, a novelty and relevance check, fraud detection and general manuscript improvement - it has its weaknesses and faces threats that undermine its effectiveness and even its goals. Herein, I address the role of the various actors in the peer-review process: the authors, the editors, the reviewers and the broader society. While the first three are active participants in the process, the role of society is indirect, as it sets the boundary conditions for the process. I will argue that although authors, editors and reviewers are all partly to blame for the sub-optimal functioning of the system, it is the broader society that, intentionally and unintentionally, causes many of these problems by enforcing a publish-or-perish culture in academia.


This manuscript is a reworked version of a talk I gave on the occasion of the centennial celebration of The Legal History Review on October 5, 2018. Since the talk was supported by slides that were neither intended nor suited for publication, key information from the slides has been integrated into the text and footnotes.

The following describes several aspects of peer review, based not on elaborate statistics but on personal experience. This experience covers more than 25 years, and I daresay that I am a relatively experienced author and reviewer of scientific papers of various types. In addition, peer review is of course a topic that is discussed a lot among colleagues. Hence, although the evidence I have to back up my views is solely anecdotal, I believe these views are of some general relevance, at least for the broader field in which I am active. I am a physicist by training and profession, and currently group leader of a small division at Linköping University in Sweden, working in the strongly interdisciplinary field of organic electronics, in which chemistry, physics and materials science meet.

Since the numbers that are commonly used to rank scientists will play an important role later in this talk, I will provide mine1. The key number is the h-index, 38 at the date of speaking, which means that I authored 38 papers that have each been cited at least 38 times. I (co-)authored more papers, but they have not (yet) reached 38 citations. To put these bibliometric scores in context: in my field they are considered ok for somebody my age, but they are not spectacular. Moreover, such numbers are very difficult to compare across fields, as publishing and citing practices can differ a lot.

1 My background is in Applied Physics and my current research focuses on solar cells, thermogenerators and memories based on organic materials. In my capacity as head of the group 'Complex Materials and Devices', I have been involved in peer review for the last twenty years. I review more than 20 articles per year for all kinds of journals, including the top-ranking ones, next to serving as Advisory Editor for Elsevier and as editor of Scientific Reports. I have published more than 170 peer-reviewed articles. My h-index is 38, with an average of around 35.5 citations per paper.
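To make the definition operational: given a list of per-paper citation counts, the h-index is the largest number h such that at least h papers have at least h citations each. The following minimal Python sketch, using made-up citation counts, illustrates the calculation.

def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers with these citation counts.
print(h_index([10, 8, 5, 2, 1]))  # -> 3 (three papers have at least 3 citations)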

I think the goals of peer review are a quality check, a novelty and relevance check, fraud detection and general manuscript improvement. Although this all sounds noble and benign, it can be a very intimidating process, especially for less experienced researchers - e.g. my PhD students. From their perspective, you have a manuscript into which you have put a lot of yourself, including effort and pride, and then between the manuscript and the final publication you find a lot of anonymous and faceless actors - editors and reviewers - whose only goal seems to be to stop you from publishing your manuscript.

An ideal scenario for peer review would be something like the following. There is an author, or in my field typically a group of authors, who wrote a manuscript with a certain audience and journal in mind. The choice of journal is typically made by the senior author(s) of the manuscript on the basis of an assessment of where it belongs in terms of topic, breadth of relevance and potential impact. At the journal, the manuscript is received by a knowledgeable editor who checks whether he or she agrees with this assessment and, if so, forwards it to a number of reviewers who are specialists on the topic, are neutral, and have plenty of time to produce critical yet constructive reports. The reports go back to the journal, where the editor forwards them to the authors, who improve the manuscript on their basis. After a few iterations, this leads to either the acceptance or the rejection of the manuscript. In the latter case, the whole procedure typically starts over at another journal.

As mentioned, there are several reasons to have this procedure. The most important one by far is the quality check. To me personally, however, the manuscript improvement is equally important: even if the peer review process leads to rejection at your preferred journal, clever but critical reviewers have a different perspective than your own, and that will help you improve the quality of your manuscript. In addition, there is a novelty and relevance check, especially at more high-impact journals. That check is shared between the editor and the reviewers. The editor in particular plays an important role in this, and I will come back to that role later on. Finally, a little bit of fraud detection can happen during peer review, and that too is something I will come back to.

Where there are noble goals and purposes, there are also threats, and they come, evidently, from all actors involved in the reviewing process. However, there is one additional actor that is not explicitly present in the scheme but that is very important: the broader society. This, too, is something I will come back to. The remainder of this text will focus on the threats that come from these four actors: authors, editors, reviewers and society.

The first threat to point out comes from the authors - we who do all that writing. In a nutshell, we simply write too much, and not just a little bit. We do so by disseminating incremental results, also known as data slicing. What that basically means is that we have a nice big set of results that we could capture in a comprehensive paper providing a full, nuanced picture. Instead, we distribute it over a number of letter-type papers, for two reasons2: having three papers is better than having one, and letter-type papers typically end up in journals with a higher impact factor, so that is a double bonus. Another thing we do: if we believe that our work could perhaps land in a higher-ranking journal, we will try. Each submission induces a number of reviews - typically three reviewers per paper - and if we fail at the first journal, we go down one level and induce another set of reviews; sometimes this repeats several times for a single manuscript. This does of course not happen with all papers, but it does happen all too often. Such behavior puts stress and strain on the system, as it causes a tremendous amount of reviewing work, which evidently does not enhance the quality of the reviewing.

2 In practice we might not even wait until the bigger picture is clear, but publish as soon as we have enough to fill a manuscript.

Why we write too many papers is a topic I will come back to, but there is one other thing about authors, or at least some of them, that should be brought up. Some authors fall for the short-term advantages of unethical behavior. This is not the main topic of this talk, because I think peer review is not made to detect it. Nevertheless, the more blatant examples make very nice stories for the bar. My personal favorite concerns a person who proposed a book to Elsevier, where I got to review the book proposal. I did not know him, which is already suspicious, because the field is not so big that one does not know one's more senior colleagues. I googled him, and almost the first hit I got was from the FBI. He had been caught red-handed at intellectual property theft, had done significant jail time in the US, had been expelled from the country afterwards, only to be made professor in China. I suggested to Elsevier that maybe we should not work with this person.

Unfortunately, as a reviewer you stand almost no chance against more subtle types of unethical behavior, like cherry-picking from scattered datasets to support a desired conclusion. In a way, this is not really a problem, because the system has a very strong defense in that it corrects itself. If you publish something that is based on fraud but appears to be groundbreaking, you will be found out, because others will try to follow up on it and will fail. Although the system corrects itself, the process of doing so can be a disaster for the individual researcher whose project was supposed to build on the supposed breakthrough and who sees his or her project fail. Worse, at the system level, is that such incidents tend to get attention in the press and, especially in this period of fake news, that is bad for the trust in science as a source of objective information.

A popular activity among authors is complaining about reviewers who do not understand our obviously brilliant work, and that pastime is as old as peer review itself. 'Clown' and 'monkey' are among the friendlier qualifications that we have for our anonymous peers. There are several causes of poor reviews. The first is evidently an open door: the topic may be outside the reviewer's competence. This does not necessarily happen out of arrogance or general incompetence on the reviewer's part. I know the feeling of wanting to be nice to an editor: (s)he comes to you, and you know it is hard to find a reviewer. You think, I worked a little bit on that, I can do it. And then, sometime later in the discussion with the authors or other researchers, you think, maybe I should not have taken this one. Second, as a reviewer you too are pressed for time, because you have to publish, write proposals, teach, etc. And then, as a third factor, there is competition. Your peer reviewer is also your competitor. Evidently that can become an incentive for sub-par reviews.

There are many more symptoms of poor reviews than I can discuss here, so this list is a personal top three. The first one, a reviewer demanding to be cited, is mostly annoying. What typically happens is that you get a more or less positive review, stating that 'you have done nice work, but you have to cite a few more papers', followed by a number of DOIs. You look them up, and it turns out that three out of four have the same last author. That means that somebody is boosting his or her own citation score, but as an experienced corresponding author one can deal with that without compromising one's paper or integrity.

The second symptom of a poor review is more annoying. Your reviewer is pressed for time and has only a quick glance at your paper before writing a report. What will often happen is that work in line with common wisdom gets accepted, while work that is more original and goes against the mainstream has a large chance of being rejected - 'That can't be true'. I have colleagues who more or less live by this rule and argue that if something 'goes in' without opposition, it is not original. So, if there is opposition, it might be an indication that you are actually doing something that matters. Nevertheless, this is of course annoying while it is happening.

The third symptom, 'hold & scoop' - my own terminology - is fortunately very rare, although I do know a handful of examples from reliable sources. What it means is the following: a well-known, high-status individual is asked by a highly cited journal to review a breakthrough paper and thinks, hey, that is nice work. Then (s)he misuses the reviewer position to effectively put the publication on hold, delaying the process through late and lengthy reports with lots of questions that require a lot of effort to answer. In the meantime, the reviewer's own group is pushed full speed ahead to reproduce the results, or parts of them, write a manuscript and submit it to another high-impact journal.

A lot has been said recently about publishers, especially in the context of open access. A commonly encountered line of argument is that commercial publishers get the manuscripts for free, have them reviewed for free, and then make the authors and reviewers pay for the end product. Although there is a great deal of truth in this picture, it is also far from a complete picture of the world. When I listed the reasons why we do peer review, the implicit motivation was the common good: we want to have high-quality science in the journal where it fits best. However, publishers, and not only commercial ones, have one very clear additional interest, and that is their market share. No market share means no readers, no subscriptions, no income, no journal.

Certainly in my field, market share connects directly to what is known as the journal's impact factor, which is the typical number of citations that an article in that journal attracts in the first two years after publication. Any publisher will at least follow this number, and very often they will adapt their behavior to optimize it. What that has to do with peer review is the following. The incentive for publishers to keep and increase market share has a very strong tendency to bias the neutrality of the reviewing process, because the editor decides which manuscripts are sent out for review and which are rejected upfront. That easily creates a bias towards certain authors, a bias that can be implicit or explicit. If, again, I look in my own field at the research groups that one regularly finds in the top-tier journals, let us say Nature or Science, it is only a handful. I am perfectly willing to believe that some researchers are a bit cleverer than others, but I am not willing to believe that there are only five groups in the world where truly original thinking is happening. So, there are other factors involved as well. Not only journal editors, but also we scientists tend to believe that if somebody has published a couple of papers in Nature or Science, they 'are good', and that conviction biases our reviews. All of that is implicit.

I know of at least one publishing house where something like a fast lane exists. A fast lane is for those lucky authors who have already attracted a lot of clicks and citations - good for your journal's impact factor. When their manuscript lands on the desk of an editor, the editor will always send it out for review. People with experience at high-impact journals know that getting past the editor is often more difficult than getting past the reviewers. Then, if the review reports are not overly enthusiastic, these fast-lane authors get a more tolerant treatment than simple mortals such as this speaker. In the end, it means that what you find in your favorite high-impact journal has not always been ranked on the same scale: some animals are more equal than other animals, which is not really the idea of reviewing - or of science.
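As an aside, the two-year impact factor that the text paraphrases can be made explicit: the citations a journal receives in a given year to the articles it published in the two preceding years, divided by the number of those articles. A minimal Python sketch, with purely hypothetical numbers:

def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    # Citations received this year to articles published in the previous
    # two years, divided by the number of articles published in those years.
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical: 1200 citations in one year to the 400 articles a journal
# published in the two preceding years give an impact factor of 3.0.
print(impact_factor(1200, 400))  # -> 3.0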

The fight for market share also leads to certain types of papers being promoted. I already mentioned that the impact factor looks only at citations in the first two years after publication. If you are a commercial publisher, interested only in your impact factor, and you publish a ground-breaking piece of work that takes a couple of years to take flight - which is not uncommon for something truly original - you might as well not have published it, because it will not contribute to your impact factor. What such journals go for are the two R's, Reviews and Records, because these will generate a lot of traffic and a lot of citations in the first two years; after those two years, virtually all records completely lose their relevance. Publishing reviews is also a proven method to get a new journal going, especially if you manage to convince a number of high-profile authors to write them. The focus on impact factor also leads to a bias towards certain topics, namely topics that have a large established community working on them, because a large community means, again, a lot of citations.

Although the above is bad enough, what publishers in the end do, especially the greedier ones, is seize the opportunity that we give them. That is what, in our world, entrepreneurs are supposed to do: they have to seize the opportunities that are there, and the opportunity to make a lot of money by publishing science is simply there. Just to give you a flavor of the volumes we are talking about: I recently learned that my home university in Sweden cancelled its agreement with Elsevier regarding journal access. Elsevier does not sell at the single-journal level - in my field Elsevier does not have a Nature- or Science-type journal - but sells large journal packages. By cancelling that agreement, my university saved around 300,000 euros. We are talking about a single medium-sized university somewhere in the far north of Europe. Multiply that by the number of universities in Sweden and by the number of countries in the world, and you understand that publishing is a really profitable business.

How did things come to this? That is where society comes in. Even though society has no direct role in peer review, it sets the boundary conditions within which we have to do our research and teaching. What has happened over the last twenty years or so is that society has put a continuously increasing emphasis on 'excellence' and 'competition' in the distribution of funds. This, in turn, has led to a culture of 'publish or perish', a term that I guess is known to many of you. Unfortunately, reality is more like 'publish frequently in high-impact journals and maybe you won't perish'. As an established senior scientist, I am relatively well off, since at least I have a secure job contract, but more junior researchers typically work on temporary contracts, and for them it is really 'up or out'. What happens is determined by, basically, how many papers you have and in which (high-impact) journals.

Another symptom of the 'excellence' culture is the continuous evaluation of scientists, both at the individual level when new jobs or grants are at stake, and at the aggregate level when groups of scientists are assessed for quality - be it a research group, a department or a whole university. What invariably happens is that use is made of bibliometrics, very often of the simplest kind, which can be summarized as 'big is beautiful': a high h-index, papers in high-impact journals, a large number of citations, etc., all irrespective of the size of the group and sometimes even irrespective of the field you are in. The h-index is notorious for being very favorable for people like me, who publish in international English-language journals; it is very disadvantageous for the humanities, the social sciences, etc.

There are many consequences of the widespread use of simple bibliometrics, but in the context of peer review it leads to publication pressure: who publishes the most in the shortest amount of time? A statement that holds to a degree for myself, and of which I know it also holds for many of my colleagues, is that I would much rather publish a little less and have a little more time to polish things. But that time is hardly there.


That leaves the societal demand for competition - virtually no research funding is distributed without some sort of competition. Of course, if society asks us to compete, we will compete, but I already mentioned that if your peer is also your competitor, this leads to conflicts of interest and can offset and bias your neutrality as a reviewer. One other thing, as a side note: the same society also wants us to collaborate - especially in Sweden, collaboration is a big thing - and it has become a goal in its own right. The same holds for many Dutch and European funding schemes. So you end up having to collaborate with your competing peer, which is a pretty interesting experience.

The above sketches some aspects of the current state of peer review in (a part of) science that may seem pretty gloomy at times. However, I will end on a positive note. Despite all these threats, the system functions relatively well, and there is a lot of goodwill in it, not only on the side of authors and reviewers, but also among journal editors and publishers. If I had to grade it on a scale of 1 to 10, I would give it maybe a 6 or 7, albeit with a lot of variation. That means there is ample room for improvement, and although I do not have the time to elaborate on any of these ideas, I would like to list a few suggestions.

First, commercial publishers somehow have to be contained, so that they are no longer able to maximize profit without limits; to this I should add that I do think we must acknowledge that there is significant added value in publishing, and that we should not expect to get it for free. Moreover, rigorously enforcing publication to happen only in open-access journals, as is the idea of Plan S, is not the way to go. Second, I think it is a good idea to somehow lower publication pressure, which will unavoidably require alternative ranking schemes; note that this is not a call for more subjective ranking schemes. Even better would be to reduce the need for ranking altogether - increasing baseline funding would, for example, be helpful. One thing I would like to note about these suggestions is that they mostly ask society to change the boundary conditions within which scientists operate. That is not intended to take away the responsibility of scientists to behave with integrity, but one should not be surprised when the homo economicus in us makes us behave in ways that are rational within the given boundary conditions.
