Journals and the quality of peer reviewing – a customer's perspective

Based on a paper presented at the 31st UKSG Conference, Torquay, April 2008.

The evidence that peer review and editing are unreliable at detecting errors due to bias and chance increases every year. It is now clear that 'peer-reviewed' is not a sufficiently good quality stamp for the peer-reviewed literature to be recommended to clinicians or patients. There is little evidence that the quality of peer reviewing and editing is improving with time. Those who procure journals will expect significant improvements in the quality of these two activities in the next five years.

Journals concentrate primarily on evidence, the knowledge produced by research, and that knowledge has helped transform population health in the last 50 years.
However, we are now in the middle of what Manuel Castells has called 'the third industrial revolution', and this is affecting health just as it is affecting every other activity and industry. The first industrial revolution was based on common sense. Empirical work was certainly done: people observed how to make better steam engines or spinning looms, and people observed that cholera was caused by something in the water, allowing them to take steps to prevent cholera long before the causative bacterium was identified by scientists.
The second industrial revolution, and with it the second healthcare revolution, was driven by science: by chemists and engineers, by physicists and statisticians. It gave us the advances in medical diagnosis and therapy that have led to the reduction in mortality and the increase in life expectancy observed since the start of the National Health Service (NHS).
However, the third industrial revolution is not driven by scientists or experts; it is driven by three inter-related forces (see Figure 1).

Serials, 21(2), July 2008

MUIR GRAY
Director, NHS National Knowledge Service

The role of citizens
The National Library for Health, working with NHS Choices, is developing a common evidence base for both citizens and professionals. We see no reason why the public should be excluded from looking at the journals which professionals look at, and we will be negotiating this in future. Because members of the public do not subscribe to the journals at present, this will result in no loss of revenue to the publishers.
Information technology, shorthand for the Internet and all the peripherals through which it is expressed and connected, will also have a dramatic impact on the dissemination and implementation of evidence.
For publicly funded documents, for example those from the National Institute for Health and Clinical Excellence (NICE), work is in progress so that single-sentence recommendations for action within a long document can be recognized, perhaps through a digital object identifier (DOI), and automatically routed to appear in a relevant laboratory report or prescription. The way in which a single sentence from a paper can be displayed within a laboratory report is shown in Figure 2 below. However, like every advance in this dynamic situation, the Internet creates problems while it creates solutions. As the problems of access recede, the problems of overload increase. Furthermore, the ability of clinicians to access new information immediately means that they are more quickly and easily exposed to the findings of research projects that are flawed, research reports that are published after inadequate peer review and editing, and reports which still contain errors due to bias and errors due to chance. Even more worryingly, these reports may not provide the information the reader needs to judge for herself or himself whether there are errors due to bias and chance.
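The routing idea described above can be sketched in outline. The sketch below is purely illustrative: the registry, the DOI values, the report format and the function name are all assumptions for the purpose of the example, not a description of any actual NICE or NHS system.

```python
# Illustrative sketch: routing a DOI-tagged recommendation into a report.
# The registry contents and DOI below are invented for this example.

RECOMMENDATIONS = {
    "10.9999/nice.example.1":
        "Recheck serum potassium within one week of starting treatment.",
}

def attach_recommendation(report_lines, doi):
    """Return a copy of the report with the recommendation registered
    under `doi` appended; if no recommendation is registered, the
    report is returned unchanged."""
    sentence = RECOMMENDATIONS.get(doi)
    if sentence is None:
        return list(report_lines)
    return list(report_lines) + [f"Guidance ({doi}): {sentence}"]

report = ["Serum potassium: 5.9 mmol/L (high)"]
for line in attach_recommendation(report, "10.9999/nice.example.1"):
    print(line)
```

The point of keying on a persistent identifier rather than on free text is that the laboratory system never needs to parse the guideline document itself; it only needs to resolve an identifier to a registered sentence.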
Figure 3 shows the chain of action which leads to publication of evidence.
Problems occur at every stage, and not all of the blame for a poor quality publication should be laid at the door of editors and peer reviewers or, more appropriately, the publishers who are accountable for their products. Research is still too often flawed in its planning, its execution or its reporting.[3][4][5] This all leads to what is called 'positive publication bias', and some studies[6] have shown that positive publication bias, including duplicate publication of positive results, increases the apparent size of the beneficial effects of an intervention by about one third.

Problems caused by poor peer reviewing and editing
Publishers, understandably, expect editors to judge the scientific merit of the articles submitted for publication. Unfortunately, not all editors are sufficiently well versed in research methodology to be able to develop a system to weed out biased or misleading research reports. One reason for this is that researchers do not always report all their findings, reporting, as stated above, the positive but not the negative findings. Fortunately, with the advent of compulsory registration of controlled clinical trials, it will be possible for the peer reviewer to check the results of a research project against its research protocol. The appointment of Citizen Advisers has made a difference to the quality of research publications, but there are problems in research design and reporting which are not statistical and which can be just as misleading.
The work of Iain Chalmers, Mike Clarke and Sally Hopewell[7] indicates the scale of the problem. Over a period of ten years they reviewed the randomized controlled trials published in leading journals. They found that some trials claimed to be the first in the field when they were not, either because the authors did not know that previous work had been done or because they did not wish to acknowledge it. They also found that trials were very rarely based on a systematic review of all the existing evidence, although the Medical Research Council now requires all applicants to demonstrate that they have conducted a systematic review of the evidence before seeking funds to collect new data. Furthermore, the research results were rarely incorporated into the existing evidence base, leaving it to the reader to try to decide whether or not the new results changed the current state of evidence about a particular intervention or treatment. They published first in 1997 and repeated the study twice over the next decade. No discernible improvement was detected, during a period in which the price of journals had increased by 36%.

Improving peer reviewing and editing
Steps have been taken to improve research funding so that research projects in future will be based on a systematic review of the evidence and will publish their protocols. Tools such as CONSORT and QUOROM have been developed for editors and peer reviewers to use; these set out clearly the information that needs to be made available in the publication of a randomized controlled trial or systematic review. Additional checklists have been developed for other research methods and for different types of randomized controlled trial.
The Medical Research Council, the NHS Institute and the Department of Health are funding a project called EQUATOR, whose mission is to develop and provide these tools, and the training to use them.
The specification for journals in future will also include the requirement that the journal has in place a system for appraising the steps that authors have taken to review the literature thoroughly before starting their project. The services of an information scientist will be required by all publishers in future, because a skilled information scientist could appraise a sample of the search strategies submitted by authors to identify the degree to which they had adequately reviewed the scientific literature. An alternative would be to introduce this as a pre-publication step, but if all authors submitting articles knew that a sample would be surveyed, this might have a bigger impact on their behaviour.

Figure 1. The three forces driving 'the third industrial revolution'

Figure 2. Example of a single-sentence recommendation in a laboratory report

Figure 3. The chain of action which leads to publication of evidence