Publications on the internet
UNCORRECTED TRANSCRIPT OF ORAL EVIDENCE
To be published as HC 856-iii
HOUSE OF COMMONS
TAKEN BEFORE THE
SCIENCE AND TECHNOLOGY COMMITTEE
Monday 23 May 2011
DR REBECCA LAWRENCE, DR MICHAELA TORKAR, DR MARK PATTERSON
and DR MALCOLM READ OBE
DR JANET METCALFE, PROFESSOR IAN WALMSLEY
and PROFESSOR TERESA REES CBE
Evidence heard in Public Questions 161 - 248
USE OF THE TRANSCRIPT
This is an uncorrected transcript of evidence taken in public and reported to the House. The transcript has been placed on the internet on the authority of the Committee, and copies have been made available by the Vote Office for the use of Members and others.
Any public use of, or reference to, the contents should make clear that neither witnesses nor Members have had the opportunity to correct the record. The transcript is not yet an approved formal record of these proceedings.
Members who receive this for the purpose of correcting questions addressed by them to witnesses are asked to send corrections to the Committee Assistant.
Prospective witnesses may receive this in preparation for any written or oral evidence they may in due course give to the Committee.
Taken before the Science and Technology Committee
on Monday 23 May 2011
Andrew Miller (Chair)
Examination of Witnesses
Witnesses: Dr Rebecca Lawrence, Director, New Product Development, Faculty of 1000 Ltd, Dr Michaela Torkar, Editorial Director, BioMed Central, Dr Mark Patterson, Director of Publishing, Public Library of Science, and Dr Malcolm Read OBE, Executive Secretary, JISC, gave evidence.
Q161 Chair: Welcome, everyone. Thank you for coming in this afternoon. Perhaps it would be helpful if you could introduce yourselves.
Dr Lawrence: I am Rebecca Lawrence. I am from Faculty of 1000.
Dr Patterson: I am Mark Patterson. I am Director of Publishing at the Public Library of Science.
Dr Read: I am Malcolm Read, the Executive Secretary of JISC.
Dr Torkar: I am Michaela Torkar, Editorial Director of BioMed Central.
Q162 Chair: Thank you. We have heard that pre-publication peer review in most journals can be split, broadly, into a technical assessment and an impact assessment. Is it important to have both? Before I ask you: as you are a panel of four, if you want to say anything but feel that you cannot get your two pennyworth in, please feel free to add any further comments in writing after the session. Who is going to start?
Dr Torkar: I guess that you are asking about the importance of impact and scientific soundness. It is fairly straightforward to think about scientific soundness because it should be the fundamental goal of the peer review process that we ensure all the publications are well controlled, that the conclusions are supported and that the study design is appropriate. That is fairly straightforward as a very important aspect which should be addressed as part of the peer review process.
The question of the importance of impact is more difficult. When we think about high impact papers we think about those studies which describe findings that are far reaching and could influence a wide range of scientific communities and inform their next-stage experiments. Therefore, it is quite important to have journals that are selective and reach out to a broad readership, but the assessment of what is important can be quite subjective. That is why it is important, also, to give space to smaller studies that present incremental advances. Collectively, they can actually move fields forward in the long term.
Dr Patterson: If I may add a couple of points, both these tasks add something to the research communication process. Traditionally, technical assessment and impact assessment are wrapped up in a single process that happens before publication. We think there is an opportunity and, potentially, a lot to be gained from decoupling these two processes into processes best carried out before publication and those better left until after publication.
One way to look at this is as follows. About 1.5 million articles are published every year. Before any of them are published, they are sorted into 25,000 different journals. So the journals are like a massive filtering and sorting process that goes on before publication. The question we have been thinking about is whether that is the right way to organise research. There are benefits to focusing on just the technical assessment before publication and the impact assessment after publication. That becomes possible because of the medium that we have to use now. The 25,000 journal system is basically one that has evolved and adapted in a print medium. Online we have the opportunity to rethink, completely, how that works. Both are important, but we think that, potentially, they can be decoupled. That is obviously how the idea of PLoS ONE came about, but also certain other things that happen after publication.
Dr Lawrence: I would add that often it is not known immediately how important something is. In fact, it takes quite a while to understand its impact. Also, what is important to some people may not be to others. A small piece of research may be very important if you are working in that key area. Therefore, the impact side of it is very subjective.
Dr Read: That is very much a point I would make. Separating the two is important because of the time scale over which you get your answer. The impact is much longer. I guess the technical peer review is a shorter-term issue.
Q163 Chair: Of course, there are some who take the view that the process of peer review itself stifles innovation and perpetuates the status quo. How big a problem is that, or is that overstating it?
Dr Read: I would have thought that sounds a bit overstated, as peer review, in one form or another, has been an underpinning aspect of research, arguably even before journals as we know them existed.
Dr Patterson: I support that. I am not sure it is a massive problem. When a piece of work is arguing against the received wisdom, perhaps naturally, it can be a bit tougher to get it published. In a way, that is as it should be. If it is a grand claim, there probably needs to be stronger evidence to support it. The peer review process enables that to be examined. It is possible that personal biases and prejudices more associated with the conventional wisdom might come into play and make it even more difficult. You could argue that there is also a case there for focusing just on technical rigour beforehand which might ease the passage of work like that. Even then, it still has to pass rigorous tests in order to get into the literature.
Dr Read: It gets interesting because I notice one of the other observations you have made is that most articles get published somewhere, even if they have been rejected by peer review. Maybe that cuts against the conservatism, meaning you might not get published by the more conservative journals but you might get published eventually.
Chair: The four of you are basically arguing for the continuation of the process but finessing it. I know that Dr Patterson has some particularly interesting views, and perhaps Roger can pick up there.
Q164 Roger Williams: Thank you very much. These questions are directed very much to Dr Patterson but not solely to him if others want to come in. I think your journal publishes 69% of all submitted articles. Does that mean the other 31% are technically unsound?
Dr Patterson: You are correct that it is about 69%, but that doesn’t really mean we reject the other 31%. Some of them are "lost" in the sense that they may be sent back for revision (maybe 5% to 10% are sent back for revision) and the others are rejected, as they should be, on the grounds that they don’t satisfy technical requirements. We have done some work to look at the fate of those manuscripts. We did some author research in the last couple of years and we have seen that, in both cases, according to the authors’ responses, about 40% of rejected manuscripts have been accepted for publication in another journal. There are probably several reasons for that. One is that some of them will have been rejected by PLoS ONE because their hypotheses or perspectives are out of scope, or something like that. We publish original research in PLoS ONE, so that is fair enough. They end up being published somewhere because there are appropriate venues. Other authors may have gone away and chanced their arm at another journal and got through their peer review process.
Q165 Roger Williams: Is that without being refined?
Dr Patterson: That we don’t know. They may have been revised. As the PLoS ONE process isn’t perfect, another chunk will have been rejected inappropriately. We know there are some such articles. Reviewers in the academic world tend to get into a certain mode of peer review, but we are doing something different and we have to try to get that message across. So there will be a small batch that is rejected inappropriately.
Q166 Roger Williams: Your PLoS ONE website indicates that you have fast publication times. How much faster are you than other journals in that sense?
Dr Patterson: What we are trying to do on PLoS ONE is balance speed (lots and lots of surveys have said that speed in publishing is really important to authors) against a process that is sufficiently robust, both editorially and in the production process, to give rise to a high quality product at the end of the day. We are on the fast side, although I don’t think we can claim to be super-fast. But the real benefit in PLoS ONE, which is relevant to speed, is that authors won’t be asked to revise their manuscripts to raise them up a level or two. With a lot of journals, you get asked to do more experiments to raise it up to the standard that particular journal wants. That doesn’t and shouldn’t happen at PLoS ONE. As long as the work is judged to be rigorous, it is fine. The amount of revision can be quite a lot less because authors are asked to do it in that way and that can really reduce the overall time from submission to publication.
There is another way in which I think PLoS ONE accelerates research communication generally. Often, articles are submitted to journal A and are rejected as not being up to standard. They go to journal B and then journal C and, eventually, are published. If you have a robust piece of work it will be published in PLoS ONE as long as it passes the criteria for publication. You will not have to fight with editors who are trying to argue for a certain standard. I think those two other things really have the potential to accelerate research communication broadly.
Q167 Roger Williams: Is light copy editing a feature of how you can deliver faster times?
Dr Patterson: Again, we are balancing these two competing interests of speed and quality. In our production process we focus on delivering really well structured files that will be computable, for example. We don’t expend effort in changing the narrative. Scientific articles aren’t works of literature. That is not to say it wouldn’t be nice if, sometimes, a bit more attention was paid to that. It is also true that one of the criteria for PLoS ONE is that the work is in intelligible English. If an editorial reviewer thinks that something is just not good enough and they can’t really see what is happening, it will be returned to the author.
Q168 Roger Williams: Should it be intelligible to your target audience or a broader audience?
Dr Patterson: The research audience, which is the primary audience. Yes, that is what I mean. We are focusing more on technical quality. We also put more onus on the authors to take responsibility for the content, and we will turn a manuscript away if it is really not comprehensible.
Q169 Roger Williams: Are there any other corners that your journal "cuts" in order to deliver faster times?
Dr Patterson: I wouldn’t frame it that way. What we are doing is trying to identify and take away any unnecessary barrier to publication. We could probably do a lot more. Our times are okay. As I say, they are not super-fast but they are on the fast side. There is certainly more we could do to streamline the process and make it more efficient.
Q170 Roger Williams: Has your approach and the reputation you have built up resulted in a lot more submissions?
Dr Patterson: PLoS ONE was launched in December 2006 and is still quite a new journal. It is only four and a half years old. We published about 4,000 articles in 2009 and 6,700 last year, so it became the biggest peer-reviewed journal in existence in four years. It has grown steadily over that time. I am sorry, I have lost the thread.
Q171 Roger Williams: Has your approach and the reputation and impact of the journal itself increased the number of submissions?
Dr Patterson: It has. We see a lot of positive feedback. Going back to my previous comment, the message that if I have a solid piece of work I’m not going to have to grapple with a journal that is basically biased against publication (the goal of PLoS ONE is to publish all rigorous science) is a very positive one which authors like. Coupled with ideas about how, then, you might assess the impact after publication, it is definitely gaining ground.
The other very significant thing that has happened in the last nine to 12 months is that eight or more big publishers have announced PLoS ONE lookalikes, essentially. That is very striking. The American Institute of Physics and the American Physical Society have both launched physical science versions; Sage has launched a social science version; the BMJ group, who were actually the first, last year launched a clinical research version of PLoS ONE; Nature has launched a natural science version of PLoS ONE, and on it goes. The model is getting that level of endorsement from major publishers and I think, again, that is probably helping to make researchers very comfortable with the way in which PLoS ONE works.
Q172 Roger Williams: But will you be a victim of your own success? Will you be overwhelmed by the volume of submissions and then your time to publication suffers as a result?
Dr Patterson: I certainly hope not. The growth has been pretty spectacular and has definitely surpassed our expectations. The people who work on PLoS ONE are fantastic. Everyone at PLoS somehow gets involved with PLoS ONE. As to the academic community, the 1,600 members of the editorial board have been terrific in stepping up to the plate and helping to make PLoS ONE work. Probably one of the things that has helped to make PLoS ONE a success is that it was born within the scientific community. Its founders are three fantastic scientists. We have always had that sense of support from the scientific community and there is no question but that that has really helped us.
Q173 Roger Williams: Do you believe that this approach has had an effect on the peer review process perhaps in terms of timing, quality and ease of recruiting or having access to reviewers?
Dr Patterson: It is beginning to. PLoS ONE has grown very rapidly in the space of four years to become a very big journal. There are now another eight to 10 on the scene that are being launched, or are about to be launched. If another 10, 20 or 30 of these are launched over the next one to two years, which I think is quite likely (because a lot of publishers will be looking very hard and thinking that if they don’t get involved they will potentially lose out), that could make some fairly substantial changes in the way the pre-publication peer review process works. There is a lot to say about post-publication but not yet. So I think the model could change. The benefit will be the acceleration of research communication because you avoid bouncing from one journal to another until you eventually get published. That is a tremendous potential benefit.
Q174 Chair: In your earlier answer you referred to internal research that you have undertaken. Is that in a form we could have sight of?
Dr Patterson: I can send you the links. There are two presentations on SlideShare. They are publicly available.
Chair: That would be very helpful.
Q175 Stephen Metcalfe: There is a move towards greater use of online communication systems. Are there any general guidelines that you think would improve the peer review system and make it more effective and efficient on those kinds of platforms?
Dr Patterson: Are you talking about the tools that you use to administer peer review?
Stephen Metcalfe: Yes.
Dr Patterson: I think the questions are really the same, apart from the fact that we are focusing on technical rigour and defining what those questions are. Those are not new concepts. PLoS ONE isn’t such a radical departure. It is a very simple idea. We are not really changing the idea that rigorous work should be reviewed properly before publication.
Q176 Stephen Metcalfe: You wouldn’t describe your approach as "light touch"?
Dr Patterson: No, not at all. It is important to consider not just the peer review process but everything that goes on before an article is accepted for publication as being critical steps in quality control. There are several components to that, of which peer review is one. At PLoS ONE staff are involved in the first step. It goes through a series of quality control steps which are focused. Basically, we want to take stuff away from the academics so that they can focus on the science and we can sort out everything else. We focus on things like whether the competing interest statements are properly indicated; financial disclosures; if the work concerns human participants, whether there is an ethics statement and appropriate ethical approval, and a whole series of things like that. Hardly any manuscripts get through that without some kind of query going back to the author.
Then there is a step where we involve PhD scientists who scan the work. These are people who have some level of subject expertise. Some-not many-of the submissions are rejected at that point because they are completely out of scope or something. They are also looking for any articles on controversial topics or anything that might require special treatment. They flag work like that. The work then goes to the editors whose responsibility it is to take on the peer review process. It is a pretty involved process.
The peer review part then focuses on seven criteria to do with whether the methodology and analysis are appropriate; whether the conclusions are justified; whether the work is ethically sound and properly reported; and whether data is available as appropriate. There is a set of seven criteria. To decide whether a work is rigorous is not a straightforward task.
Q177 Stephen Metcalfe: Is one of those seven criteria to check that the work has been put into the proper context of existing literature, for example, knowledge about research and data or something?
Dr Patterson: Yes. One of the criteria is that it is an original piece of work. In that sense, the editor and reviewers will be judging whether, in relation to what has already been published, the work is an original piece of work which deserves to be part of the scientific literature somewhere. Although we don’t explicitly state that, it is implicit in that requirement.
Q178 Stephen Metcalfe: The responsibility for that falls on the reviewer, doesn’t it?
Dr Patterson: Ultimately, the editor. Their name goes on the paper, so there is a level of accountability. That is something we do across all the PLoS journals. Every article published in a PLoS journal has, associated with it, an academic editor who in some way has been involved in the assessment of the work before publication.
Q179 Stephen Metcalfe: Dr Torkar, what has been the initial outcome of the BMC Biology experimental policy which allows authors to decide whether or not the referees see their papers again after revision? Has that worked?
Dr Torkar: Yes, that has been quite successful. A lot of authors take up that option. To explain that process briefly, submissions are usually screened by the editorial team. There is quite a high rejection rate at that point. They will often consult with their editorial board to ask about the question of impact at that point. Is this a sufficiently interesting contribution for a journal like BMC Biology which has a higher threshold and is meant to be a broad interest journal? They have a high rejection rate at that point. Of those manuscripts that go to peer reviewers about 60% are either rejected or require only minor revisions, so there wouldn’t be a requirement for a re-review anyway. Of the remaining 40% of authors who are offered the option of peer review opt-out, more than half will take it up. The editorial team will make a clear decision after the first round of peer review to make sure that they are very clear in their instructions to the authors about what needs to be done. They will then assess the revised manuscript when it comes back and they will usually go ahead with publication without re-review. I think there were only a couple of cases where that really wasn’t possible for some reason. If the revisions aren’t as extensive as they should be (say, some of the conclusions aren’t put sufficiently into context to show there are some limitations to the study), they will commission a commentary which is published alongside the paper. That is written by an expert who will put it in context and point out those limitations just to make sure that non-expert readers understand that there might be some problems.
Q180 Stephen Metcalfe: Therefore, that puts more responsibility on the author to carry out that work and also moves the burden from the reviewer to the experts who commentate on it afterwards?
Dr Torkar: Yes, to some extent. Often that expert will have been an original reviewer and is quite familiar with the manuscript or study. To complete the story, there is some pressure on those people to put it in context.
Q181 Stephen Metcalfe: How widely used is the system of cascading submissions and reviews from one journal to another?
Dr Torkar: I am sure Mark has something to say about that. We use this quite extensively at BioMed Central and, in particular, with the BMC series which is more or less our equivalent of PLoS ONE and was launched in 2001. It is a group of more than 60 community journals which are subject specific: BMC Immunology, BMC Genetics, etc. As they also have the premise of publishing all scientifically sound studies without putting too much emphasis on the impact and extent of the advance, they will consider manuscripts that were previously peer reviewed or submitted to some of our flagship journals. Sometimes the transfers will happen before the peer review and sometimes with the reviewers’ reports. That does save time for authors and reduces the burden on the peer reviewers who don’t have to re-review manuscripts for multiple journals.
Dr Patterson: Cascading peer review is a phenomenon that exists at PLoS in its two flagship journals PLoS Biology and PLoS Medicine. Articles can be transferred from there to other journals. To give you a sense of the size of that, about 10% to 15% of submissions to PLoS ONE come from other PLoS journals. It is pretty clear that, internally, that works quite well. A lot of publishers think so and quite a lot of the evidence has shown that.
The much more problematic issue is the sharing of reviews from one publisher to another. I know you heard some talk about the Neuroscience Peer Review Consortium experiment which, interestingly, was not terribly popular with authors, but I am not sure how much publishers were really behind it. For example, it was said that some publishers might feel reluctant to share reviews with another journal or publisher because they have built up relationships with these people and there is some commercial value associated with that. When you hear that you have to ask whether that sense of ownership is in the best interests of science. I am not convinced. That is a question worth asking.
To complete the thought, it is quite natural that journals would feel that way in a world of subscriptions because it is about selling a package of content to a group of readers. That is how the model works. Therefore, anything which allows you to improve that package of content is of value to you commercially. In a way, it is completely understandable that journals in that subscription business model would be reluctant to share their reviews. When you switch round the model, as BMC, PLoS and many others do now, in terms of supporting and publishing through a publication fee, considering yourselves, as publishers, much more as service providers (you are selling a publishing service to a researcher), your attitude towards sharing peer reviews might change. I am not sure.
Q182 Stephen Metcalfe: You are saying that those are two conflicting forces, so the model that is being adopted will affect what system will work.
Dr Patterson: I think it will influence the attitude of the publisher and the journal towards sharing that kind of information.
Q183 Stephen Metcalfe: But which one is in the best interests of science?
Dr Patterson: You probably know what I would say. I would say that sharing is generally better.
Q184 Stephen Metcalfe: Do you all agree with that?
Dr Torkar: Yes, I would agree. We have one journal that is signed up to the Neuroscience Peer Review Consortium: Neural Development. We haven’t seen that much uptake from authors, but we would welcome it-in both directions: sharing our reports and going backwards. Ultimately, we want to get the publications that are worth publishing out there.
Dr Patterson: The whole issue goes away with the "publish rigorous science first, sort it out later" model in terms of impact, relevance and so on because then you don’t have this cascade from one journal to another.
Q185 Stephen McPartland: I would like to turn to value for money. The Joint Information Systems Committee has estimated that the cost to higher education institutions from staff time spent on peer review is between £110 million and £165 million per year. Do you think this is an acceptable cost to higher education institutions, which is almost a subsidy to publishers?
Dr Read: I think it is an acceptable part of the scientific process. Of course, the reviewers get a great deal of benefit from doing it. They get early sight of research articles and, particularly if they are an editor, they will also get quite a bit of added standing in their discipline. So I don’t think the research community would feel it was an unacceptable activity.
Where we have a worry is if scientists had to spend proportionally more time on peer review relative to their science. If that starts to escalate, as it is at the moment and perhaps for the medium term, then there is a concern because less actual research will get done. This is because of the significant increase in research output from Asia, while the majority of the peer review is still being done in the western world. One would like to think that sorts itself out over a certain period of time. I don’t think researchers would feel this is a particularly burdensome call on their time as long as it doesn’t get out of hand.
Q186 Stephen McPartland: Allegedly Vitae, the UK organisation that champions personal and professional standards for research staff, suggests that a lot of peer review is done in their own time because of what you are suggesting, Dr Read. It is starting to get out of hand and many people have to do large amounts at work and then also go home and do large amounts. They feel that if they don’t they will lose their standing in the community.
Dr Read: I think that is true. But I don’t know that many researchers particularly feel they have a nine-to-five existence anyway. So I am not sure to what extent they would particularly resent this. I don’t think there is a nine-to-five mentality in the research community.
Q187 Stephen McPartland: Do you feel they should have some kind of recognition?
Dr Read: That is why, perhaps, greater transparency in the peer review process might work well. They wouldn’t get external recognition for peer review work, of course, but the fact that they are peer reviewing would be known to their peers. Being an editor would give you external recognition. I think you raise a good point there, that some form of recognition of the contribution they make in peer review would be welcome.
Q188 Stephen McPartland: Do you feel, outside their peers, their academic institutions take into account the amount of peer review that some of their staff have to take on board outside working hours?
Dr Read: Yes, I would say so.
Q189 Stephen McPartland: Do you feel that higher education institutions and researchers effectively get value for money?
Dr Read: Many people would feel that the whole publishing process doesn’t represent value for money, which is perhaps where you were leading but not the particular point you make. A model where library budgets have to pay for journals rather than it being a direct part of the research costs is leading to strains in universities and is getting very serious. Of course, it is a no-win situation. If library budgets get cut and the cost of journals and the amount of publications continue to rise, as they are, researchers will get less access to those journals. There is no obvious way of breaking that particularly difficult chain. I think many people would argue that the publishing industry is not good value for money and there should be cheaper and more modern ways of disseminating the outputs of research.
Q190 Stephen McPartland: Would anybody else like to comment?
Dr Patterson: I would agree with a lot of that. It is a really good question to ask. What is the value that we are getting out of the £120 million to £160 million every year? Moving away from a cascading model for journals sorting content could help to generate greater value for money because the burden on reviewers becomes less if they don’t have to review things that are being submitted to multiple journals. That would help, potentially.
I very much agree with the idea that there is a lot of opportunity to recognise the contribution peer reviewers make. I know that project ORCID, which stands for Open Researcher and Contributor ID (a unique ID for people contributing to research communication), would really help to identify who has done what peer review. Obviously, it depends on peer review policy as to what you can and cannot make openly available. I think there is also an argument for moving towards more transparent systems of peer review because there are real benefits in providing better and more open recognition of the contribution. There is a lot that could be explored in terms of getting more value for money and more efficiency out of the peer review process.
Q191 Gavin Barwell: I apologise for not being here at the start of the session. I want to ask a question about ethics, essentially. Perhaps I may start with Drs Torkar and Patterson. Do all of your journals have a publicly-declared ethics policy? If they do, what processes do you have to ensure that they are complied with?
Dr Torkar: BioMed Central and, as Mark will confirm, PLoS are members of COPE and have been pretty much from the start. I think you have had a representative of COPE on a previous panel. We put a lot of emphasis on ethical issues. We have clearly defined policies for authors as part of our information, we ensure that authors and referees declare their conflicts of interest and we follow those guidelines very strictly. We work closely with our external editors to ensure that they follow the guidelines. The short answer is that we take that very seriously.
Dr Patterson: It is pretty similar at PLoS. We have policies available on the website. Maybe one thing to add is that we are lucky in that one of our chief editors of PLoS Medicine is the secretary of COPE. We take publishing ethics very seriously across the board. You have heard talk of new tools for plagiarism screening. We are planning a pilot in that area in the next few months. We have been doing some work on figure checking, looking for evidence of figure manipulation which occurs sometimes. So we have also done some work on that. It is a very similar story to what you have heard from most of the publishers from whom you have taken evidence.
Q192 Gavin Barwell: How important do you think it is to have an online record of pre-publication history: correspondence between reviewers, authors and editors? What approach do you take to those issues?
Dr Patterson: I think it is an absolute requirement. Any reputable publisher has to have those kinds of records. These days there are standard systems which support the editorial process and provide the mechanisms you need to archive and keep all that correspondence.
Dr Torkar: The same is true for us. As you might have seen from our contribution, we have a whole series of medical journals that even make the pre-publication history publicly available. You can access, with a published article, what the peer reviewer said and how the manuscript was revised. It is a very transparent way of seeing how the system works and the sort of records we keep.
Q193 Gavin Barwell: We received mixed submissions on this point. Some people suggested they didn’t think there was any real demand for people to wade through all of this copious information. Do you monitor how much and to what extent people look at all of that online?
Dr Patterson: It is not available and so is not public, except for the system Michaela described. Medical journals release a lot of the peer review information. We don’t do that yet, although we are certainly looking at it. It is for internal record keeping. You need them if a dispute occurs two or three years later about some aspect of priority in terms of who discovered what and when or there are some shenanigans in the peer review process that people want to investigate. They are also a fabulous tool to help support the editorial process, in the sense that if you get a new manuscript in a certain area you can then go back, it reminds you of something and you can rediscover what went on. That can help you with the editorial process on a new manuscript.
Q194 Gavin Barwell: But you make it publicly available?
Dr Torkar: Only on a subset of our journals. We decided at some point that that would make it very transparent, but it is only on the medical BMC series journals. There are about 40 journals.
Q195 Gavin Barwell: What is the reason for doing it on those journals and not others?
Dr Torkar: It is probably historical. Also, we feel in the medical community there is more acceptance of a very transparent model like this. Experience so far shows that rejection rates are very similar. It certainly has no negative impact on the peer review process and it makes it all quite transparent. It is not clear that the biology community would be quite as open to this model, but there are also experiments going on with different journals and different publishers to look at that.
Q196 Gavin Barwell: What are the retraction rates for your journals? Are there any significant differences in the percentages of retractions published by different types of publishers and people using different types of peer review?
Dr Patterson: I don’t think so. Retractions happen and they need to happen, occasionally. There was an observation that one tended to see more retractions in the really high profile journals where the potential rewards are higher and so on, but I don’t have hard data to back up that assertion. I read it somewhere.
Dr Torkar: I can’t expand on this with data.
Q197 Gavin Barwell: My final question is to Dr Lawrence. Faculty of 1000 evaluates published research. If an error is found within the original article, how do you deal with retractions?
Dr Lawrence: I should point out that there are two parts to this. The main part of Faculty of 1000 is the positive post-publication evaluation service. What I mean is that we don’t criticise papers we think are poor. Our faculty of 10,000 researchers highlights papers it thinks are particularly important, irrespective of where they are published. About 86% of those evaluated are not in what you would think of as the top journals, which suggests there is a lot of very important research in the other journals. It only highlights, as I say, the interesting stuff.
As to retractions, I am not aware of any that have been picked up that have subsequently been retracted. But we also have a dissent option. In the case of quite a few of our top evaluated articles, where several faculty members have evaluated them and said, "These are really good papers", subsequently a faculty member has come along and said, "I don’t agree. I think there are problems with it." We have a system like that.
Q198 Graham Stringer: I have one ethical question following what Gavin has just asked. How much commercial pressure is there from pharmaceutical companies to publish, to take just one example, and how does that commercial pressure interfere with the publication? A journal that publishes a paper which means doctors can prescribe a particular drug stands to make a lot of money, doesn’t it? How is that pressure dealt with ethically?
Dr Patterson: This is an issue which has certainly been highlighted in the evidence you have already heard. This is something on which, in particular, the editors of our journal PLoS Medicine have taken a very strong position, to reduce what they call the cycle of dependency in some way between the pharmaceutical industry and medical publishing. One of the ways in which that is manifest is with very substantial reprint revenues associated with high profile, hard-hitting clinical trials sponsored, for example, by the pharmaceutical industry.
What PLoS Medicine and PLoS as a whole have done, in order to keep the two things apart and separate any commercial interest from the editorial integrity of the content to be published, is refuse to accept any form of drug or device advertising, even though it could be a significant revenue stream for us. We feel that is a very strong leadership position to take in that area. The business of open access is also very important to this. The articles we publish are open in the sense that there are no barriers to reusing that content. A lot of publishers retain rights to contents so that they can reprint the article. They are the only people who can reprint that article at the levels of thousands and thousands of copies for redistribution, which then earns them an awful lot of money. We can’t do that.
Q199 Graham Stringer: Are you saying that reproducing your articles is free?
Dr Patterson: Yes.
Q200 Graham Stringer: You are very different from The Lancet or other journals?
Dr Patterson: Totally different. We feel that is a very important principle. We have no unique right to take those articles and make that kind of money from them. These are some steps that have been taken. They are not the solution to everything, but I think they are important. I know that other medical publishers from whom you have heard are also taking these issues tremendously seriously and doing whatever they can to ensure the integrity and reliability of the content that is being published.
Q201 Graham Stringer: It struck me, when you spoke earlier, that if a pharmaceutical company wanted to get a drug to market very quickly and within the mindset of GPs and other doctors, your route to publication would be quicker. It might be an incentive, then, for them to go via a route which you said yourself-I can’t remember your exact words-was of a different standard; it wouldn’t be sent back. That worried me slightly, that, commercially, it might be easier for drug companies to make more money by going via your route. But you don’t have a financial interest in that?
Dr Patterson: There is no financial interest, in that sense. To be clear, we consider work that has been sponsored by the pharmaceutical industry but, obviously, it has to conform to the same criteria as everything else. What might make the pharmaceutical industry reluctant, in terms of thinking about the value of that publication commercially, is that to publish in a very high prestige journal would probably be of great value. That is what might put them off coming to, say, PLoS ONE, which does not, in and of itself, equal high prestige. That is not the way PLoS ONE works.
Q202 Graham Stringer: I want to ask some questions about the nature of the science that is published in the journals. I still haven’t quite got over the shock of listening to scientists from the University of East Anglia talking about Climategate where the science wasn’t reproducible by all scientists because the computer codes, programs and data sets weren’t available. Do you think all that information should be available, and what do you do to make it available?
Dr Read: That is an area where we have been doing quite a lot of work. Various macro-scale climate models are broadly available across the world, although there is more than one to choose from. The difficulty about making software code available is that, if you are talking about stuff running on so-called super-computers, you have to know quite a lot about the machine and the environment it is running on. It is very difficult to run some of those top-end computer applications, even if, of course, they are prepared to make their code available. Maybe they are not.
Q203 Graham Stringer: In this case, they were not. But how can it be science if it can’t be tested and reproduced by somebody else? If journals are publishing articles which, because of the nature of the super-computer or the secrecy of the data sets or the fact that scientists want to keep their code private, can’t be reproduced, what is the point of that?
Dr Read: They should make clear the nature of the program they are running and the algorithms. A computer will not have any value beyond the way it is programmed. As long as they define the input conditions, as it were, and what the program is designed to do, you should be able to trust the outputs. That would be no different from any statistical test that is run on a data set, so long as you say what the test is. You then start to get down to the accuracy of the data itself, which is perhaps a more fundamental issue than the software or statistical test that is being run on it. I would say that the availability of the research data is a more important issue because then, of course, other researchers could run different types of algorithms on different types of computer on that data. I think access to the data is more fundamental.
Dr Patterson: To add a comment, reproducibility is a gold standard that we should be aiming for as publishers. PLoS, and many other publishers for that matter, requires authors to provide the data that underpins their work, or, in our case, the software, though not on a huge scale because then you have practical issues. It is the same with data. When it becomes truly massive you need alternative approaches. But, in general, we have a requirement that, in the interests of reproducibility, you must make the data available. We have had cases where readers have reported to us a problem with getting hold of data from an author published in a PLoS journal. We follow that up. We talk to the author and ask what the issues are. In the majority of cases the author will deposit their data and it is a misunderstanding, almost, that they haven’t deposited their data in the appropriate repository, or whatever it is that is done in that particular community.
Q204 Graham Stringer: I don’t know whether you have read the transcripts of our last meeting.
Dr Patterson: Yes.
Q205 Graham Stringer: Andrew Sugden from Science said that there was real difficulty in getting data sets for peer review. Is there anything that can be done about that? I accept that some of these data sets might be huge in different areas of science, but if it is supposed to be peer reviewed in Science it should be available.
Dr Patterson: I agree.
Q206 Graham Stringer: What can be done about it?
Dr Patterson: I think this is probably a very good area for study. In a lot of fields there are well established processes, places, resources and infrastructure to deposit data. I am thinking of fields like the genomics community and the protein structure people. There are established places where you can put data. In other fields the situation isn’t quite as advanced but there is some interesting work going on. There is a project called DRYAD that is developing a kind of generic database for data sets. This is work particularly in the fields of ecology and evolution. That is where they are starting, but they are already talking of expanding into other areas. The idea is that this is a place where you can deposit your data set-I’m not sure whether the facility is available yet but it certainly will be-and where you can give privileged access to reviewers, for example, during the peer review process and then make the data available once the article is published. There are facilities being developed to help solve this problem, and I agree it is a problem, but there are ways round it. I don’t think it is insoluble.
Dr Lawrence: I think that depositing data is essential. However, within the kind of time frames of peer review, you really can’t deal with the issue of reproducibility because you aren’t going to be able to repeat the experiment yourself. All you can do is say that it seems okay; it looks like it makes sense; the analysis looks right; the way they have conducted it makes sense and the conclusions make sense. I think the issue of reproducibility must come after publication in the sense that people try to reproduce it. That is when people say, "I couldn’t reproduce it", or, "I could."
Q207 Graham Stringer: Do you think that depositing data sets after publication should be mandatory?
Dr Lawrence: Personally, I think that would be a good step.
Dr Torkar: It depends on the community you are talking to. It is only if the standards are well established and agreed on by the community that you can really enforce it and insist on it as a publisher. It becomes more difficult when, say, databases are not quite ready to accept all of the submissions or formats. That becomes a real barrier for authors. They cannot publish because the publisher insists on it. I think there is a lot of responsibility on the publishers to interact with different communities to establish the right databases and standards and where the limitations are and to make it mandatory in some cases and in others encourage submission and deposition, in particular. I think it depends very much on the communities.
Q208 Graham Stringer: To follow that up, are you saying that the depositing of these data sets may be a difficult problem but it is one that could be overcome?
Dr Torkar: Yes. Often it comes down to the communities to establish their needs in order to be able to reproduce each other’s work. Then the publishers need to work with them in order to find out the agreements and the right way forward. It is very much to do with communication about what is the best way forward.
Dr Read: I would inject a word of caution here. There are technical and economic problems. Some of these data sets are huge. Keeping them available, possibly in perpetuity, could end up as a cost that the sector simply could not afford. While I would certainly be very much in favour of encouraging a predisposition to make data available, there are technical and economic factors involved in very large data sets that might simply make it impractical. Keeping available all the outputs of the experiments on the Large Hadron Collider is just infeasible. Other data, such as environmental data, must be kept permanently available. I think that should be made more open. Of course, you can’t repeat an earthquake and that data must never be lost. A lot of social data in terms of longitudinal studies make sense only if the entire length of the study is available. In some areas of science the data is produced by computers and programs. In that case, if the data is very large, an option might be simply to re-run the program. I merely say that as a word of caution. A blanket mandate on open data might not be feasible but the predisposition should be to make data openly available.
Dr Patterson: I make two brief points about that. First, it would be really helpful for publishers to include some kind of statement about data availability so that it is clear. How do you get hold of this data? Are there any restrictions in terms of accessing it because of the size of the data in some fields or whatever? Secondly, there is an opportunity to incentivise the sharing of data by giving greater credit and finding mechanisms to reward researchers who do that, and to assess the impact of that sharing as well. Rather than focusing everything on what they have published in whatever journal, we should start thinking about different kinds of outputs and their value.
Dr Read: I strongly agree with that, because the cost of making data available in terms of describing it in ways that people outside your discipline can understand could be very high. They would have to put in a lot of effort and they would deserve credit and recognition for that.
Q209 Chair: You talked about credit for taking on reviewing work. I want to go finally to post-publication commenting. Should publishers introduce some system of prestige or credit for post-publication commentary? Dr Patterson, why is article-based methodology a good one? I don’t regard it necessarily as a healthy comment if I make a speech and there is an endless number of blogs. No doubt I will disagree with half of them anyway. Is the F1000 model which uses faculty members to carry out that process a better one, or does it become a biased process? To finish off, let me put to all of you this question: what is a good system of post-publication commenting? Should there be some recognition of the people who participate?
Dr Patterson: Maybe the starting point is to say that at the moment we have a very blunt instrument for research assessment which is basically a number-an impact factor-associated with a journal. We can do much better than that now. The way we are looking at this is to consider all the things you can potentially measure post-publication. It is not just about a blog comment or something like that. There is a whole range of metrics and indicators, including resources like Faculty of 1000, which can be brought to bear on the question of research assessment. Normally, people are looking at the research literature as a whole, they are identifying the papers that are important to them and they are coming to those papers. We want to provide an indication when they come to that paper of how important this is and what impact it has had through usage data, citation information, blogosphere coverage and social bookmarking. There are so many possibilities.
We have moved in that direction by providing those kinds of metrics and indicators on every article that we publish-we are not the only people doing this but we have probably taken it further than most-to try to move people away from thinking about the merits of an article on the basis of the journal it was published in to thinking about the merits of the work in and of itself. Indicators and metrics can help with that. They aren’t the answer to the question but they will help. Ultimately, there is really no substitute for reading it and forming your own opinion. Our general approach to the question is to try to capture as much of the activity that happens after publication on to the articles themselves.
Dr Lawrence: We would agree. Faculty of 1000 is a way of using a panel of experts. We have heads of faculty who then suggest the section heads who then suggest the faculty members. It is all very open. All their comments are against their name. On the question of bias, they also have to sign something to say they haven’t been unduly influenced and, obviously, there are issues of conflicts of interest.
Q210 Chair: Isn’t that a more structured approach than Dr Patterson’s X Factor version?
Dr Lawrence: I don’t think that any of these different metrics, on their own, are that strong. The point is about bringing together all the various metrics. They all have their own problems. To measure the impact of research you need to use different ones in a sensible way. In a way, the more metrics you have the better your chance of really understanding the impact.
Q211 Chair: I am getting from this that your methodology is making sure that the judges aren’t tone deaf, if I may continue to use my rotten analogy, which is a cruel one to you, Dr Patterson. In Dr Patterson’s case, you don’t care.
Dr Patterson: No. To be clear, I think both approaches will be required. They are complementary. I would like to see-we probably will shortly-F1000 as one of the indicators on a PLoS article. You go to the article and say, "Ooh! It’s been highlighted in F1000 and this is what the person has said", or something like that. There will be a place for expert assessment, evaluation and organisational content post-publication, as well as grabbing as many metrics and indicators as you can from the world at large.
Q212 Chair: As to the other two, where do you stand?
Dr Read: I think we mustn’t underestimate generic social networking tools as a very unstructured way of commenting on publications. Of course, you have to make those publications more widely available before that can happen. More of that will happen in a world of open access than at the moment.
Dr Torkar: I would agree with all of that. It is important to encourage those systems and get them used more widely and, in particular, to get the critical views across. I think the challenge at the moment is to encourage people to air their criticisms and put their names to them without fear of any repercussions. We need to encourage this as much as we can.
Chair: I thank all four of you very much for your time. I am sorry it has taken a little longer than we anticipated, but it has been a very interesting session.
Examination of Witnesses
Witnesses: Dr Janet Metcalfe, Chair, Vitae, Professor Ian Walmsley, Pro Vice Chancellor, University of Oxford, and Professor Teresa Rees CBE, former Pro Vice Chancellor (Research), Cardiff University, gave evidence.
Q213 Chair: Thank you very much for coming along this afternoon. I apologise that we are running a little later than we were scheduled to. I would be grateful if the three of you would introduce yourselves for the record.
Professor Walmsley: I am Ian Walmsley, Pro Vice Chancellor for Research and University Collections at the University of Oxford.
Professor Rees: I am Teresa Rees. I have just finished being Pro Vice Chancellor for Research at Cardiff University, and I am a member of an expert advisory group to the European Commission on structural change in universities.
Dr Metcalfe: I am Janet Metcalfe. I am Head of Vitae which is an organisation that supports the professional development of researchers in higher education.
Q214 Chair: Thank you very much. We have heard that peer review isn’t perfect. What would a perfect system for evaluating scholarly research look like?
Professor Rees: One contribution that I think needs very serious attention is the way in which clinical trials, in particular, and pharmaceutical research address the issue of sex and gender in research. At the moment a considerable amount of research and clinical trials involving rats, mice and people is conducted largely on the male species, if I may put it that way, and yet the consequence is that the research turns into pharmaceutical products that are prescribed for men and women. While men and women have a lot in common-
Q215 Chair: This is about scholarly research. I asked a question about peer review. What is wrong with it? What would a better system look like?
Professor Rees: A better system would be one where the journals into which researchers put their work insist that all those submitting articles specify the sex of the participants in the clinical trials. At the moment pharmaceutical products are being withdrawn from the shelves, although they are based on research that has been peer reviewed and has appeared in journals, because they have not been tested on both sexes.
Professor Walmsley: I would start from the premise that we have two criteria: first of all, we want a system that is accurate and then we would like a system that is precise. I think peer review satisfies the first of those criteria based on the extent of the body of work and its usage over time. I am not sure there is a system that can provide more precision on single instances because one is dealing with new ideas and looking forward into the future. Therefore, essentially, you are trying to evaluate derivatives. The way to make things more precise is simply to have more chances or opportunity, as it were. It is not clear to me that there is a system which will provide both of those criteria in any very simple way.
Dr Metcalfe: I would add to that from the perspective that the peer review is actually a collective in terms of its system. So it is: how do you ensure the expertise and the objectiveness of the collective as a whole? How do you understand the system when you are entering it and ensure that you are being fair and inclusive in terms of the whole process of peer reviewing?
Q216 Chair: If we take Professor Walmsley’s observation that there is no perfect methodology, why is it that researchers are put under so much pressure to get work published in the high impact journals?
Professor Walmsley: Perhaps a simple answer to that from a parochial view of a university person is that that is the way one’s career advances. As you heard from the previous panel, a lot of very good work gets published in journals that do not have such high visibility, and I think that is quite crucial. None the less, having a highly cited paper in a journal that people would regard as high profile is considered important as a way to raise your visibility and develop your career.
Dr Metcalfe: We have drivers in the system, such as the research assessment exercise, that encourage that, so there is very strong emphasis in terms of the impact of the journal. Coming at it from my perspective as Vitae, it is: how do you support early career researchers to enter into that system and even make decisions about what journals they should be targeting? How do they get a sense of the most appropriate place for them to publish?
Q217 Chair: Doesn’t the tie-in between the research excellence framework and high impact journals potentially create a rather subjective judgment?
Professor Walmsley: I would argue that the reason peer review works well is the expertise of the community on an inherently subjective set of criteria; that is, one can with any piece of work assess various objective elements of it. Is it right? Is it novel-that is, is it new and has it not been published before? But the subjective element, which I think differentiates a number of different journals-because they have different subjective criteria-is the piece that is very difficult to assess in an objective way. Knowing that a piece of work is going to be important is a very difficult thing to do. In many ways that is something best assessed post facto; that is, the impact of this work is: how many other people find it a fruitful thing on which to build? How many people find it a productive way to direct their research as a consequence? It is difficult to say that one can be completely objective on all elements of assessing research outputs.
To add one more comment, I absolutely take the point about RAE. Having sat on one of the RAE panels last time, I can say the panel was very clear that the forum in which the paper had been published was not determinative. It was reading the individual outputs and assessing the value of the work itself that ended up being more important. None the less, when a CV comes across the desk of a head of department for a faculty post, as a first pass through it makes a difference where those papers are published.
Q218 Roger Williams: Turning once again to value for money, the Joint Information Systems Committee reported recently that it estimated peer review costs higher education institutions between £110 million and £165 million a year. Is it fair that these institutions absorb this cost on behalf of publishers?
Professor Rees: In my view, peer review is part of the process of ensuring that research is excellent and improving it. Conducting peer review helps in one’s own skill development, particularly early career research about which Dr Metcalfe can speak. So it is part of the academic process. We have an expanding number of journals, as we know, and there is increasing pressure to publish. I think there is a question of whether academics can keep up with reading all the material in the growing number of journals. One might want to have a debate at some stage about whether that is the most effective and efficient way of managing all the potential research that can be published.
Q219 Roger Williams: We were also told that a lot of this work gets done out of hours, so to speak. It seems to me that it is not costing the higher education institutions but individuals.
Dr Metcalfe: I was going to add that comment. I am not sure about the basis of the JISC work and how they did the calculations, but I think many researchers would feel there is a personal cost in terms of the effort they put into peer review. They appreciate that it is a very important part of the system-it is partly about protecting academic discipline and contributing to the academic community-but there is an expectation, not just with peer review but other aspects of being an academic, that you have to put in very long hours and you are expected to work beyond your terms and conditions of employment to be successful. These are systemic issues within the academic community, and peer review falls very much within that. It is also rarely identified as a specific element in workload conversations or models within institutions, so we have no idea how much time is spent by the academic community on peer reviewing.
Q220 Roger Williams: It is probably not part of this inquiry, but it seems to me that if it is the case that a scientist’s standing within their subject or community depends on doing peer review then those who perhaps have other responsibilities, such as caring or parenting, are put at a disadvantage in progressing through the profession.
Dr Metcalfe: I do hope it is part of your review. That is a very important aspect of looking at whether or not the peer reviewing system disadvantages different groups within the academic community. I question whether or not there is recognition for being a peer reviewer, although there is certainly recognition that it is an important contribution for academics to make. Some early career researchers put on their CVs when they have peer reviewed to try to make that visible. Otherwise, I think it is an invisible contribution to the academic community except when you get on to an editorial board or grant panel.
Professor Rees: I think it is handy to have time-limited periods serving on editorial boards and research councils, because that is where the bulk of peer reviewing occurs. Perhaps that can be shared more effectively, but journals do vary in the extent to which they impose a time limit.
Professor Walmsley: An important aspect of value for money, from my perspective, is the effective certification that peer reviewed publications have in indicating where the critical mass of research will be. If you are beginning a new project, perhaps as a young researcher, a starting point will be to review the literature. Having a certification that a paper has appeared in Science, Physical Review Letters or wherever will be a place where you take note and start to think about where you can build on that. That certification does take time. I would like to know the numbers-I am afraid I don’t-for the total cost divided by the total numbers of people taking part in science in the UK. It is probably a relatively small number. That might be a useful number to realise.
In terms of overall recognition and internal promotion within universities, you heard Dr Metcalfe talk about that. I would agree. It appears when people are evaluated that they have reviewed for journal X, Y and Z. Certainly it appears and appeared in RAE as an indicator of esteem. That is what you might call passive recognition. I think active recognition is becoming more common; for example, the American Physical Society has an outstanding referee award. Every year it makes a big deal of naming people who have provided consistent, high quality and useful reviews. That is becoming more open. It is not a direct financial compensation for time. However, I think most people would say this is a contribution to the community which reaps values in other ways.
Q221 Roger Williams: You have already mentioned staff from universities taking part as editors and members of editorial boards. That must take them out of their departments for long periods of time. Is there any evidence that universities might discourage their staff from taking up those posts and duties?
Professor Rees: On the whole, I think they like members of their staff to be on editorial boards because there is some recognition of the institution as well as the individual if it is providing people to sit on them. But it is time consuming. Certainly, if you are the editor of a journal that can be reflected in workload management; being a member of a journal board or a reviewer who is just used on an ad hoc basis tends not to be.
Q222 Roger Williams: Overall, would your judgment be that in the best of all possible worlds researchers should be paid for their work as peer reviewers?
Professor Rees: I am not sure of the answer to that question. It is strange that if researchers do the research and publish and then do the peer review and the editing-in some cases now they are asked to pay for their articles to be published-one finds oneself responding to a memo saying, "Which journals do you think we should cut from the library because of budget cuts?" I would say there is a bit of a paradox there.
Professor Walmsley: I would concur with that, having part of the library as my portfolio too. That is an internal and difficult question to address. As to whether reviewers should be paid, I think that may send incentives in the wrong direction. One wants as wide a fraction of the community with appropriate expertise to be involved as possible. The way we might see it internally in departments at Oxford would be that this is a contribution to the community, just as chairing or sitting on committees in the university is considered part of what you need to do in order to make the place and the business function. But the question is about keeping an appropriate lid on that. There are various ways in which one might do that, but in mentoring terms one would often say, "You want to review twice as many papers as you publish and you want to review three times as many grant applications as you submit." That tempers your workload and makes the whole system work.
Chair: We want to pursue that with a few more precise questions on exactly where you left off.
Q223 Stephen Metcalfe: I want to go back to the issue of recognition, if I may. Assuming that we are not going to pay peer reviewers, do you think that peer review should be formally recognised as part of an academic’s work and included in the criteria for evaluation, promotion and those kinds of things? If you do agree with that, what barriers are there to putting a system into place that would handle those?
Professor Rees: Research funding councils-because peer review is important for funded research as well as journals-have set up colleges of reviewers. To be a member of such a college does provide some recognition, and journals might want to think about referring to their ad hoc reviewers-as opposed to their editorial boards, the ones they use on a regular basis-in that way. Some journals will publish, at the end of the year, the names of the ad hoc reviewers they have used during the year, but I think that is really neither here nor there. A college system might work.
Dr Metcalfe: Vitae has worked with universities and research funding councils to develop a researcher development framework which recognises the broadness of being a researcher. Very much embedded within that is the importance of publication, both from being an author but also contributing as a researcher in terms of the peer review system, mentoring early career researchers in terms of their development. So, being a researcher in higher education is very much part of the job description. I would say it should be recognised as a workload in the same way as other aspects.
Q224 Stephen Metcalfe: Is that a change?
Dr Metcalfe: It would make explicit things that are now implicit in terms of being an academic, specifically in relation to peer review. I think a challenge for early career researchers is: how do you get into that system? How do you become a reviewer? It is very often by recommendation. There are journals that do open calls for reviewers, but it is usually part of the apprenticeship of being nurtured as a researcher by your principal investigator or senior academic. There are issues in terms of how we support those researchers to become involved and good at peer reviewing on both sides of the fence, but also how we recognise it by acknowledging the broadness of a researcher’s activities.
Professor Walmsley: It can perhaps be made more explicit, but I think it is somewhat explicit now; that is, in evaluating people for promotion one would look not only but primarily at the quality of the research undertaken and published but also at how they have contributed to the working of the community. That will come internally, as to how they have worked within the department-and evidence for that would be sought-and as part of the larger community. One would normally expect to see, on a CV for evaluation, that somebody had undertaken reviewing for research councils or, in this sense, professional societies or other publishers for journals.
As to the extent one wishes to quantify that to a greater degree, I would be cautious about that. One doesn’t want to be prescriptive. One wants to see some threshold of evidence that people are playing a role without being quantitative about exactly how much they ought to be doing.
Q225 Stephen Metcalfe: Do you think that the peer review burden is growing at the moment and is becoming a greater issue for researchers? If so, who is carrying most of that burden? Is it young and inexperienced researchers or mid-career experienced researchers? Who carries most of the burden at the moment, assuming you agree there is a burden?
Professor Rees: There is certainly a burden. As the sector expands and you have more people applying for promotion and jobs as well there is all the peer review involved in writing either references for people where you have been nominated or assessments where you are required to assess their work. That is another whole area of reviewing. I think that is one that is increasing enormously. Certainly, promotion systems are requiring more external peer review at all levels.
It is hard to say where the burden falls exactly because it would depend on the nature of the reviewing, which kinds of journals and the field, because different journals will call for different numbers of referees. For example, if it is interdisciplinary research you are more likely to ask for a wider number; similarly if it is an interdisciplinary research grant proposal. I would have difficulty in saying that the burden falls definitely in this or that part of the community. It is spread but not in an even way.
Professor Walmsley: I would concur entirely with what Professor Rees said. As she noted, peer review is pervasive throughout all aspects of the academic endeavour, not just publishing. For example, one may distinguish that senior people will have more to do with evaluation of others through promotion, tenure, awards or what have you and perhaps at the editorial end in publishing, and that younger people will have more of the burden of evaluating individual articles or specific research grants.
Q226 Stephen Metcalfe: I want to turn now to the issue of training. Dr Metcalfe, I think that the number of people who take up the opportunity for training either in peer review or other publication training is relatively small. Why do you think that is?
Dr Metcalfe: The tradition is very much an apprenticeship model. You learn the system by doing it in terms of writing papers, submitting them and maybe getting feedback from your principal investigator. Where that works it is absolutely fantastic in terms of somebody taking an early career researcher through the system and giving them feedback before they submit their articles, maybe having several researchers in their group giving feedback, and showing them how the whole process works. But, because we are a collective in terms of the academic community, there is opportunity for that process not to be as well supported throughout the whole of the academic community as it could be.
The challenge is how to help a researcher maximise their opportunities of publication at submission so that they reduce the number of rejections and the amount of revision they have to do in response to reviewers’ comments. Formal training in that process is one way in which you can do that. From some of the research Vitae has done, we have evidence of increases in the success rates of grant applications and fellowship applications by having formal training and development in working within the peer review systems for both of those. We could do more in advance of a researcher having to submit their first paper or grant proposal so that they are better informed and therefore more expert about how the whole process works.
Q227 Stephen Metcalfe: You would be in favour of moving towards a more formal requirement for training. You consider that it should be provided across all higher education institutions.
Dr Metcalfe: No, I wouldn’t go down that route. I think the opportunities to have training should be there. The process by which a researcher learns to become expert is very much up to their individual circumstances. If they are getting good individual nurturing and mentoring by their PI, that is great. But there should also be the opportunity, for those researchers who respond more to formal training, to have that available as well.
Q228 Stephen Metcalfe: Who do you think should pay for that training?
Dr Metcalfe: Collectively, we all have a responsibility for it to work. I think journals have a responsibility to support and provide more information about what is required and to contribute to the training of their reviewers. I think institutions have a responsibility, as signatories to the concordat for the career development of researchers, to ensure that those opportunities are there. I think research and funding councils and Government have an obligation to provide enough funding within the entire system to make available that kind of training for our early career researchers.
Q229 Stephen Metcalfe: Does anyone else want to comment on that issue?
Professor Walmsley: I would concur that a combination of both mentorship, which I think has a primary role, and some elements of non-mandated training would continue to be very helpful. Those aspects are in place at Oxford, for example. I think they work well to bring people into a system in a way that helps them to understand and use it.
Professor Rees: We introduced a system in Cardiff where people who had submitted research grant proposals, for example, made them available with the referees’ comments so that our young researchers could read them and, by looking at a whole set of these in their field, could understand what a good proposal looked like and the kinds of things that reviewers came up with.
Stephen Metcalfe: But you would not want to establish a training framework.
Q230 Chair: Before you leave that, on the one hand, you argue in favour of maximising fairness across gender, with which the Committee would agree 100%, but, on the other, you are not seeking to create a formal structure. How are you going to get one without the other?
Professor Rees: As to gender, we should be following the lead that has already been shown in the United States among research funding bodies and, in particular, health journals in fields like cardiology, which say that people who are describing research that they have conducted, which involves clinical trials, should specify the participants in those clinical trials. Very often they do not do so and that has led to deaths, particularly of women. But it is also a difficulty for men who experience breast cancer, for example, and who can be prescribed Tamoxifen that has never been tested on men. I think that in order to get more rigorous excellence in research we need to pay proper attention to this.
Dr Metcalfe: One group of stakeholders we have not talked about in terms of responsibility is the early career researchers themselves. I think that is where your comment comes together.
Q231 Chair: Particularly for that group; I think it is fundamental to them.
Dr Metcalfe: There are responsibilities for those early career researchers in terms of thinking about what they need in order to be excellent researchers and more professional in their contribution to the community. Individual researchers need to be able to identify whether or not they need more training in this area and whether they understand the system and processes they have to go through. It may be that an individual researcher will prefer or have the opportunity to have some mentoring, rather than go on a training course. I think that flexibility in how people develop their expertise-
Q232 Chair: But are you placing that onus on the institution where they are based?
Dr Metcalfe: Not just the institution. The institution has to have the provision and ensure there is enough opportunity for researchers to get that professional development. The responsibility is on the individual researcher to take advantage of those opportunities and ensure that they are developing their own expertise and understanding of the entire system. It is not purely an institutional responsibility.
Q233 Stephen Metcalfe: I can see merit on both sides. There is a growth in the approaches to peer review at the moment but it is changing. Do you believe that researchers will have the wherewithal to adapt to all these changes as they come along? I am also concerned that there might be a time impact. Do you believe that they will have enough time to get involved in things like post-publication commenting, interactive public discussion and all those kinds of areas that may be more time-consuming than getting involved in pre-publication review?
Professor Rees: Innovation and engagement is very much the third arm of research activities. My own institution has introduced it as a main criterion in promotion and researchers have to give evidence of what they have done and how it is of excellent quality. I think researchers are more and more aware, through the impact agenda, of the need to do this, but time is finite. Therefore, one needs to engage in this kind of dissemination and strategic relations with organisations that might benefit from the research in a very effective way. Compared with conducting research and teaching, with which we are fairly familiar, in all fairness we are only in the process of developing effective ways of dealing with the agenda of impact and engagement.
Professor Walmsley: I don’t think that publication itself will play a major role, for two reasons. First, pre-publication is of finite duration; post-publication is ad infinitum. So there will be a half-life associated with that anyway. On post-publication I think the important questions will be: "Is this piece of work relevant to what I do? Has it made me rethink? Has it led to a new fruitful outcome that I will then go on and publish?" That is going to be the important thing rather than a commentary. It is: how is this piece of research now used?
Dr Metcalfe: I would concur with that, but I would also add that critical debate is a very important aspect of the academic community. If it is in an area of specific interest to you then researchers will engage in that debate process because it is fundamental to the way research is done.
Q234 Graham Stringer: Last week the chair of COPE told us that if a university had not fired at least one academic for fraud there was something wrong with the university. Do you agree with that statement? Do you think she was right? If so, have your universities sacked any academics over the last five years for academic fraud?
Professor Walmsley: The answer to the second question is no.
Graham Stringer: So you are not firing.
Professor Walmsley: Yes. I would say that the answer to the first question is probably no, too, but I want to be careful not to suggest that there are no ethical challenges within publication. We have a process within Oxford, which I am certain is the same at other places, to deal with that. Part of the question is: how does it come to your attention?
Q235 Graham Stringer: Before you go on, is that process published?
Professor Walmsley: Yes. It is available on the website through the Research Integrity Portal and there is an access through the SkillsPortal to that as well.
Q236 Graham Stringer: So, that is true for all?
Professor Walmsley: Yes. How do you identify and find that out? I think that internally, at the pre-publication end, there is great onus on researchers. As more and more papers are published with joint authors there is joint responsibility for doing that. That could lead in two directions: first, increased pressure to get it right because there are more people involved in the discussion; but, secondly, the chance that you will miss a trick or two because there are more people contributing. It is a difficult tension. Once the paper is out there, if an external party notes something that looks challenging I guess we will hear about that either from the external people or from editors themselves. If an editor writes, we will be able to investigate that internally.
As to the sanction of firing someone, I said I have not known that to happen, but there are certainly lower levels of discipline that can happen. However, I don’t know what the statistics are at Oxford.
Q237 Graham Stringer: If you were to have an investigation, would you publish the results? Would that become a public document?
Professor Walmsley: I don’t know the answer to that question.
Q238 Graham Stringer: Would you write and tell us?
Professor Walmsley: Yes, I will do that.
Professor Rees: It is not an issue I have come across, I have to say, in all my years as Pro Vice Chancellor at the university. I think the mechanisms-particularly the vetting by ethics committees to ensure adherence to ethical guidelines early on in any process-sometimes involve external agencies as well. Given the different groups of people involved at all the various stages in getting approval to put in a research grant from within the institution, and the refereeing that goes on all the way through, I think it is quite difficult to be very successful in conducting fraud.
Q239 Graham Stringer: In asking the question, the case that has been in my mind through this investigation-and there are other cases-is that of Andrew Wakefield. He had a peer reviewed paper. The institution he was working for was asked to investigate and the journal was asked to investigate. The truth was only arrived at after 10 years because a particular journalist pursued it doggedly. What you seem to be saying is that it isn’t a problem. Like the chair of COPE, I just wonder whether you are looking at this hard enough. For instance, as in the Wakefield case-and there are others-do you think there should be an external statutory regulator?
Professor Walmsley: Are we discussing now the issue of the research itself getting underway, or the issue associated with the publication part of it?
Q240 Graham Stringer: I am discussing the situation where an accusation is made that somebody has published academic work that is fraudulent in some way because they have fiddled the results or the sampling in whatever way.
Professor Walmsley: It is not quite clear to me how such a regulator might work except that, before the research was started, as Professor Rees would have said, if it had had certain components it would have gone through an ethical review. That aspect and the method by which data would be taken and the way that would be curated, etcetera, would have been laid out at that stage. On aspects where that was done I think there is good evidence to look back internally at the trail.
In places where that was not done and there were no ethical concerns or issues before the research started, it is hard to see quite how an external regulator would work. One might say, given new research council moves in terms of data curation itself, there will now be a trail that is both internally and publicly accessible to look back at all of that matter, but it could be quite burdensome and onerous. I think the internal processes are reasonably robust. You raised the question of whether it was public, and that is an issue I am willing to look at.
Professor Rees: The processes are more robust now than they have been historically.
Q241 Chair: Hang on a minute. You have not come across cases of fraud. How do you know that the processes in place to deal with them are robust?
Professor Walmsley: It is true. I noted that we had not come across cases of fraud in respect of publications. There have certainly been other issues-I will not say it is fraud-associated with ethical conduct of research where we have processes that parallel those we might use for publication, and they have been shown to be effective. In respect of publication I would say that at least within my tenure they are untested, but I think there is good evidence that parallel processes for other issues work.
Q242 Graham Stringer: Are you talking about plagiarism now?
Professor Walmsley: I am talking about conduct of research on a grant and the terms under which grants were obtained.
Q243 Graham Stringer: I put a final question that you may or may not be able to answer. You listened to the session today. We have been looking at the process, whether peer review can detect fraud and all sorts of things. As I have listened to and read the evidence presented to us I have had a feeling that we should be looking more at the commercial pressure on both editors of journals and researchers. Do you think we should be concentrating more on that? What do you think that we should be concentrating on as a Committee? What would you hope to see emerge from the piece of work that we are doing?
Professor Walmsley: I think the primary consideration is: how does one validate the quality of scientific research? The peer review is certainly one element of that. You have heard from Professor Rees, Dr Metcalfe and myself how pervasive that is in all aspects of the academic side of research. If one is thinking about alternative methods, it would be good to understand how one saw those working across a wide range of different activities where this kind of process works. It is hard for me to understand how one would find a replacement within publications that wouldn’t be equally useful, or useless, in other spheres. Either confirming or identifying robust alternatives would be a good outcome.
Professor Rees: What I would like to see coming out is some consideration of the new Spanish legislation passed in January which is now binding on all publicly-funded research in Spain. It is an attempt to try to promote excellence. There are some quite innovative ideas in that legislation. Also, as I mentioned earlier, I think we have lessons to learn from the publicly-funded scientific research in the United States on guidelines which again is designed to promote excellence in research.
The European Commission is in the process of drafting two communications, one on modernising universities and one on structural change in research. There are some very interesting discussions going on there about possibly producing a directive for and with Member States which again is designed to increase the quality of research in the EU in the context of increasing global competitiveness. There is much research activity and good practice from those different sources that I hope this Committee might want to take on board because of the amount of work and consideration that has gone into developing it.
Q244 Chair: But don’t the Spanish and US examples take us a little beyond the scope of UKRIO and closer to a regulatory framework?
Professor Rees: It is really more about integrating best practice, particularly in Government-funded organisations such as research councils. I think there is a lot to be gained from looking at that.
Q245 Gavin Barwell: I want to ask some questions about gender and other biases. Both Professor Rees in an article and COPE in its submission to us said that the evidence on gender bias in peer reviews was contradictory. What is your assessment of the scale of the problem, even if it can be clearly quantified?
Professor Rees: I think there are two aspects to this problem. The one that I have been talking about is the way in which clinical trials are often conducted on one sex but the pharmaceutical products or the results that come out of that research are prescribed to both. I am sure you are all familiar with the research that suggests an aspirin a day is very good for heart disease. That was conducted on clinical trials of 27,000 people, so it was fairly robust research but they were all male and heart disease in women is different. There are contraindications for women if they take that kind of medication for heart disease. To me, these are poor methods. It is not doing the research properly.
Q246 Gavin Barwell: Are you saying that is bad science?
Professor Rees: It is bad science, exactly.
Q247 Gavin Barwell: It is not gender bias in peer review. You would expect the peer review to pick up that that is bad science.
Professor Rees: If the peer review is done properly the research will be done properly. Many Government-funded research institutions and journals in the States and other countries insist that that is revealed. Therefore, it is made clear so peer reviewers can do their job properly. You can’t really get funding to do medical research in the States from publicly-funded sources unless you explain your research design on those criteria. I have to say it is not just sex. Research for products that would lead to treatment for Parkinson’s disease is conducted on very young rats. There is an issue on all kinds of criteria like that. I think this is an extraordinary waste. It is also true in engineering. For example, in developing cars, test dummies have been used in crashes. First, they used only male passengers. There are differences in whiplash and so on because of different frames, but as far as the air bag in the passenger seat is concerned, if you happen to be pregnant the first thing it does is kill the foetus. There is lack of attention to the diversity of human bodies in research. To my mind, that is laziness and poor methodology. Peer reviewers need to be able to assess the quality of research effectively.
The other difficult aspect of peer review is whether the gender of the person who is applying for a research grant or has written the article makes a difference. Do people operate with a preconceived notion of quality? There is a whole series of studies about this. For example, evidence from the States suggests that if John Mackay or Jean Mackay submits an article it will be peer reviewed more favourably if it is by John Mackay. There is a whole series of papers to that effect. How do we deal with this? I add that this is discriminatory against both men and women. It seems to me that in the selection of reviewers to serve on research council boards, journals or promotion panels we need transparency so that people can apply and be assessed against merits to gain those positions, and we need turnover so it is not the same people doing that assessment for 20 or 30 years. We might want what is unfortunately called double-blind reviewing so you don’t know the sex. Equally, there is unconscious bias against people with foreign-sounding names. Brazil’s science minister is very concerned about this and has encouraged academics there to co-author with people from the US or Europe who may have a surname that is more familiar to reviewers. Double-blind marking would deal with that unconscious bias that affects peer reviewers as it does any other member of the public.
Q248 Gavin Barwell: My final question is to Professor Walmsley. You stated in your evidence: "There is now quite a lot of evidence as to the practical issues which need to be tackled to make the review of funding proposals and of work submitted for publication fairer …" Can you tell us what Oxford is doing to address those issues?
Professor Walmsley: Like Professor Rees, the question of how one populates panels and encourages people to be involved in this process is a key element of that. Therefore, the kinds of things we would be looking at are: internal training, as discussed before, coupled with encouragement that people need to be actively involved in that; and that the chairman or chairwoman of a panel makes sure the terms of reference and what people are being asked to do are clear.
Chair: Thank you very much for staying with us for so long. It has been an intriguing afternoon. Thank you very much for your answers. Professor Walmsley, if you have any references to the documents to which you referred Graham perhaps you would provide the links to them.
Professor Walmsley: Certainly.
Chair: Thank you very much.