Two critical letters from the Sheffield Deanery have been posted on the EMJ online web site. The first concerned a particular BET (vasopressin versus adrenaline (epinephrine) in cardiac arrest) and challenged the completeness of the search, claiming that a brief search of Medline had uncovered recent reviews that had been missed. It also called into question the whole peer review process for BETs. The letter was erroneous: the author of the BET (Kerstin Hogg) posted a robust defence of its search and selection strategy, pointing out that finding papers is only a first step, and that the allegedly "missed" reviews had in fact been found but added nothing, as they contained no additional primary papers. The issue of the peer review process was not pursued further, since the argument that the review process for this BET had failed (which underlay the complaint) had been disproved. Dr Webster raises a number of points in addition to those raised by Dr Locker earlier in this correspondence. I feel the BestBETs team needs to answer three of them:
The BETs are not of low methodological quality; rather, they are of lower methodological quality than a systematic review. They are in fact of much higher quality than almost all of the literature reviews contained in the original articles that are mooted to replace them. They have a clear aim (three part question), a clear methods section (search strategy), clear results (evidence table), a discussion (comments), and conclusions (clinical bottom line). The main weakness, in that the method is not overt, lies in the selection strategy for the papers (that is, why particular papers were selected and why others were rejected). The BestBETs web editors group is currently working towards posting selection algorithms on the BestBETs web site, but in the meantime the method is set out in the original paper in JAEM that is referenced in every BET published. My opinion is that it would take a huge amount of space to publish an overt selection section individually for each BET (listing the papers rejected and why) for no real gain.
Waste of space
BestBETs are also published in Archives of Disease in Childhood and Interactive Cardiovascular and Thoracic Surgery, and Emergency Nurse is also likely to begin publication this year. Journals such as the BMJ publish POEMs and extracts from Clinical Evidence, and the ACP Journal Club is entirely based on such articles. BestBETs have been reviewed in both the ACP Journal Club and Annals of Emergency Medicine and are regarded as the best way to answer real time questions in the specialty. Furthermore, at the time of writing four BETs are among the top 10 hits (January), so we are clearly giving most readers something they want.
The BestBETs are specialist articles (shortcut reviews) and require peer review by expert reviewers. All BestBETs are dual authored, with one reporter and one checker: this is the first review. The BestBETs team have set up a system whereby all BETs in emergency medicine (whether generated internally or externally) are brought to the weekly evidence based journal club at the Manchester Royal Infirmary for group review. At this meeting the construction of the question, the search strategy, the data extraction, and the veracity of the clinical bottom line are discussed: this is the second review. Internally generated BETs are sent back to their original author for reconstruction and brought back for further discussion once this has been done; externally generated BETs are allocated to a club member and brought back for further discussion once the required changes have been made: this is the third review. Each BET is then allocated to a web editor who independently reviews the search strategy, paper selection, data extraction, and clinical bottom line: this is the fourth review. Finally, just before publication, the search is rechecked and rerun by an information officer (as MeSH terms may have changed) and any new papers are obtained. The table and conclusions are updated by a senior editor and the article is submitted to the EMJ: this is the fifth and final review. I suspect this process stands up well against conventional peer review (judging from what we see on the hanging committee). I would term it specialist peer review, and I believe it to be eminently defensible.
In summary, the BestBETs are not low quality, merely lower quality than the highest possible quality (systematic reviews). If this were a reason not to publish, then the EMJ would be easy to edit (blank pages every month), as we very rarely receive papers of the highest quality. They are popular with the readership, and the peer review system is probably more rigorous than that applied to other articles.