Tips for Reviewing Scientific Manuscripts: Part 2 – Review Articles
Writing review articles is common in the life sciences, especially in medicine, and many people believe it's not a difficult task but rather an easy way to start (or boost) your publishing career: you just pick a topic you know more about than your colleagues, select a few articles, and summarize them. You can write your manuscript quickly, and it doesn't even require you to have or analyze any data from your own lab!
In fact, writing a review requires good analytical and critical thinking skills as well as a very good command of the English language, both to understand the papers you read and to write a nuanced analysis.
This gap between perceived and actual difficulty means that journals are flooded with low-quality review submissions. Some journals have gone so far as to stop accepting unsolicited reviews and only consider reviews that a member of the editorial team has invited an author to write.
Since review articles undergo peer review just like any other article, today we will discuss what to look for when you're asked to review a review!
Traditional vs. systematic reviews
When writing a review, you can decide between a traditional (also called narrative) review and a systematic review. While a systematic review has a clear, ideally predefined methodology, this is not the case for a traditional review.
Because a systematic review requires you to describe how you searched for articles, why you included or excluded them, and how you analyzed the information, it increases reproducibility and, if done correctly, reduces the risk of bias. However, as you can probably imagine, it also requires quite a bit of work.
If you are new to systematic reviews, have a look at this introductory video by the Cochrane Collaboration.
When you're reviewing a review, you should therefore first check what type of review you're looking at. Ideally, this is clearly stated in the title. Once you know what you're looking at, look for the question the review is trying to answer. Ideally, this is in the title as well; if not, it should be in the abstract. If it's not in the abstract either, that's the first thing you should criticize!
Now you have to decide whether the type of review is suited to answer the research question. A systematic review is always the right type; being systematic when doing science is never wrong. But if you're looking at a traditional review, you should have a closer look at the research question.
Questions for traditional reviews
If the authors are just trying to give an overview of developments in a field and don't make strong recommendations, a non-systematic approach can be acceptable. For example: 'Developments in artificial intelligence for the detection of super rare tumor XY – a traditional review'. This type of review is usually characterized by a broad rather than very narrow research question and aims to show overall trends.
Questions for systematic reviews
If a question is very precise, and especially if it requires a quantitative answer, there is no way around a systematic review – for example: 'Does drug A cause more complications than drug B?' or 'Do patients with tumor XY live longer after surgery or radiotherapy?' If the authors have not conducted a systematic review when they should have, this is a cause for major criticism. If, in addition, the authors present a strong conclusion such as 'based on our findings, drug B should be preferred', you should call for the article to be rejected.
You can't answer such a question with a non-systematic review, because the risk that the authors have overlooked, ignored, or missed studies that are crucial for a balanced conclusion is far too high!
Reviewing systematic reviews
When you write a systematic review, there are several resources that can help you structure your approach, such as the PRISMA guidelines. As a reviewer, you can check whether a systematic review adheres to these guidelines. Some things you should check in particular include:
- Is the research question stated clearly? A helpful acronym from PRISMA is PICOS (participants, interventions, comparisons, outcomes, and study design).
- Are the inclusion and exclusion criteria reasonable? Do they create a bias?
- PRISMA requires authors to "present full electronic search strategy for at least one database". Run that search strategy in the respective database yourself and compare the number of results you get to the number the authors report in their manuscript (see the first sketch after this list).
- You can't check every paper that the authors included, let alone every paper they screened, unless you want to reproduce the whole review. But you should open a few articles and see if the authors did their job with sufficient care.
- Do the authors present sufficient data for each of the included studies, including the risk of bias?
- If the authors use statistical methods to combine the results of the included studies (a meta-analysis), are those methods clearly explained? (See the second sketch after this list for what such pooling typically looks like.)
- If the review is not blinded, google the authors — is there a potential conflict of interest that the authors did not disclose? Or is one of the conflicts of interest that the authors did disclose a cause of bias in the manuscript?
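To make the search-strategy check concrete, here is a minimal sketch of how you could rerun a reported PubMed strategy programmatically via NCBI's E-utilities API and compare the hit count with the manuscript's PRISMA flow diagram. The search term below is a made-up placeholder, not a strategy from any real manuscript; running the strategy directly in the PubMed web interface works just as well.

```python
import requests

# Illustrative placeholder only: paste the strategy exactly as reported in
# the manuscript's methods section or appendix.
SEARCH_TERM = '("drug A"[Title/Abstract]) AND ("complications"[Title/Abstract])'

# NCBI E-utilities endpoint for PubMed searches
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": SEARCH_TERM,
    "retmode": "json",
}

response = requests.get(url, params=params, timeout=30)
count = int(response.json()["esearchresult"]["count"])
print(f"PubMed currently returns {count} records for this strategy")

# Compare this number with the count reported in the PRISMA flow diagram.
# Small differences are expected because new papers are indexed after the
# authors' search date; large differences deserve a question to the authors.
```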
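And to illustrate what "combining the results" typically means, here is a minimal sketch of fixed-effect, inverse-variance pooling with made-up effect estimates. Real meta-analyses often use more elaborate random-effects models, which is exactly why the methods need to be spelled out clearly.

```python
import math

# Hypothetical per-study results: effect estimates (e.g. log odds ratios)
# and their standard errors, as extracted from the included studies.
effects = [0.42, 0.10, 0.35, -0.05]
std_errors = [0.20, 0.15, 0.30, 0.25]

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# inverse of its variance, so more precise studies count for more.
weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```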
Generally speaking, a key goal of a systematic review is reproducibility. If the authors do not provide sufficient information so that you or other researchers in your field could try to reproduce the results, you should criticize that.
In the end, you should also check the limitations that the authors mention. While you might have heard that systematic reviews provide very high-quality evidence, this is severely limited if the articles that a systematic review analyzes are not high quality themselves. For instance, if a systematic review tries to answer whether drug A causes more complications than drug B, but only retrospective studies or case reports exist, then there is still no prospective, let alone randomized, data, and this should be stated clearly. Since many people equate systematic reviews with high-quality evidence, it is all the more important to be upfront and clear about limitations as well as possible sources of bias at both the study and the review level.
Reviewing traditional reviews
For traditional reviews, the rules are much less strict. As a consequence, however, the conclusions that authors draw from a non-systematic approach should also be more cautious!
Some things you should do to check whether the authors have done good work:
- Do a little literature search and see if there are any important papers that the authors did not include. What could be the reasons for not including them?
- Just as you would do for a systematic review, open one or two articles and see if the authors did their job with sufficient care.
- Google the authors — is there a potential conflict of interest that the authors did not disclose? Or is one of the conflicts of interest that the authors did disclose a cause of bias in the manuscript?
General considerations
One thing that you should always do when reviewing is to check what the article in front of you adds to the existing literature. Ideally, you have considerable expertise in the field you're reviewing in, but this isn't always the case, and a little online search to see what has already been published never hurts.
Are there other reviews on the topic? Maybe even systematic reviews? Do they come to similar conclusions? What exclusion and inclusion criteria do they use? This can help you to assess whether the criteria of the article you're reviewing are appropriate.
Another thing that you should be wary of is self-citation. For some authors, a major motivation to write a review, beyond gaining another publication, is increasing the number of citations of their existing publications. Why does that matter to authors? The number of citations is, alongside the impact factor of the journals an author has published in, an important metric for assessing how 'good' a researcher is. There are even metrics like the h-index that condense citations into a single number that can be compared between researchers. While the fixation on citation metrics has been criticized for quite some time, citations are still an important criterion in many grant applications and for academic positions. Since there are researchers out there who rely almost exclusively on self-citation (check out this Nature article on the topic if you're interested), it makes sense for them to write a review and cite as many of their own articles as they can get away with.
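For the curious, the h-index is simple enough to compute yourself. This little sketch (with made-up citation counts) shows the definition in action: an author has index h if h of their papers have each been cited at least h times.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers with at least 3 citations
```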
However, not every author who writes a review and cites several of their own articles is guilty of inappropriate self-citation. If an author writes a review in a narrow field where very little research exists and they are one of the main contributors, they have no choice but to cite their own articles (this should, however, be mentioned as a limitation of the review). If the field is rather broad, yet the author still cites their own papers heavily even though there are other papers they could or should cite instead, this is something you should criticize and back up with examples.
Closing remarks
Make sure to have a look at the other articles of this series. In the next parts, we will discuss reviewing other types of articles, such as case reports or original research. If you found this article helpful, let me know! If you disagree with some of the things I said, let me know as well!