
ORGANIZATION FOR HUMAN BRAIN MAPPING

Registered Reports in Human Brain Imaging

4/24/2018

GUEST POST BY CHRIS CHAMBERS
[Image: Professor Chris Chambers, Cardiff University]
The biomedical sciences are facing a rising tide of concerns about transparency and reproducibility. Among the chief concerns are inadequate sample sizes, lack of sufficient detail in published method sections to enable replication, lack of direct replication itself (and notable failures when attempted), selective reporting of statistical analyses in order to generate desirable outcomes, suppression of negative results, lack of sharing of materials and data, and the presentation of exploratory outcomes as though they were hypothesis-driven. Collectively these problems threaten the reliability of biomedical science, the generation of theory, and the translation of basic science into clinical applications and other settings.

Human neuroimaging in many ways represents a perfect storm of these weaknesses, exacerbated by the fact that two of the main techniques, MRI and MEG, are extremely expensive compared with adjacent fields. Researchers using these methods face tremendous pressure to produce clear, positive, publishable results, usually in small samples.

Until recently such issues were rarely discussed openly, perhaps for fear that it would bring a relatively embryonic discipline into disrepute and collapse funding opportunities. But they have been simmering below the surface for a long time. Years before irreproducibility was headline news, at one imaging centre where I worked we noticed that we were running out of data storage faster than we were acquiring new data. After some detective work we learned why. Researchers were repeatedly analysing and reanalysing the same datasets, and with every reanalysis they were inadvertently duplicating huge quantities of raw data. The incident was illuminating about normative research practices.
[Image: Data fishing in brain mapping (Image credit: Neuroskeptic)]
When I raise this scenario with colleagues, their typical response is “Well, duplication of raw data is a silly mistake but most fMRI research is exploratory and exploration is vital for science”. This is true, of course. There is a huge amount to gain from performing reanalysis of existing, complex datasets. But the key, then, is whether such exploratory research is documented transparently as exploration. In an exploratory field, and especially one that often relies on inferential statistics, shouldn’t publications faithfully report all analyses that were attempted before settling on the ones that drove the conclusions? And does this happen in fMRI? Of course it doesn’t. Pick up a copy of any neuroimaging or cognitive neuroscience journal and you’ll find article after article purporting to test hypotheses using complex analyses, each of which is presented as though it was planned in advance. Given the pressures on researchers to produce clean results and frame them as the outcomes of hypothesis testing, it comes as no surprise that virtually no two published fMRI studies report the same analysis pipeline.

There are many solutions to this quagmire, including greater sharing of data, materials and code, and I also believe one major piece of the puzzle is preregistration of hypotheses and analysis plans. Many in the neuroimaging community are skeptical of preregistration and what it might say about our scientific approach, which sits uncomfortably between confirmatory and exploratory modes and relies on massive investment to remain afloat. When your typical experiment involves hundreds of analytic decisions, each of which can be considered “legal” yet produce slightly different outcomes, there is a natural tendency to fear that pre-specification of any particular route through the garden of forking paths will lead to unpublishable, possibly confusing findings. We thus feel pressured to apply the “human element” to bring order to chaos. Researchers will routinely spend months poring over their data and analyses using sophisticated statistical methods, but almost none appreciate the risks of their own biases in interpreting one outcome among hundreds or thousands of possibilities.
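The statistical stakes of that flexibility are easy to illustrate. Under the simplifying (and deliberately hypothetical) assumption that candidate analyses behave like independent tests at α = 0.05, the chance of at least one spurious “significant” result grows rapidly with the number of analyses tried:

```python
# Family-wise false positive rate for k hypothetical independent tests
# at alpha = 0.05: P(at least one false positive) = 1 - (1 - alpha)^k.
# Real pipeline variants are correlated, so this is an upper-bound sketch.
alpha = 0.05

for k in (1, 10, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:3d} analyses -> {fwer:.2f}")  # 0.05, 0.40, 0.99
```

Correlated pipelines inflate the error rate less than this, but the inflation remains substantial, which is why transparent reporting of all attempted analyses matters.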

This is why I have pushed hard for neuroimaging journals to offer Registered Reports (RRs). The RR format eliminates the fear of preregistration producing unpublishable results because, for RRs, the editorial decision is made before the results are known. Detailed study protocols are reviewed before researchers commence the research, and following detailed review of the theory and methods, the highest quality submissions are accepted for later publication regardless of how the results turn out. Researchers can also report additional exploratory (unregistered) analyses, clearly flagged as exploratory, and are encouraged to include preliminary experiments to validate a proposed analysis pipeline.

This week sees the launch of Registered Reports as a new article option at NeuroImage as part of a two-year pilot initiative, co-edited by me, Birte Forstmann (University of Amsterdam), Rob Leech (King's College London), Jeanette Mumford (University of Wisconsin-Madison), Kevin Murphy (Cardiff University) and Pia Rotshtein (University of Birmingham). In addition to the usual features of Registered Reports, we are also inviting proposals for secondary analyses of existing but unobserved datasets, innovative approaches using Bayesian adaptive optimisation to combine the strengths of exploratory and confirmatory science, and review/perspectives articles on the potential costs and benefits of preregistration in neuroimaging research. Submissions are invited in any area of human neuroimaging and readers can find detailed author guidelines here.

Preregistration in neuroimaging is a high-stakes intervention. The combination of high analytic flexibility, high risk of bias, and the high expense of data generation means that it has the potential to yield perhaps the greatest scientific benefits of any field to which it has been applied so far. But because of this methodological complexity, preregistration also brings some of the greatest challenges.

One such challenge is power analysis. Many of the 103 journals that currently offer RRs require high prospective power to detect the smallest effect of theoretical interest (e.g. 0.9 at Cortex, 0.95 at Nature Human Behaviour), but we know that MRI in particular is underpowered to detect theoretically plausible effect sizes, and we also know that many researchers do not have the resources to fund large studies. At one level this problem can be solved by consortium projects. Initiatives such as the Psychological Science Accelerator, StudySwap and the ENIGMA neuroimaging consortium are blazing a trail to facilitate more definitive team-oriented science. However, the main benefit of RRs lies not in the support of big science but in the elimination of publication bias and selective reporting. Therefore, to make the format as accessible as possible to the neuroimaging community, the RR format at NeuroImage will not set a minimum required statistical power or sample size. Instead we will simply require authors to justify the sample size they are proposing.
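To see what a sample-size justification of this kind can involve, here is a minimal power sketch using the standard normal approximation for a two-sided one-sample test (e.g. a group-level contrast). The smallest effect of interest, d = 0.3, is a placeholder rather than a recommendation, and an exact t-based calculation would give a slightly larger N:

```python
from statistics import NormalDist

def required_n(d, alpha=0.05, power=0.9):
    """Normal-approximation sample size for a two-sided one-sample test
    detecting a standardized effect of size d (Cohen's d).
    n = ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist().inv_cdf
    return ((z(1 - alpha / 2) + z(power)) / d) ** 2

# Hypothetical smallest effects of theoretical interest:
print(round(required_n(0.3)))  # ~117 participants for d = 0.3
print(round(required_n(0.5)))  # ~42 participants for d = 0.5
```

The steep dependence on d is the crux of the problem: halving the smallest effect of interest roughly quadruples the required sample, which is why a written justification of both d and N is more informative than a fixed power threshold.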

A bigger question is whether preregistration in neuroimaging is even feasible. To what extent will researchers feel able to prespecify their analysis pipelines in advance? For an RR, if an exact pipeline cannot be prespecified then researchers will be given the opportunity to prespecify data-dependent contingencies (e.g. if the data look like this, then we will apply this filter, etc.). They may also propose a blinded analysis strategy or an adaptive design in which some decisions will be post hoc, but actively protected from bias. Can such approaches succeed? I believe they can, but for me the most fascinating outcome of this particular RR launch will be to discover how a community of talented analysts tackles this challenge.
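A prespecified contingency of this kind can be written down at Stage 1 as an explicit decision rule, so that the choice is driven by the data but fixed before anyone sees the results. The thresholds and pipeline labels below are invented purely for illustration:

```python
def preprocessing_choice(mean_framewise_displacement):
    """Hypothetical prespecified contingency, registered before data
    collection: the motion-handling strategy depends on observed head
    motion (mean framewise displacement in mm), but the rule itself is
    fixed in advance. Threshold values here are illustrative only."""
    if mean_framewise_displacement > 0.5:
        return "exclude participant"
    if mean_framewise_displacement > 0.2:
        return "motion scrubbing + standard pipeline"
    return "standard pipeline"

print(preprocessing_choice(0.1))  # standard pipeline
print(preprocessing_choice(0.3))  # motion scrubbing + standard pipeline
```

Because the rule is registered, readers can verify that the branch actually taken was determined by the data rather than chosen after seeing which branch gave the cleaner result.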


Chris Chambers is a professor of cognitive neuroscience at the Cardiff University Brain Research Imaging Centre and guest section editor for Registered Reports at NeuroImage.

2 Comments
David Seminowicz
4/26/2018 05:27:57 am

I am generally in favor of registered reports and think this is a move in the right direction for the field. I wonder about a few things, though, which I'm sure you've thought about. First is additional reviewer burden, and second is whether journals with registered reports end up with all the null results. They're related in the following scenario: a paper is registered at NeuroImage and undergoes review, but then when the results come out and strongly support the hypotheses, the authors decide to send the manuscript to a "higher" journal. I don't know how common this scenario is or whether there are ways to prevent it - would love to hear your thoughts.

Chris Chambers
5/22/2018 02:20:49 am

Hi David
Good questions. Both of these are addressed in the Registered Reports FAQs at https://cos.io/rr/

Specifically:

----------
The peer review process for Registered Reports includes two phases. Won’t this create too much additional work for reviewers?

It is true that peer review under the RR model is more thorough than conventional manuscript review. However, critics who raise this point overlook a major shortcoming of the conventional review process: the fact that manuscripts are often rejected sequentially by multiple journals, passing through many reviewers before finding a home. Under the RR model, at least two of the problems that lead to such systematic rejection, and thus additional load on reviewers, are circumvented. First, editors and reviewers of Stage 1 submissions have the opportunity to help authors correct methodological flaws before they occur by assessing the experimental design prior to data collection. Second, because RRs cannot be rejected based on the perceived importance of the results, the RR model avoids a common reason for conventional rejection: that the results are not considered sufficiently novel or groundbreaking.

We believe the overall reviewer workload under the RR model will be similar to conventional publishing. Consider a case where a conventional manuscript is submitted sequentially to four journals, and the first three journals reject it following 3 reviews each. The fourth journal accepts the manuscript after 3 reviews and 3 re-reviews. In total the manuscript will have been seen by up to 12 reviewers and gone through 15 rounds of review. Now consider what might have happened if the study had been submitted prior to data collection as a Stage 1 RR, assessed by 3 reviewers. Even if it passes through three rounds of Stage 1 review plus two rounds of Stage 2 review, the overall reviewer burden (15 rounds) is the same as the conventional model (15 rounds).


What is to stop authors with in-principle acceptance withdrawing their manuscript after getting striking results and resubmitting it to a higher impact journal?

Nothing. Contrary to some concerns, authors are free to withdraw their manuscript at any time and are not “locked” into publishing with the journal that reviews the Stage 1 submission. If the withdrawal happens after in-principle acceptance has been awarded, many journals will simply publish a Withdrawn Registration that includes the abstract from the Stage 1 submission plus a brief explanation for the withdrawal. At several journals, authors must agree to the publication of a Withdrawn Registration should the need arise.
----------

At NeuroImage we are implementing a Withdrawn Registration policy. To quote from the author guidelines at https://docs.google.com/document/d/1LQwi3-fZ8nyRWAgTxKyWc35biAK8Kb2skyagMGVbkF4/edit#

"It is possible that authors with IPA may wish to withdraw their manuscript following or during data collection. Possible reasons could include major technical error, an inability to complete the study due to other unforeseen circumstances, or the desire to submit the results to a different journal. In all such cases, manuscripts can of course be withdrawn at the authors’ discretion. However, the journal will publicly record each case in a section called Withdrawn Registrations. This section will include the authors, proposed title, the abstract from the approved Stage 1 submission, and brief reason(s) for the failure to complete the study. Partial withdrawals are not possible; i.e. authors cannot publish part of a registered study by selectively withdrawing one of the planned experiments. Such cases must lead to withdrawal of the entire paper. Studies that are not completed by the agreed Stage 2 submission deadline (which can be extended in negotiation with the editorial office) will be considered withdrawn and will be subject to a Withdrawn Registration."

To my knowledge there have been no Withdrawn Registrations so far at any journal offering RRs.

Header image created by Thiebaut de Schotten & Batrancourt  
www.brainconnectivitybehaviour.eu