Authors: Rahul Gaurav, Elizabeth DuPre

The OHBM Standards and Best Practices (SBP) committee aims to advance the work of our community by helping to develop and promote best scientific practices within the field. Committee co-chairs Jack Van Horn and Peter Bandettini sat down with Ilona Lipp and Claude Bajada to learn more about the history of and ongoing efforts in the committee. Here we briefly exchange ideas on how OHBM can help in maximizing the quality of science by introducing people to best practice recommendations—for instance, through the Committee on Best Practices in Data Analysis and Sharing (COBIDAS), which was initiated by the OHBM Council in June 2014.

Ilona Lipp
Thank you very much. It's great to have both of you here. Could you tell us a little bit about the history of the committee?

Jack Van Horn
This whole effort really got started with OHBM trying to encourage reproducible science in human neuroimaging. We recognized that we had a large community of people who were in need of resources to help demonstrate that their science was reproducible, that they were reporting the kinds of information they needed to report in their publications, and that some kind of best practice might be in order. As I recall, some of the early discussions of this were as far back as the meeting in Seattle in 2013. Then it was just an idea, but it grew, and we pursued it in Geneva at some pre-event meetings, particularly when Karen Berman was the Chair of OHBM. The idea was to craft a committee to look at some of the recommendations for how people can follow best practice in their neuroimaging studies. I was asked to help, and I reached out to a number of people—including Tom Nichols—and said that we'd like to have a committee that looks into this. Tom graciously decided to do it, which I'm ever thankful for. Council was so impressed with the initial COBIDAS report that they asked Peter to look at this across different domains of neuroimaging, not just reporting for brain imaging but for things like EEG, MEG, or network analytics. Those activities have recently begun. We have an international committee which represents pretty much every time zone on the planet. We're very excited to be able to meet with them to exchange ideas about how OHBM can make science better through best practice recommendations and how we can work with other organizations to identify points of synergy. This is not only a top-down thing where the Council asks for something and we pursue it—it's also bottom-up, so anybody in the organization can help develop these recommendations. If they have an idea for something where they'd like to see best practices, they can recommend it to us, or they can point out where some other organizations have already begun this work, and perhaps we can tie in with them.

Peter Bandettini
COBIDAS stands for the Committee on Best Practices in Data Analysis and Sharing. We're coming out with COBIDAS-2 because the landscape of practices continues to change, and one of our goals is to keep updating for new analysis approaches. For instance, the popularity of dynamic functional connectivity and resting state in fMRI has exploded since the original 2016 report. COBIDAS is not meant to be a rigid guideline; it's meant to be a set of suggestions for best practices that is endorsed by OHBM. But at the same time, it is meant to be a guide for people.
Even just in fMRI, reproducibility is a real challenge—and we're still getting our heads around that—but we also need to expand; for example, as we've done with COBIDAS MEG. Lucina Uddin has also put together best practices in large-scale brain network nomenclature, around how you name the various networks.

Ilona Lipp
So the original COBIDAS report was the motivation to form the committee, and now you're trying to cover different aspects of neuroimaging, different techniques, more broadly. Is your aim to have several of these reports in a similar style to the original COBIDAS report?

Peter Bandettini
Yeah, the goal of the whole process is building a report and then publishing it, either in Aperture or other places. The original COBIDAS report was published in Nature Neuroscience. Ideally, we'd have both a formalised report and an accompanying webpage detailing the best practices.

Jack Van Horn
By publishing and having an online resource, we create multiple formats of this information for multiple audiences. And it communicates to the scientific community at large that OHBM is serious about best practice. If you want the really nitty-gritty details, you can either look in Aperture, or you can look on the OHBM website. About every five years, we ask those committees to reconvene to review the recommendations and make sure that they are up to date with current technologies, methodologies, and implementations thereof. It's important to remember how these best practice documents are created. They first get drafted, and that usually takes about a year from the initiation of the project. Then there's a period where that document is made publicly available for comments, so people can provide ideas or suggestions. That feedback then gets incorporated into the draft, and the committee goes through another iteration on it. Then, eventually, the report is finalized. Next, the OHBM SBP committee will look at the resulting draft and, if we feel it captures best practices for our community, make an endorsement to the OHBM Council.

Ilona Lipp
So, who actually puts together these documents? Is it people from the SBP committee? Or do you gather a group of experts, and how do they go about it? You mentioned earlier that evidence is quite important for defining standards.

Jack Van Horn
We usually reach out and deputise somebody to lead that effort. When creating the original COBIDAS report, I was in the leadership of OHBM as both program chair and education chair, and I felt that was a little too close. It looks a little too top-down. We've carried forward that initial idea of a more community-driven approach by asking for people to lead these recommendations. So, either somebody will come to us and say, "hey, we've got an idea for a best practice." They'll make a proposal, and we'll take a look at it. If it looks reasonable, we'll ask them to lead that effort, as we've done with Lucina Uddin for network nomenclature. Other times, we may hear from OHBM Council that they'd like to see a best practice group on a given topic, and then we would go out and identify experts in that area and ask them to take on the challenge of putting together best practices. To help in this process, we have some documentation on the OHBM website, and people can pursue that. We ask that they put together a reasonably sized group of experts; these groups can't be too large or you get a cat-herding problem.
When following our recommended procedure, we usually have something within a year to 18 months that we can begin circulating to the community for input and feedback.

Peter Bandettini
And when the SBP committee reviews it, we try to identify whether there are any gaps, or—conversely—whether anything is overly prescriptive. We try to fine-tune it in such a way that it reaches the right balance.

Claude Bajada
How do you decide what is the most common approach in the community versus the most appropriate approach? There must be some judgement calls.

Peter Bandettini
We haven't run into any major issues or arguments regarding that, as these ad hoc committees usually make those judgement calls. It's assumed that they're experts in the field, and they're watching the literature. Most people can agree upon what to report. Maybe physicists might have a slightly different view than cognitive neuroscientists, but the committee is made up of a group of people who all have different expertise. So when they reach a consensus, I think that consensus is reflective of most of the community. But you're right, it isn't overly formalized. I think that people just give their expert opinion, and they argue that opinion within the committee.

Jack Van Horn
It's important, though, to ensure that these best practice recommendations don't become overly burdensome. Peter mentioned we're not overly prescriptive; we don't want to kill innovation. We want there to be enough flexibility to encourage people to move in different directions, to try something out and report on it. There are some things that we're just not going to know until they emerge or become common in the literature. So, we certainly want to encourage people to pursue innovation, but also provide elements that will allow somebody else to take either that same data, or to collect data similar to it, and replicate the experiment that was described.

Peter Bandettini
We're trying to find the minimum prescription that adds value. When submitting papers, if you can refer to how you've adopted the COBIDAS guidelines, then that makes the review process more transparent and easier in that regard. If there's some standard, you're able to ratchet the field forward. So, we're trying to find that minimum amount of prescription just to have everyone agree, and then keep on ratcheting forward in that regard.

Claude Bajada
What are the general thoughts of the SBP committee on projects like fMRIPrep, or similar standardized glass-box pipelines that you can use, can clearly reproduce, and that also generate boilerplate text describing the processing for publications? Is that something that you're going for? Or are you not too keen on that sort of approach because it's overly prescriptive?

Jack Van Horn
I think it's fine to mention that fMRIPrep or other tools are available. It's just that if you're going to use them, report it right. If you use some in-house, home-grown method, you still need to report your preprocessing in detail. What we're hoping for, by having these reporting standards, is that you don't need to describe everything you did from scratch every time.

Peter Bandettini
Yeah, that type of conversation comes up all the time. It turns out that in a lot of these large-N studies, the preprocessing does matter, but reporting metrics does too. We're starting an effort to try to get groups to report quality control metrics. And there are many different kinds of software out there, which makes this a challenge.
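[Editor's note: as a toy illustration of the kind of quality control metric Peter mentions, the sketch below computes voxel-wise temporal SNR for an fMRI run using nibabel and NumPy. The filename and the crude intensity-based mask are hypothetical placeholders, not an SBP or COBIDAS prescription.]

```python
import numpy as np
import nibabel as nib  # assumes nibabel is installed

# Load a 4D BOLD run (hypothetical filename).
img = nib.load("sub-01_task-rest_bold.nii.gz")
data = img.get_fdata()  # shape: (x, y, z, time)

# Temporal SNR: voxel-wise mean over time divided by std over time.
mean_ts = data.mean(axis=-1)
std_ts = data.std(axis=-1)
tsnr = np.divide(mean_ts, std_ts, out=np.zeros_like(mean_ts), where=std_ts > 0)

# Crude intensity-based "mask" for illustration only; a real QC pipeline
# would use a proper brain mask.
mask = mean_ts > mean_ts.mean()
print(f"Median tSNR in masked voxels: {np.median(tsnr[mask]):.1f}")
```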
Ilona Lipp
Do your guidelines focus on reporting terminology, or do you intend to branch out into guidelines for how to set up experiments, data sharing, statistical corrections, etc.? How do you determine what recommendations to focus on?

Jack Van Horn
It would be just too difficult, I think, too fraught, to be telling everybody how big their sample size should be, because we don't know what their interests are. The onus is on the investigators to do the power calculations that they should do. And those calculations depend on factors like: are you doing fMRI? Are you doing DTI? If you're looking for morphological changes, the effect sizes are going to be different depending on which modality you choose. We're not here to be very top-down. We'd rather provide guide rails so that people provide the information necessary to replicate an experiment, no matter how many subjects were in it. I think that's one of the guiding principles.

Peter Bandettini
We're also trying to find things that make a difference in the experiment. For instance, we're finding that measuring behavior more closely from trial to trial makes a difference. But it's mostly on the end of reporting, rather than saying there are certain things you have to do. Obviously, reporting a power analysis is useful. And yet a lot of people don't do that.
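[Editor's note: for readers unfamiliar with the power analyses mentioned here, the snippet below is a minimal sketch using statsmodels. The effect sizes are arbitrary placeholders; as Jack notes, real neuroimaging power calculations depend heavily on the modality, design, and analysis.]

```python
from statsmodels.stats.power import TTestPower  # assumes statsmodels is installed

# Required sample size for a one-sample (or paired) t-test at 80% power,
# alpha = 0.05, for a few assumed effect sizes (Cohen's d).
analysis = TTestPower()
for d in (0.3, 0.5, 0.8):  # illustrative small/medium/large effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {int(round(n))} subjects needed")
```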
Claude Bajada
In order to perfectly reproduce an experiment, you need access to the same data. COBIDAS has recommendations on data sharing, but how do you deal with different legal frameworks, particularly the European General Data Protection Regulation (GDPR)?

Jack Van Horn
Certainly, data sharing! Everybody's got a different opinion about how it should be done, but I think the general feeling is that having primary data available from published experiments is an absolute must. It's irresponsible not to do that. There are now a number of repositories for storing that data, whereas back when I started sharing there was only one. I think as long as you are forthcoming about where people can find your data among the available repositories, then that's great. If you've put it in COINS, if you've put it in OpenNeuro, that's great. Having said that, I don't know the full implications of GDPR and how that intersects with human subjects research. One would hope that if the data are properly anonymized, then they're not like human subjects data anymore, because you've gone and cleared them of any identifying information. But I'm not a lawyer, so I don't claim to know the full legal implications. And as far as people reporting those things, if they're being forthcoming about what they've done, then that's fantastic. We don't necessarily want to prescribe which de-identifying methods are good or bad. But for removing the face or altering the facial features of structural MRIs: as long as they're not impeding the data processing, they're probably okay. It's not really our purview to assess these methods—we're hoping that others will investigate those things and publish on them. Our key area of interest is having people report what they've done.

Peter Bandettini
Each university has its own set of rules, with some universities being very prescriptive in that you have to explicitly say in the experimental protocol and participant consent forms that you're going to share the data. And if you haven't, maybe you can't share it. But I think what we're doing here is trying to bootstrap the field, to have more examples of sharing and to show the benefit of sharing. Over time, those university policies will slowly change. So we're not trying to take this legal or procedural problem head-on; we're just trying to build enough of a consensus that the universities or hospitals or other institutions will change their policies over time. We're just saying: these are the tools to do this, and it might be easier to publish down the line if you do share your data. So hopefully that will build momentum in that regard.

Ilona Lipp
The COBIDAS recommendations have been out for a few years. Have you noticed any change with regard to the quality of reporting? Do you have a feeling for how many people are making use of these guidelines? How are you promoting them and incentivizing their use in the field?

Jack Van Horn
Right now, I don't know if we have any metrics. But I think education is a key part of this work. One of the things I've tried to do—in both the undergraduate and graduate-level neuroimaging coursework that I teach—is to let my students know that the COBIDAS recommendations exist. So, they very often have to do various exercises where they're looking at the literature and trying to decide whether or not a study has been reported according to COBIDAS recommendations. Others may have a different take on how to teach COBIDAS today. But certainly, the more we can do to introduce people to the COBIDAS recommendations, the more they'll be adopted.

Peter Bandettini
Yeah, I don't think anyone has done a formal look at the literature. My sense is that adoption is increasing, though. Data sharing is at the forefront for most people doing studies these days, and it's sort of a cultural shift. But I haven't done a formal analysis of how much data is shared. It's obviously increasing. But it's hard to point to this committee as being a major factor in this general cultural shift.

Jack Van Horn
The committee hasn't existed formally long enough yet, but the hope would be that with the existence of this committee and its engagement with the community, more people become aware of things like COBIDAS for improving their scientific practices. And as updates come out, they'll respect those too, and recognise, "oh, I need to report this as well, and it's easy to do." So our science will get better as a result.

Peter Bandettini
Right now we're just starting out with a small number of techniques, but you can imagine there are all kinds of other areas—like multimodal integration—that need best practices to guide the field.

Jack Van Horn
Our thinking is that we hope to help create and endorse a core set of standards in some of the major areas of human brain mapping. Then we will probably look to other, more focussed efforts to find people who are creating standards for more specific parts of the field, doing it better than we could do it. For example, the International Neuroinformatics Coordinating Facility (INCF) does some work with regard to the development of standards and best practices, and I've reached out to them on a couple of different things. Beginning when COBIDAS first started recruiting in 2015, OHBM has recognised the reproducibility challenges that neuroimaging faces as a field, and other groups have started to look to OHBM as a major driver of more reproducible science in neuroimaging. So, we're part of an important effort, and I'm glad we're doing it.

Ilona Lipp
You get different people from the community to engage, which seems like an important part of driving this forward.

Jack Van Horn
There's enormous opportunity for those who are interested in best practices to make recommendations or point out to the committee where things are emerging. We've been asked by Council to be the sanctioning body for what gets considered "OHBM-endorsed best practice," but we're still a small group. So any help is welcome, and more eyes may point out different things for us. We're glad to have those recommendations.

Peter Bandettini
It's still a relatively new effort, and we're still in the process of figuring out the scope of the work. To date, we haven't had many proposals. So we'd love to have more from people who feel that the field would be advanced by having new standards developed and promoted by OHBM.

Jack Van Horn
This is exactly the kind of thing that I think helps the community see that neuroimaging is a mature science, that we recognize that we have flaws, and that we want to put the right guardrails in place to help people do the best science they can.

Peter Bandettini
And even with our existing standards, there are still areas to improve. One thing we're struggling with in the second version of COBIDAS is that you end up with a huge checklist of things that people have to go through and ask: "Did you do this? Then report it like this." The eCOBIDAS effort is trying to create an online, machine-readable way to record and report that information. It's still a work in progress, but once that's in place, it'd be nice to have software packages make it automatic in some regard. So, once that gets more developed, it will hopefully ratchet the field forward and keep iterating towards better solutions for reproducible science.
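[Editor's note: to make the idea of machine-readable reporting concrete, here is a minimal sketch of what such a record could look like. The field names and values are purely hypothetical illustrations, not the actual eCOBIDAS schema.]

```python
import json

# Hypothetical, illustrative reporting record; not the real eCOBIDAS schema.
report = {
    "checklist_version": "draft-example",
    "acquisition": {
        "field_strength_tesla": 3.0,
        "sequence": "gradient-echo EPI",
        "tr_seconds": 2.0,
    },
    "preprocessing": {
        "software": "fMRIPrep",  # example tool named in the interview
        "version": "unknown",    # a real tool would fill this in automatically
        "motion_correction": True,
    },
    "statistics": {
        "multiple_comparison_correction": "FWE, cluster-level",
    },
}

# A machine-readable file like this could be generated by analysis software
# and attached to a publication alongside the prose methods section.
with open("reporting_record.json", "w") as f:
    json.dump(report, f, indent=2)
```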
Claude Bajada
Where should readers go if they want to get more information about the SBP committee?

Jack Van Horn
You can find us on the OHBM web page. That page lists our three current committees. First, the COBIDAS committee, again chaired by Tom Nichols, is undertaking a five-year review of the existing COBIDAS recommendations. Then there's the MEEG group, run by Aina Puce and Cyril Pernet, which has already produced an article. Finally, Lucina Uddin's activities are going full steam ahead, looking at best practices for large-scale brain network nomenclature. Those are the three which are currently going on, and we're keen to hear ideas from others about existing activities from other organizations or things that OHBM should take a look at. So, if any of the readers have recommendations, we strongly encourage them to go and take a look at our web page. If you scroll down on that page, you can click on one of the little blue boxes there to make a recommendation or point out an idea for a best practices effort.