By Peter Bandettini & the OHBM Neurosalience production team
In this week’s podcast, you’ll hear about clinical applications of resting-state fMRI from Dr Michael Fox. You’ll hear some of the highlights of his research, from its beginnings at Wash U, including his early work on resting-state fMRI and the issue of global signal regression, to his more recent pioneering work on lesion network mapping. Through this, you’ll find out how lesions can impact behavior through their effects on functional networks. This approach is a promising inroad for fMRI toward clinical utility.
Michael D. Fox, MD, PhD, is the founding Director of the Center for Brain Circuit Therapeutics at Brigham and Women’s Hospital and Associate Professor of Neurology at Harvard Medical School. He is also the inaugural Raymond D. Adams Distinguished Chair of Neurology and the Kaye Family Research Director of Brain Stimulation. He completed a degree in Electrical Engineering at Ohio State University, an MD and PhD at Washington University in St. Louis, and Neurology Residency and Movement Disorders Fellowship at Mass General Brigham. Clinically, he specializes in the use of invasive and noninvasive brain stimulation for the treatment of neurological and psychiatric symptoms. Dr. Fox’s research focuses on developing new and improved treatments for brain disease by understanding brain circuits and the effects of neuromodulation.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By Peter Bandettini & the OHBM Neurosalience production team
In this week’s podcast, Dr Catie Chang walks us through her thought process regarding pulling information out of the fMRI time series. After discussing some of the ongoing issues in fMRI, such as whether or not to use global signal regression to remove noise, she leads us into a commonly overlooked effect in fMRI—that of changes in arousal and vigilance. In particular, this has measurable effects on the resting state fMRI signal. She discusses the perspective that one person’s artifact may be another’s useful signal, depending on the goal of the study.
Catie Chang, Ph.D. received her B.S. in Electrical Engineering and Computer Science from MIT, and her M.S. and Ph.D. in Electrical Engineering from Stanford University. While in graduate school, she broke new ground in fMRI by publishing a seminal paper using time-frequency analysis of resting-state fMRI, showing that it is quite dynamic. Since then, she has been exploring the effects of basic physiological processes, such as cardiac function and respiration, on the fMRI signal, and has recently been uncovering unique information about how changes in vigilance influence the time-series signal.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By Peter Bandettini & the OHBM Neurosalience production team
In this week's episode, Peter talks directly to MRI scanner vendors. Together, they try to reconcile the importance of fMRI in research contexts with the market pressures of developing clinical applications. As fMRI has virtually no clinical market, does it really influence vendor decisions on pulse sequences and hardware? Could more be done, aside from making fMRI more clinically relevant? In this discussion, you’ll hear some fascinating history from the early days of echo planar imaging and high-speed imaging, as well as insight into the processes by which products are prioritized. You’ll also hear one possible path by which fMRI may become more clinically useful.
R. Scott Hinks, Ph.D. is the retired Chief Scientist of GE Healthcare's MR division. He received his Ph.D. from the University of Toronto in 1985, where he began his studies of MR physics and imaging. For over 34 years, Scott pursued a career in MR research in both industry and academia, specializing in imaging and system physics. He was the principal developer of FSE and led the technical development of EPI for both fMRI and DWI. His work has resulted in numerous publications and over 34 patents. In his most recent role as Chief Scientist for GE Healthcare’s MR division, he was actively engaged in every aspect of MR imaging and worked in close collaboration with leading academic researchers worldwide.
Franz Schmitt, Ph.D. is the retired Chief Scientist of Siemens’ MR division. He received his Ph.D. from the University of Munich and worked for Siemens from 1983, overseeing development of EPI, gradient and RF coils for both 3T and 7T, and pTx imaging. He worked on site at the Martinos Center for a few years in the early 2000s and has been actively engaged in academic research worldwide.
Ravi Menon, Ph.D. is a Professor of Medical Biophysics, Medical Imaging and Psychiatry at Western University, where he is also a member of the Graduate Program in Neuroscience and the Graduate Program in Biomedical Engineering, and Scientific Director of Western’s Centre for Functional and Metabolic Mapping (CFMM), Canada’s only ultra-high field MRI facility. He received his Ph.D. in Medicine from the University of Alberta and performed his postdoctoral work in the laboratory of Kamil Ugurbil at the University of Minnesota, where he helped to pioneer fMRI.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By Charlotte Rae, on behalf of the SEA-SIG
The Sustainability and Environment Action (SEA) SIG has formed three new Working Groups, to tackle the environmental impact of the annual meeting, assess environmental implications of neuroimaging research activities, and educate our community on these.
What are the new Working Groups?
In December 2020, we held two open meetings to talk about the priority actions for our new SIG with the OHBM community. We had colleagues attend from across the world, who shared fantastic ideas on how we should make OHBM activities more sustainable.
From these meetings, there was a pretty clear consensus that we needed to tackle three areas: the Annual Meeting, neuroimaging research pipelines, and education. So, we have set up three new Working Groups that will focus on these particular domains.
The Annual Meeting Working Group will assess the environmental impact of the Annual Meeting, investigate sustainable conference models, and make recommendations to the Council for how to create a more sustainable Annual Meeting beyond COVID-19.
The Neuroimaging Research Pipelines Working Group will assess the environmental impact of neuroimaging research pipelines, investigate how we could do our research more sustainably, and create resources and publications to support neuroimagers in greening their research practices.
The Education & Outreach Working Group will collaborate with the other two to educate our community about the impacts our research activities have, including putting on events around the Annual Meeting. It will also seek to collaborate with industry and sister neuroscience societies. In these collaborations and in guiding the SIG's own activities, it will use insights from psychological and neuroscientific work on how humans respond to communications about climate change and environmental issues.
How can I get involved?
We hope that there are lots of OHBM members who are interested in participating in these groups to help us achieve our sustainability objectives!
For example, in the Annual Meeting group, we want to comprehensively assess the environmental impacts of recent in-person (e.g. Rome, 2019) and online (2020, 2021) meetings. Looking forward, we hope to then investigate how much our meeting footprint would be reduced if we adopted potential alternative conference models, such as hybrid (with some in-person and some online content), hub-and-spoke (where we have several meeting locations and you travel to your nearest), or moving to a biennial meeting. Many other societies and conferences are considering such options (Figure 1). For the Working Group, we need colleagues who are interested in looking at these options and putting together a report for Council. We are very fortunate that Sepideh Sadaghiani, an experienced member of the Program Committee, has come on board to chair this Working Group.
In the Neuroimaging Research Pipelines Group, we need colleagues who are up for digging into all the details of a neuroimaging workflow, from hardware and data acquisition to analysis and computing infrastructure. Ideally, we want to quantify the potential environmental implications of all these stages, so we can produce resources for the neuroimaging community that would allow researchers to plug in their pipeline protocol and get a measure of its environmental footprint. Of course, the next step is then to provide resources that enable our community to change this for the better: establishing what best practice looks like for sustainable neuroimaging. In this group, we will need colleagues from across OHBM disciplines who have experience with all sorts of neuroimaging processing streams. We might also seek to collaborate with external experts such as cloud computing providers.
The Education & Outreach Working Group will have quite a broad remit around educating our community about the impacts our research activities have, in concert with the other two groups. Here we need colleagues who have experience in (or want to get experience in!) areas such as putting on events around the annual meeting, like symposia and socials; interfacing with industry and sister neuroscience societies; and perhaps even bringing psychology-based knowledge of what works well when communicating about climate change, to make sure we are operating as effectively as possible in the SEA-SIG as a whole.
As well as general group members, we are looking for two individuals who might be interested in Chairing the Neuroimaging Research Pipelines and Education & Outreach groups.
I’m in! What are the next steps?
If you would like to participate in any of the three groups, or would like further information, please do get in touch with us at firstname.lastname@example.org. We welcome informal enquiries if you are not sure before you sign up to participate!
For further details on the aims and objectives for each group, see our new website at ohbm-environment.org
If you know a colleague who would be ideal to contribute to one of our groups, please do pass on our details. And you can retweet our Twitter post announcing the groups.
We look forward to sharing updates on the Working Groups’ progress soon!
By Peter Bandettini and the OHBM Neurosalience production team
In this week's podcast, Peter gets a bird's-eye view of modeling messy biological systems, namely the brain, from Professor Danielle Bassett. They talk about the challenges of measurement accuracy, what scale might be most informative for modeling, and how to make do with what we have. On the clinical side, Danielle discusses network control theory for modulating networks for therapy, and the limitations of current technology for modulation. They consider the limits of network modeling and the search for an idea as powerful for the brain as “natural selection” was for biology. In the second part of the podcast, Peter and Danielle discuss bias in science and what Danielle is doing to increase transparency and combat bias.
Danielle Bassett, PhD, is currently the J. Peter Skirkanich Professor at the University of Pennsylvania, with a primary appointment in the Department of Bioengineering and secondary appointments in the Departments of Physics and Astronomy, Electrical and Systems Engineering, Neurology, and Psychiatry. Dr. Bassett received her B.S. in Physics in 2004 from Penn State University. She received a Ph.D. in Physics in 2009 from the University of Cambridge, UK, as a Churchill Scholar and an NIH Health Sciences Scholar. Following a postdoctoral position at UC Santa Barbara, she was a Junior Research Fellow at the Sage Center for the Study of the Mind. In 2013, she joined the University of Pennsylvania as an assistant professor, and in 2019 she was promoted to full professor. She is also founding director of the Penn Network Visualization Program, a combined undergraduate art internship and K-12 outreach program bridging network science and the visual arts. Her primary work develops network models to derive principles of brain function.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By the Neurosalience production team: Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
OHBM has a new podcast: Neurosalience! You can listen to it in your car, while out walking, or just in the ever-present home office. Through Neurosalience, you’ll discover state-of-the-art topics and current controversies in brain mapping. The host for the podcast, Peter Bandettini, has lined up a stellar cast of interviewees ranging from brain scientists to hardware vendors and health professionals. This includes finding out about publication biases affecting gender and racial minority groups with Dani Bassett, network neuroimaging in neurological populations from Michael Fox, circuit-based neuromodulation from Catie Chang, and much more. Get all of this insight through your favourite podcast apps, including Spotify, Apple Podcasts, Anchor and Google Podcasts.
We launch with a brief introduction to the podcast, a fireside chat between Peter and Rachael Stickland (one of the OHBM Communication Committee producers for the show). Then the first full episode explores Aperture, the new open-access publishing platform powered by the OHBM. Through discussions with founding members and the Editor in Chief, you’ll learn how Aperture came about and what it hopes to achieve.
OHBM Neurosalience episode 00: An introduction to the podcast
Peter Bandettini chats with Rachael Stickland, where they set out some of the exciting conversations you’ll hear on OHBM Neurosalience. The name ‘Neurosalience’ highlights the aim of this podcast - to put a spotlight on important developments, discoveries and controversies in the world of human brain mapping. Find out why this podcast was set up, what the main themes and topics will be, and what to look forward to with the first few episodes.
Peter Bandettini, Ph.D. is Principal Investigator of the Section on Functional Imaging Methods and Director of the Functional MRI Core Facility at NIMH. Recently, he has also established the Machine Learning Team, the Data Science and Sharing Team, and the Center for Multimodal Neuroimaging within NIMH to help all intramural investigators with their neuroimaging studies. He has been working on fMRI methods for 30 years.
Rachael Stickland, Ph.D. is a postdoctoral fellow at Northwestern University in the Applied Neuro-Vascular Imaging Lab (ANVIL). Her work focuses on characterizing cerebrovascular function in healthy cohorts and in Multiple Sclerosis, using fMRI with breathing challenges and gas inhalation.
Aperture, a new open access publishing platform for neuroimaging research
Peter Bandettini introduces Aperture, a new open access publishing platform for neuroimaging research. Peter is joined by one of the co-founders, Jean-Baptiste Poline, along with the new Aperture Editor-in-Chief, Tonya White, and the journal manager, Kay Vanda. Together, they discuss the motivation, history, steps for creation, and current status of Aperture. It was created with the strong support of the Organization for Human Brain Mapping, and aims to be a peer-reviewed platform for publishing not only papers but also various other types of research objects that often do not find space in conventional journals, including data, educational tutorials and code. While there is still work to be done before it is fully up and running, many insights into this process are shared and discussed.
Tonya White, MD, PhD is an associate professor in the Department of Child and Adolescent Psychiatry at Erasmus University Medical Centre in Rotterdam. Her primary research goal is to apply neuroimaging techniques to obtain a better understanding of the genetic and environmental factors associated with typical and atypical brain development, in the hope that this will translate into preventing or decreasing the morbidity of severe psychiatric disorders.
Jean-Baptiste (JB) Poline, Ph.D. is an Associate Professor in the Department of Neurology and Neurosurgery at McGill; the co-Chair of the NeuroHub and Chair of the Technical Steering Committee for the Canadian Open Neuroscience Platform (CONP) at the Montreal Neurological Institute & Hospital (the NEURO); and a Primary Investigator at the Ludmer Centre for Neuroinformatics & Mental Health.
Kay Vanda is the journal manager of OHBM Aperture, working out of the OHBM central offices in Minneapolis, MN. She is a key part of this entire effort, handling all of the organization and corresponding with authors, reviewers, and editors.
2020 was such an interesting year; it was certainly not the one I was waiting for. Due to several issues related to the pandemic, I unofficially took a leave from thesis work and had a chance to meet a lot of people virtually, collaborate, learn and grow. Although so many of us were stuck at home, open-science-driven events like NeuroMatch and BrainHack created opportunities to connect with colleagues and peers. This turned out to be hugely impactful for me and for other people like me—in other words, students and early-career researchers based in countries with limited resources.
And to think, it all started with a tweet….
Figure 1. Neuromatch 1.0 call for people (image credit: https://twitter.com/KordingLab/status/1239986383550365696).
The Neuromatch 1.0 virtual (un)conference was held from 30th to 31st March, 2020. The organizers were so open and encouraging in setting up this collaborative experiment that I couldn't just sit back and wait for the event! I quickly volunteered to help with scheduling, reaching out to speakers before their talks, testing the setup, and monitoring the chat during the talks. Even though it was entirely virtual, I learned a lot, both academically and socially. It was amazing to witness such vibrant online community interaction. That time also coincided with my coming out and getting comfortable with my chosen name and pronouns. It felt so affirming to be accepted as who I am within the academic community. It was beyond my imagination!
But Neuromatch was just getting started. I was delighted to help organize the second Neuromatch conference, Neuromatch 2.0, held from 25th May to 3rd June, 2020. It had more than 3000 registrants from all over the world. Although it used the same platform and structure as the first Neuromatch, we saw even more talks, posters, presentations, and debates. This increased community interaction was encouraging, and showed how it was still possible to bring researchers together even without international travel.
Figure 2. Neuromatch 2.0 map of registrants all over the world. (image credit: https://twitter.com/KordingLab/status/1264998609411604480)
After my experiences at Neuromatch, I was overjoyed to discover another large-scale, explicitly collaborative event within the OHBM community: this year’s OHBM brainhack.
OHBM Brainhack 2020 was my first Brainhack experience and one of my biggest encounters with the open science community after the Neuromatch conferences. It was truly phenomenal to see enthusiastic hackers, creative projects, and inspirational training sessions, and to witness the facilitation of open science research virtually all around the world. It was an amazing privilege for me to serve the community as an event host. In addition to comprehensive guidelines, dedicated help channels, and project-specific coding spaces for hacking, a live help desk in the gather.town virtual space was dedicated to guidance and socializing. It is no exaggeration to say that OHBM Brainhack 2020 provided attendees a life-like event in the comfort of their homes. As the first virtual OHBM Brainhack, it was a stellar start. For future events, I believe there will be even more diverse representation and more opportunities for generally underrepresented neuroscience researchers around the globe.
After the OHBM Brainhack, I shifted my focus to the Neuromatch Academy summer school (NMA for short). It was a 3-week worldwide summer school and, although entirely online, a tightly structured experience for 1757 interactive students and 191 teaching assistants (TAs). It was not only an intense course but also a huge mental challenge for students, TAs, and organizers. It was incredible that such a comprehensive course could happen at low or no cost for students. Diversity, equity, and inclusion were core NMA values that I championed in my involvement. During the preparation period, I helped work towards increasing diversity and inclusion, as well as enhancing the student experience via extracurricular activities such as weekly karaoke and daily yoga sessions. During NMA itself, I helped fellow TAs in my time zone master the material so they could teach effectively, solved daily problems with the material, and also served as the TA of a group of students in my time zone (a “pod”).
From the interactions among NMA volunteers and TAs, I learned that establishing an inclusive community is key to better, more effective science, learning, teaching, and academia.
One of the more interesting things I was involved in was meeting several artists to commission neuron doodles to enrich the virtual experience. I greatly enjoyed explaining the summer school, and basic yet detailed concepts of the neuron, to people without any STEM background. It is hard to find the words to describe witnessing an artist's creative process. Seeing the sparks of human creativity turn into an art piece was very inspirational.
Figure 3. Neuromatch Academy, Mozilla Hub virtual space, student avatars interacting with neuron doodles (image credits: 1st image https://twitter.com/phant0msp1k3/status/1286102782890536960, 2nd image https://twitter.com/neurograce/status/1283047247299706880).
During my time as a TA, I realized that the best way of learning is teaching. I also saw that providing a safe space is the most important thing you can do to help students flourish. NMA's Code of Conduct and automatic violation-reporting system effectively helped us preserve that safety.
Having gone from Neuromatch, to OHBM, to NMA, I was happy to return to Neuromatch for the Neuromatch 3.0 conference. Although it kept largely the same format as Neuromatch 2.0, it again expanded significantly in size and scope. I believe it was one of the most inclusive neuroscience conferences ever: with six main themes of parallel tracks, and main-stage events like lectures, panels, discussions, and open affinity-group sessions (Black, queer, first-generation), it gave the ambience of a big non-virtual conference. I served mostly as a backend person, clicking the necessary buttons to ensure that every speaker's talk ran as smoothly as possible. It was wonderful to see some of the submitted NMA projects turned into complete papers and submissions to NMC; it made me proud as an NMA TA.
All of these experiences, and the vibrant Brainhack community, encouraged me to organize (almost single-handedly) and lead Brainhack Ankara from December 1st to 3rd, the first ever Brainhack in the Middle East! Taking part in both the Neuromatch event series and the OHBM Brainhack encouraged me to spread open science concepts and run a local event with the help of the global Brainhack community. Since the local community was new to Brainhack, it made sense to focus on learning the basics of open science tools and on explaining why and how they can enhance the way we do science. With about 30 attendees, it was a relatively small event by comparison, but I learned that starting small is important to building a vibrant, big community.
For me, 2020 was a year of meeting the neuroscience community virtually, and of seeing how open science helps researchers, how a culture of collaborative science fosters better practices, and how vital it is to embrace diversity in research. I learned that listening to, understanding, and providing for people's needs is necessary to maintain a collaborative culture and to ensure inclusion in the best possible way. To act on this goal, I recently joined the OHBM Open Science Special Interest Group as an inclusivity officer for the organisation of the 2021 Open Science Room and OHBM Brainhack. In this role, I hope to provide a safe space for people to flourish, open space for underrepresented groups, and encourage initiatives that will enrich the community.
By Rachael Stickland & Nils Muhlert
Professor Helen Mayberg is a pioneer of neuroimaging and neurostimulation for depression. As a behavioral neurologist, she has helped to identify the brain circuits implicated in mood disorders, and has developed and refined effective treatments based on deep brain stimulation. She is a member of the National Academy of Medicine, the American Academy of Arts and Sciences, and the National Academy of Inventors. A founding member of OHBM, she told us about her work, her experiences of seeing impact statements become reality, and holding on to the ‘OHBM train’.
Nils Muhlert (NM): I'm joined today by Professor Helen Mayberg, a professor of Neurology at Mount Sinai, as part of the OHBM oral history initiative. First, can you tell us how and why you became interested in neuroimaging?
Helen Mayberg (HM): I was a neurology resident in the early 80s. Imaging was in its infancy. In medical school, in the late 70s, we had our first CT scanner. MRI was relatively new during my residency training at Columbia, and it was an important diagnostic tool. I planned to train in behavioral neurology in Boston with Norman Geschwind. But in my last year of residency, he passed away suddenly, so I needed a change of plan.
My change in direction to imaging as a focus for further training was the result of a lucky coincidence. One of the first-year neurosurgery residents on my team had just come to New York from Baltimore. He had written one of the first papers characterizing opiate receptor subtypes in the brain, and told me about this new research imaging method being developed at Hopkins where you could image chemistry in living people using positron emission tomography. I had always been interested in neurochemical abnormalities in psychiatric disorders, but there was no way to study that directly in humans. Despite my intense interest in severe mental illnesses, I didn’t find the training in psychiatry to be a good fit for me, so I pursued my interest in behavior via neurology training. It wasn’t a perfect fit, but neurology seemed a better choice for clinical training than psychiatry, where I just never felt comfortable with the vernacular. While it was still a stretch to understand chemical mechanisms of behavioral disorders, it did provide a foundation for thinking about structure-function relationships in the brain, an approach that really wasn't yet applied to psychiatric syndromes. So suddenly, here was this new scanner that might provide a way to do what I wanted: assess regional chemical abnormalities in patients with mental illness.
I started as a research fellow at Hopkins in the Nuclear Medicine Department, learning PET scanning in 1985. I was learning basic methods to map and model various neuroreceptor systems - mostly opiate receptors, but with some projects involving dopamine and importantly serotonin systems. The lab I was a part of did little to no behavioral mapping studies; it was a dedicated chemical neuroimaging lab. There was very little work on blood flow or glucose metabolism except as ancillary maps for receptor studies. At the same time, in LA, John Mazziotta, and Mike Phelps were working with glucose metabolism to map abnormalities in a variety of neurological disorders. At Wash-U, Marcus Raichle, Peter Fox, Mark Mintun and their colleagues were developing methods for behavioral mapping using cerebral blood flow. Nora Volkow was using various methods at the Brookhaven Labs. There was a relatively small group of teams with PET scanners worldwide that developed specific niches of expertise using this technology.
Whilst working within a chemical mapping group, I was reading about other methods, and as my work evolved it was clear that my questions required tools that I didn’t have access to. Because there were so few groups, we were a very small and interactive community. Before OHBM, this type of research was discussed at the Cerebral Blood Flow and Metabolism meetings. The neurologists, like John Mazziotta, Marcus Raichle and Richard Frackowiak, would go to the Neurology annual meeting, and we’d have our own imaging sessions there. You’d also see different imagers at various meetings: the Society for Neuroscience, the American College of Neuropsychopharmacology, the Society for Nuclear Medicine. The PET community was tight and continued to grow, but it was relatively insular, as there were few research PET scanners, since you needed a cyclotron.
There was a point where the questions I wanted to ask about depression required methods that were not the focus of our lab. Peter Fox was setting up a new research imaging center in San Antonio; I had met him at a Cerebral Blood Flow meeting and heard what he was up to. It was a method that I didn't know but that would provide a set of tools to take my depression research in a new and potentially interesting direction.
In 1991, Peter invited me to be a founding member of this fledgling new center, so I moved from Baltimore to San Antonio in Texas. I was part of his team, setting up a research lab as part of the brain imaging group. We were very small, and focused on PET scanning but I was no longer doing any neuroreceptor mapping studies. By this point, I'd moved to studying depression exclusively. I worked as a clinical neurologist but did my research with collaborators in psychiatry using imaging.
In the early to mid nineties, The Research Imaging Center (RIC) was host to the original Human Brain Mapping workshops. As part of these workshops, there were ongoing discussions of how to develop common platforms to share data. The RIC team started compiling spreadsheets of coordinates of brain activation findings from the literature. These were the early days of imaging meta-analyses, done by hand. While the work was slightly peripheral to my own studies, I couldn't help but become involved. So I received an education in the world of mapping beyond my own area and saw a style of thinking that was way ahead of its time. We now take so much for granted with our contemporary approach to big data and data sharing. It was laborious work in those days.
NM: It sounds like a very exciting time - certainly within that lab.
HM: It’s funny. As a scientist, you can never know when you're in the middle of something important, or whether anything you're doing will have long-term traction. We delude ourselves, write grants, make statements about the potential impact and how significant we think our work is. In fact, it's only when looking backwards that we can actually see how it all evolved. It's hard sometimes to be reminded that during my time in medical school, CT scanning was new. We studied dead brains, we had anatomical atlases, we had white matter maps from studies in nonhuman primates that we used to mentalize how brain regions were connected to each other. We mentalized a connectome in our heads, by piecing together these various studies. We didn’t yet have access to multimodal mapping. I don’t think I could have even conceptualized such methods.
What made imaging a paradigm shift was that you could directly test your hypotheses rather than simply make interpretative inferences from pathology or animal models of behavior. Not only did technology allow visualization of the brain in action, but with time the choice of methods greatly expanded. My neurology professor in medical school said: “pick a topic, not a method.” At that time I was learning PET scanning. He said, “you'll reach a point where your current methodology no longer allows you to answer your question. So you’ll learn new methods and tailor your questions as methods evolve.” As a clinician and not a technical or methods developer, that was incredibly important advice - don’t just learn a method for method’s sake, but learn a method in service to your clinical question. That's been my approach since then. So what I know the most about, PET scanning, is something I don't even do much anymore. But the use of imaging as my experimental method has never changed, I have just learned to adopt new imaging tools to best address my next depression study. One can now pick and choose, and with that range of choices you can really go deep to answer your own questions.
NM: On that note, one highlight of your career is that you’ve helped identify the role of Brodmann area 25 in basic drives affected in people with depression. When did you start focusing on this area and how did that come about? Was it as a result of these meta-analyses?
HM: We didn’t go looking for it. We just followed the data and there it was. But it wasn’t on our radar with our early studies, which started with examining post-synaptic dopamine and serotonin receptors in post-stroke depression patients. These studies were complemented by studies of opiate receptor changes following electroconvulsive therapy as a model of epilepsy, and by studies of resting state abnormalities in basal ganglia disorder patients with and without depression. Our goal was to test the hypothesis that, regardless of etiology, there was a common set of regions affected in patients with depression. We were working to define a depression circuit. As a common pattern of limbic-cortical abnormalities emerged, we felt it was reasonable to move to study primary depression where clinical heterogeneity was well described. I presented that set of findings in 1989 at one of the Cerebral Blood Flow & Metabolism meetings. We did the next natural experiment: how the abnormalities change with treatment.
For the most part, we found what everybody else was finding in depression, low metabolism in the frontal lobes. When we treated people, the frontal activity normalized--it increased. By using the statistical methods Peter Fox had brought to San Antonio from Wash U-- change distribution analysis-- combined with new computer algorithms and higher resolution scans, we could further examine the whole brain instead of predefined regions of interest. I remember analyzing a specific set of data of depressed patients studied before and after successful antidepressant treatment. I was looking at the statistical change maps and I figured there must be some sort of misregistration artifact. I kept looking at the pictures. It was midnight and, all of a sudden, I realized the ventral parts of the brain were showing decreased activity while dorsal parts of the brain were showing increases. I squinted my eyes and looked at what I was seeing: there were brain regions that weren't abnormal at baseline that showed decreasing activity as people got better. When I looked up the brain regions, I found we were in the subgenual cingulate. I had my Talairach atlas, and I'm looking up with the ruler where I am - it was all done by hand, and I thought ‘what the hell is this BA25?’
Actually, the Talairach atlas I had misidentified the region. So people thought for many years that it wasn't really in Area 25, and I say look, I'm just following Talairach. I went to see who else had seen anything in this region. I found changes in this region in a study by Jose Pardo on mood induction. We tried looking at correlations, to see which part of this multi-node network went with which symptoms of depression, and I couldn't separate out the mood from the attention symptoms with the data I had. Peter suggested “Let's do a blood flow scan following a mood induction.” The intention was that if we induced a negative mood in healthy volunteers we would determine if you could dissociate the presumed limbic-emotional regions from the cognitive cortical regions in this presumed depression network. To our surprise, mood induction did indeed reveal limbic activations and they were in Area 25, but it also decreased activity in the prefrontal cortex--the same regions identified in the depressed patients. Area 25 and the prefrontal cortex were inversely correlated with each other in both experiments--depression recovery mapped over six weeks and sad mood induction over 2 minutes with the magnitude of behavioral changes correlating with both regions. In essence, our hypothesis was just wrong; we couldn’t induce change solely in limbic regions by focusing on mood. The two systems, limbic and cortical, could not be separated. Obviously a simplistic notion if viewed through today’s use of graph theory and dynamical modeling approaches to time series data. But at that time, these simple experiments using blood flow and glucose metabolism PET gave us one of our most important insights--these regions were yoked and worked as a synchronized limbic-cortical circuit to mediate the interaction of mood and cognition.
This pattern of reciprocal changes involving midline and lateral cortex regions was new. Today, we would look at this pattern and immediately see the default mode and executive networks displaying their typical anti-correlation with each other. But then, in the mid 90s, that concept was just developing. We looked at it thinking ‘what are these regions and what do they do?’
Area 25 had very little written about it. It was described in the animal literature as a visceral motor outflow area and not necessarily a mood area. You could even find references to its homologue in lizards, as it's a very old, highly conserved, part of the brain. I would get into fights with rodent anatomists by asking about the rodent equivalent: “is it infralimbic? Is it prelimbic?” Lots of opinions as to whether or not it is even a good idea to attempt to match rodent and human prefrontal cortices, if one is really interested in studying depression, a uniquely human clinical construct. That was sort of a turning point; if I wanted answers I needed to really learn to read the tract tracing studies done in nonhuman primates and learn the connections between regions by looking at combinations of anterograde and retrograde studies. Little did I know that I was laying the foundation for future work that would rely on maps of structural connectivity defined using DTI.
NM: And then you later moved into intervention studies, where you used targeted deep brain stimulation (DBS) of region BA25 to see how it affects symptoms. What was that like - setting that up and seeing the results from those first studies?
HM: In all honesty, I became an interventionist almost by accident. I wasn’t a trialist; I merely used treatments as probes to better understand depression and treatment mechanisms. I spent the first 20 years basically trying to prove that depression was a circuit disorder, first by identifying the nodes, and then the connections and making inferences about causal relationships using changes with various kinds of treatments. There came a point in Toronto where findings in Area 25 were so consistent across all of our treatment studies that we hypothesized that if you didn't downregulate this region then people didn't get better. It seemed to be really at the center of the antidepressant treatment response.
The idea to target Area 25 with DBS for treatment resistant patients was highly influenced by the neurosurgical literature and the evolution of ablation to DBS for Parkinson’s disease. The leading theory about DBS mechanisms at the time posited that high frequency stimulation resulted in a local depolarization block. We had consistently demonstrated that effective antidepressants decreased or blocked activity in Area 25, and that if you couldn’t block it, you didn’t get better. We followed that logic to hypothesize that if you can't talk or drug or shock it down, maybe you could block it with targeted stimulation delivered very precisely at this node in the network.
Everything I knew about connectivity (even though at that point, there were no tractography tools available to us, so implied connectivity) was that if you downregulated a region such as BA25, maybe you would also get disinhibition of regions it was connected to. The DBS technology at this point, in 2002 or so, was well established and readily available. I had a surgeon that was willing to test my hypothesis. It was actually very much an imaging-driven idea. If I hadn't been doing imaging, would I have even thought about it? Probably not, but I was in the right place at the right time. We had the maps that pointed to a putative DBS target for treatment resistant depression, a surgeon with extensive experience with DBS for Parkinson’s disease and a team of investigators willing to learn about DBS and manage this group of extremely ill patients with this novel intervention. It was in some ways a natural next step for our ongoing studies. So that's why we did it - because we could. But the logic was basically built on that first mood induction depression recovery finding.
NM: It's as we discussed before - the impact statement becoming true over time, where you think ‘what areas are involved?’, then ‘what can you do about it?’ And here you've got a great example.
You were involved in the creation of OHBM. What was your role?
HM: Well, mainly I was involved because I showed up. I attended the first meeting in Paris, which was a natural extension of the Cerebral Blood Flow meetings I had been attending since starting my post-doc in PET imaging. With time, I became an officer; I was elected secretary in 2000, and served from 2000 to 2003. It's interesting that many of the originators of OHBM were clinician-scientists. Several of the key drivers - Mazziotta, Fox, and Evans all in North America - had a grant together, and joined forces with many key thought leaders and teams worldwide to make it happen. A shared vision. Again, being in San Antonio with Peter, I had a front row seat to the evolution of the organization. Timing and opportunity are a common theme here.
When I think back, how could anybody not participate? It was happening all around me. So you get on the train and hold on and see where the journey takes you. We all had a ringside seat, and saw an idea grow and mature. Like any diverse scientific community, building an infrastructure that requires not just expertise but buy-in and cooperativity is challenging. But like any democracy, there was a lot of trial and error and compromise seeing what worked, what the community, the stakeholders wanted; it evolved by taking great ideas and giving them space to evolve and mature. What was great was it was very inclusive - methodologists, clinicians, statisticians, engineers, all topics, all scan types, multimodal approaches, new science, courses, and great opportunities for networking. The multinational and multidisciplinary collaboration that established OHBM has continued to define it and foster its unique position among imaging meetings.
NM: And what have you found most rewarding about your experience in holding on to that train with OHBM over the years?
HM: Well, I've had the opportunity to collaborate with people world-wide and adopt a multimodal imaging approach to our team’s clinically-oriented research questions. Maximizing use of novel technologies is at the core of our work--with critical reliance on state-of-the-art engineering and statistics. OHBM is where I can always count on seeing the newest technical and analytic advances and where discussion is scholarly and collegial. Our own work is quite iterative, so it’s useful to see a new method used by others before jumping in ourselves. OHBM provides an important sounding board for our ideas and I have always found the meetings personally and scientifically rewarding.
OHBM has evolved beyond anything any of us could have imagined. Technological advances have been the critical catalyst but applications of the technologies have been important drivers. Perhaps I am biased, but imaging in one way or another has been at the center of many of the advances in neuroscience over the last 50 years.
NM: And what do you see as the most promising things that are coming out now?
HM: Like with anything, progress is not linear. Sometimes it seems like it's three steps forward, and then one step backward or sometimes even sideways. I'm reminded of one of the first imaging meetings I attended prior to OHBM where we would sit and listen to thought leaders debate the advantages of their particular methodology. It was a curious sort of testosterone storm of statistical one-upmanship. It was as though one method had to defeat all others.
It has been fascinating to be part of our maturation as a field. Where our focus is on matching methods and technologies to a specific category of question rather than assuming one size fits all. How could it be otherwise? That's the natural evolution of any field… the first thing is you don't believe it, the second thing is it's obvious, then it evolves to be much richer because everybody starts to dig into working out the details.
Right now, I think we're going through a stage where there's so much data that we don't know how to parse it. We're at a time where doing experiments that people care about is expensive and hard. Early on, the focus was on ensuring the methods were valid and reliable. Like any broad field, people have different interests. I am grateful to know that there are people pushing the limits of the technologies and those using it to understand basic principles of brain function; the big data consortiums with multimodal data archives for general use are priceless resources for the community. As a depression researcher, I want to exploit the technology; ‘If I take on learning a new method, I need to decide if it's worth it.’ Then, ‘How is it going to help us test our next set of hypotheses?’ I don't think it's just because you're a physician that you want to do that. Everything is hard and time consuming - how do you have meaning in the way you spend your time, scientifically?
We're back to a plateau where we're “fighting” about ‘what's the right way to do it? Do we believe anything we know? Is it all an artifact? How do we replicate?’. We're learning that the brain is very adaptive. And when we think we control an experiment, we don't control it as well as we thought we did.
We're looking for big signals. We've gone from working with deforming the brain into a common space (which was to increase signal to noise and you didn't believe it until you saw it robustly across subjects) to trying to understand inter-individual variability (if you don't understand the individual, you know nothing). You can get dizzy, realizing these are natural evolutions. Everybody's right, just not at the same time.
This is the beauty of the OHBM culture: you can develop tools to answer the question in the way you want to. So for me, as a very specific example, we guessed based on a blob on a PET scan, where Area 25 was, stimulated with an invasive implant in that approximate spot, and made people better. It worked. And then we’ve spent the next 15 years trying to figure out what exactly we did, how to do it better, and why it worked.
Imaging has remained a key method towards these goals. For instance, Ki Sueng Choi in the lab had identified the critical white matter bundles that mediate the DBS treatment effects and developed tractography methods to reliably define the optimal surgical target in any individual patient. I love and use the data from the human connectome to test ideas, but at some point, we need to make a decision about an individual patient's brain. Where is the spot? Can it be visualized reliably? Can we hit it with millimeter precision? I’ve got to make a map for the surgeon to put the electrode where we say with a high level of accuracy. And then be able to show prospectively, that what we wanted to do, is in fact what we did.
I am envious, as we all are, of the amazing advances in circuit mapping techniques using cell-specific labeling, such as optogenetics, and CLARITY. While I can learn a lot from these exquisitely detailed maps in rodents and more recently non-human primates, I also need reliable lower resolution methods because I don't have the luxury of single cell stimulation. We inject a large amount of current into a pretty sizable brain area that contains a mix of many cell types and passing fibers. It remains a real mystery how such nonspecific stimulation works, but it does. Obviously, more advanced methods will evolve. But for now, we work with what we have. That said, we’re always looking to see what new methods people are developing. When I was a kid, I used to hang out with my uncle who was a biochemist and nuclear medicine physician in the pre-PET era. He used low resolution detectors to measure radioactively tagged chemicals in the brain, often injected through the ventricle during surgery and scanned later. The images were horrible, like looking at a fuzzy bowl of soup. You could measure changes in brain concentration of various compounds, but without the spatial detail. Still, it was really amazing we could do it at all. Now we're working to improve on 0.8 mm isotropic voxels. Looking to push the envelope further. All of this change in less than 50 years; I am just sorry my uncle missed all of this. He would have loved it.
NM: I could almost end there but one last question about your personal experiences attending OHBM. Are there any moments that stand out for you?
HM: There are so many. To hear the giants of imaging give the Talairach lecture is always a thrill. But I think like any meeting it’s the camaraderie, the openness of students who read your papers and want to get your opinion on their work; to both meet old and new heroes and to maintain relationships with colleagues over 30 years; to be able to sit down and talk or just hang out. I always enjoyed the grandeur of the big lectures and the rigor of the science as well. But it has always been the relaxed atmosphere that catalyzes new ideas and new collaborations. I can remember meeting the Oxford team -- Paul Matthews and Heidi Johansen-Berg -- and that short chat changed the course of my fledgling tractography work. I remember making sure to arrange meetings around World Cup matches; resting up for the dance party night; figuring out the train to Sendai. Just so many big and small wonderful memories.
NM: Professor Mayberg, I'd like to thank you very much for joining us. It was a fascinating insight into your experiences with OHBM.
HM: Well, thanks so much for including me. It's really an honor.
Ilona Lipp (Lead editor):
With the masked face and being on the beach in December in Connecticut (at 15°C), I feel it pretty much sums up the pandemic- and climate-change-ridden year that was 2020!
In the coming year I look forward to more enlightening interviews as well as exploring new avenues for lay media blog posts with members of the Communication Committee.
Nils Muhlert (ComCom Chair):
This year has taught many of us a lot about ourselves. Personally I found out that I’m terrible at baking sourdough bread. Alongside all the awfulness there have been some real highlights. Inviting new people to join the blogteam and seeing their first contributions is definitely up there. As is passing the baton of blogteam lead and ComCom chair to Ilona Lipp; the OnDemand tutorials that she’s been leading have become a great resource for those wanting expert-led introductions to the many flavours of MRI. I hugely enjoyed interviewing some of the original founders of OHBM, including John Mazziotta and Helen Mayberg. More to come next year! As to next year, the prospect of an effective vaccine and a gradual return to an upgraded normality are certainly beacons of hope. With a bit of luck I’ll see many of you again at OHBM2022 in Glasgow, if not virtually at OHBM2021. Have a good holiday all, and hope you come back rested and recharged.
By Valentina Borghesani, Elvisha Dhamala, Niall Duncan, Marie-Eve Hoeppli, and Michele Veldsman, on behalf of the SEA-SIG
This month, OHBM announced the formation of a new Special Interest Group that will tackle sustainability and environmental issues around brain imaging.
Here, we talk with the Sustainability & Environment Action (SEA) SIG Chair Charlotte Rae to hear more about what the new SIG will seek to achieve.
Why do we need a new Sustainability & Environment SIG?
Awareness of the environmental impact of human activity has never been higher, and there is now strong international consensus that we urgently need rapid action to tackle multiple crises, including dangerous climate change and irreversible ecosystem degradation. Neuroimaging research activity plays a part in these crises - from liquid helium extracted through fossil fuel production, to the energy usage of big data. We all have a responsibility - especially as professional scientists - to address these issues and move towards a sustainable future.
We have set up the new SIG so that we can have a community conversation around how to enact the changes that are required. For example, we plan to do some work around measuring and assessing what the environmental impact of a neuroimaging workflow is, from data acquisition to data analysis and even publication. One back-of-the-envelope calculation puts the carbon footprint of a single MRI scan session at 160 kg, and we know that server activity has a big impact - especially resource hungry approaches such as machine learning. Once we’ve quantified the size of the problem, we aim to provide a set of guidelines and recommendations for sustainable neuroimaging practices.
We are also really keen to work together with Council, the Executive Office, and colleagues across our community to decarbonise the annual meeting. There is growing recognition that 3000 of us flying across the globe annually isn’t compatible with a safe future on this planet: one transatlantic return trip generates nearly 2 tonnes of carbon dioxide. That’s the size of our annual individual personal ‘carbon budget’ if we’re going to limit warming to the 1.5°C set by the Paris Climate Agreement. We need to work up positive and practical alternatives that the whole of our neuroimaging community can get on board with, whether that’s ‘hub-and-spoke’ models, where you meet colleagues locally on your own continent, supporting hybrid in-person and online interactions, or reducing meeting frequency.
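The rough arithmetic behind these figures can be made explicit. This is a minimal sketch using only the approximate numbers quoted above (the 160 kg scan estimate, ~2 tonnes per transatlantic return flight, a ~2-tonne annual personal budget, and 3000 attendees); none of these are audited or official SEA-SIG values:

```python
# Back-of-the-envelope carbon arithmetic using the article's approximate
# figures; none of these numbers are audited or official.
KG_PER_MRI_SESSION = 160          # rough estimate per scan session
TONNES_PER_RETURN_FLIGHT = 2.0    # one transatlantic return trip
ANNUAL_PERSONAL_BUDGET_T = 2.0    # ~1.5C-compatible yearly budget
ATTENDEES = 3000                  # typical annual-meeting attendance

# Travel footprint if every attendee took one transatlantic return flight
# (an upper-bound illustration, not a measured value).
meeting_travel_t = ATTENDEES * TONNES_PER_RETURN_FLIGHT
print(f"Meeting travel: ~{meeting_travel_t:,.0f} tonnes CO2")

# A single such flight uses up a full year's personal carbon budget.
print(f"Flight / annual budget: {TONNES_PER_RETURN_FLIGHT / ANNUAL_PERSONAL_BUDGET_T:.0%}")

# On these estimates, how many scan sessions equal one flight?
print(f"One flight is roughly {TONNES_PER_RETURN_FLIGHT * 1000 / KG_PER_MRI_SESSION:.1f} scan sessions")
```

Even as an upper bound, the comparison makes the point: the travel footprint of a single in-person meeting dwarfs the per-scan cost, which is why meeting format is the SIG's most visible lever.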
We have a lot of work to do! But our sister SIGs have shown that with international collaboration across our brain imaging community, we can achieve rapid change. The Open Science SIG has changed the way we think about open neuroimaging. The Diversity and Inclusivity Committee, set up in 2016, now has a dedicated symposium slot at every annual meeting. As has already happened for open science and inclusivity, we can aspire to drive rapid uptake of sustainability awareness and action amongst our community too.
How can OHBM members get involved?
We plan to hold regular open SIG ‘community meetings’ where any OHBM member can share their thoughts on what our priority actions should be for the SIG to take forward. This might be decarbonising the annual meeting - such as building on the 2020 and 2021 digital meetings to ensure we don’t simply return to 3000 members creating a huge travel footprint every June post-COVID. Or tackling the question of big data - how can we run our analyses sustainably when server manufacture has a huge ecological impact, and energy to perform computations often still comes from fossil fuels?
Once we know our priorities for action, we want to establish SEA-SIG working groups so that we don’t just ‘talk the talk’ about what the problems are, but ‘walk the walk’ to figure out the changes that need to happen. Ultimately, we want to be able to produce some guidance as to how neuroimagers can go about greening our research practices. We need OHBM members with expertise across MRI physics, computing, and analysis practices to all get involved!
It's also crucial that we have lots of input from early career researchers. Our current generation of trainees are going to have to live with the consequences of dangerous climate change for much of their lives - it is already happening, and is only going to get worse. We hope we can amplify the voices of ECRs, who we know often feel very strongly that rapid urgent action is necessary, but who are not always heeded by those in power.
If you would like to get involved with any of our activities or receive updates about what we’ve been doing then contact us at email@example.com.
You are also most welcome to come to our first community meeting, on Tuesday, 15th December via Zoom (with two sessions: 09.00 UTC and 18.00 UTC, to accommodate colleagues in different timezones). We will outline what the climate crisis and ecological emergency mean for us as neuroimagers, before we collaborate in small groups to determine priority aims for the SIG to pursue. Register to attend here: https://forms.gle/vVF3ydnJCyArobdj6
We are also looking for colleagues to join our Committee, in the posts of Webmaster, and Social Media officer. Please contact us on firstname.lastname@example.org if you are interested in taking on either of these roles.
Finally, you can follow us on Twitter, @OhbmEnvironment.
We hope to see you at a SEA-SIG community meeting soon!
By: Rosanna Olsen, Amanpreet Badhwar, Valentina Borghesani, Lee Jollans, Hajer Nakua, Laura Marzetti, Nils Muhlert, Pradeep Reddy Raamana, Tilak Ratnanather, and Lucina Uddin on behalf of the OHBM Diversity & Inclusivity Committee
In June 2020, OHBM made a statement condemning the murders of George Floyd, Breonna Taylor, and Ahmaud Arbery as well as ongoing actions of police brutality against Black Americans and underrepresented minorities around the world. During the conversations surrounding these events, there was a public recognition of the lack of support for Black and minority communities. We realized that at OHBM we have not done enough to support underrepresented minorities in science, and that we need to take concrete actions to make our organization a welcome and safe environment that educates and supports each and every member of our group.
To achieve this goal, we need to gain a better understanding of the experience of OHBM members and their sense of belonging within the organization. Hence, the Diversity and Inclusion Committee (DIC), with support from the OHBM Council, will perform a survey to learn how welcome and comfortable members feel within the organization, at the Annual Meeting, and other satellite events. This survey will also allow for anonymous reports of any experiences of discrimination based on race/ethnicity, gender, sexual orientation, religion, disability, or affiliation with any other marginalized group.
The DIC has developed an anonymous two-part survey: the “Survey of Member Views on Inclusivity at OHBM.” The first part of this survey will collect crucial information from OHBM membership and will eventually become a permanent resource for anonymous feedback for all of our activities. Survey responses will identify areas of concern, flag problems, and identify actions that OHBM can then work to improve. The second part of this survey will collect demographics and other identification characteristics of our membership. If you do not want your answers to this part of the survey linked to the first part, there will be a place to indicate this in the survey itself.
The Survey of Member Views on Inclusivity at OHBM will be sent to the OHBM members in December 2020. The survey will take around 10 minutes to complete, and your input will be incredibly valuable, as we aim for a complete picture of our membership’s unique experiences. A high response rate from our membership will provide us with a more representative picture of our diverse attributes and needs, which will provide a better basis for improving our organization. The survey will give members an opportunity to provide feedback regarding both what *is* and what *is not* working at OHBM, and what you think should be done to make OHBM more inclusive, for everyone. We also welcome any suggestions on how to improve our survey for subsequent data collection efforts.
by Claude Bajada
The GDPR is a new(ish) piece of European Union legislation that regulates the processing of personal data when the person processing or controlling the data is in the EU, even if the actual processing occurs outside of the EU. Further, the GDPR also sometimes regulates the processing of personal data of people who are in the EU, even if the persons doing the processing are outside of the EU.
How does this affect neuroimaging? We sit down with neuroimaging expert and Open Brain Consent co-author Dr Cyril Pernet (CP) and Technology law expert Dr Mireille Caruana (MC) to discuss the implications of this law on our work.
The article alternates between the terms “participants” and “data subjects,” since “data subject” is the term used in the GDPR; for the purposes of this article you can treat them as equivalent.
What follows is a summary of our conversation, edited for conciseness and clarity.
Who are our experts?
CP: I do a lot of method development in neuroimaging and in a clinical context. Data sharing is something that I have always been happy to work towards. Data sharing is like code sharing, we need it for good science. With the advent of the GDPR, we've got some extra constraints on what to share and how to share.
In the clinical context, the typical thing is to say is: “Oh, you know, we have patients’ data, therefore, privacy issues,” and people don't even try to share. This really annoys me because there are ways we can do it. It doesn't have to be completely open on the web so that everybody can download it. I've been working on all sorts of open science related projects and the Open Brain Consent is part of that line of work.
MC: I am the head of the Media, Communications and Technology Law Department within the Faculty of Laws at the University of Malta and my research has, since before the GDPR, focused on privacy and data protection issues. I would not contradict you that the GDPR is a relatively new law that has, from the start, been subject to a lot of uncertainty and difficulty in implementation and application. It's well worth working our way through the legislation to seek correct interpretations of it.
Why is it important to discuss GDPR across disciplines?
CP: We are scientists, when we read the GDPR text, we don’t understand the implications. We do not know how judges will interpret the law. This means that we need lawyers to guide us on how to interpret what is written there.
MC: The problem is that in many instances there aren’t clear answers. In fact, while a lawyer may give legal advice, it may eventually be contradicted by a court. Nevertheless, scientists should behave as diligently and carefully as possible. If the perception of the GDPR ends up restricting research or not allowing researchers to do their work, that's a problem. It shouldn't be that way. But achieving this balance is very difficult.
Anonymous data are not governed by the GDPR. Do you think there's anything within neuroimaging that can be considered anonymous?
CP: In my opinion, one of the key points in the GDPR that is relevant to neuroimaging is that neuroimagers are able to single out individuals from datasets, which makes the data identifiable. And I'm not just talking about brain structure data; I am also talking about EEG data, MEG data, etc. With connectome matrices and a few tasks you can single out individuals, so we can consider that any imaging data should be treated as identifiable. Others disagree with me and argue that singling out is not strictly identifiability, while I contend the opposite because the GDPR indicates that singling out is a prerequisite to identification.
This is a key difference between North American legislation and the GDPR. While North America differentiates between anonymised data, pseudonymised data and identifiable data, the GDPR only distinguishes between anonymised data and identifiable data. Pseudonymisation is just a process. Data can go through that process without changing their status as identifiable. That is, we can remove the face, ID, etc., but brain imaging data remain identifiable, in that we can potentially distinguish between individuals and, even if we don’t have the metadata, link those data to someone by name.
Can we have an example?
CP: Imagine, for instance, that we have two independent datasets consisting of connectome matrices and tasks. There may be individuals who have participated in both of those datasets. So we can now think about linking them, and studies have indeed shown that it is possible to say that the same individual belongs to both datasets, because of the way connectomes look. Not only can we single out people within datasets, but we can also link datasets, and possibly, by adding associated metadata, get even closer to identifying that person in the real world.
Are there any proposed solutions for this problem?
CP: The solutions that we have come up with are detailed in Open Brain Consent and involve two consent forms as well as a data user agreement for data collected in the EU. Of the two consent forms, one is the consent for the study and the other one is consent for people to share their data. The way you can legally share this is through a data user agreement, not through a licence, which means we ‘control’ who has access and to a lesser extent what can be done to the data. Now the control can be done in a way where people register to use specific datasets. For example, the Netherlands have a good system because every researcher is registered on a database. So for instance, if you log into the system of a particular institute, they know who you are, which institution you are affiliated with, and you can just download data, even if you're not part of the data-holding institute. This is possible because they can identify you. You can sign the data user agreement with a simple click.
A user agreement also helps researchers share data outside of the European Union. The GDPR refers to this as “standard contractual clauses.” This allows you to get to a point where non-EU researchers can download the data and become the data controller. With the data user agreement, the downloader agrees with the terms of the GDPR. This way you can share data anywhere in the world, even outside the EU. But you cannot just put your data up on OpenNeuro. This is important since OpenNeuro servers reside in the US, and the US is special because it is not considered to be a “safe country” by the EU. Institutions can sign an agreement with the EU to become a safe repository. But that also means OpenNeuro would have to change their infrastructure to support data user agreements.
Where does consent come into all of this? Could I just get consent from my participant to share all of my data in the US, and the rest of the world?
MC: In the GDPR, sharing or transferring data is considered to be a type of processing. Let's forget about how the original data were collected and focus on the sharing of these data. In this case, you should still have a legal basis for processing in the GDPR. I am also assuming that they're sensitive personal data, since I am assuming that they say something about an individual’s health status.
Article 9 of the GDPR has a legal basis specifically for research data processing. So perhaps you don't need to rely on consent to share data because there is another legal basis which speaks about the necessity for scientific research. However, this legal basis is somewhat unclear in its application because it speaks about individual member states laying down a law that provides appropriate safeguards.
With regard to data transfers to a third country such as the US, chapter 5 of the GDPR concerns transfers of personal data to third countries or international organisations. According to Article 45, transfer of personal data to a third country may take place where the EU Commission has decided that the third country, or one or more specified sectors within that third country, ensures an adequate level of protection. Such a transfer does not require any specific authorisation. In the absence of an adequacy decision, a controller or processor may transfer personal data to a third country only if the controller or processor has provided appropriate safeguards, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available.
Under Article 49, in the absence of an adequacy decision, or of appropriate safeguards, a transfer or a set of transfers of personal data to a third country may take place only on one of a set of stated conditions, which include that “the data subject has explicitly consented to the proposed transfer, after having been informed of the possible risks of such transfers for the data subject due to the absence of an adequacy decision and appropriate safeguards”.
How do we deal with requests for deletion of data?
MC: Article 17, GDPR sub article 2 states that “Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data.” It talks about reasonable steps that would, by way of good practice, mean a record of people who accessed the data and contacting them to inform them about the request.
How long can we store data for?
CP: You are required to set a time frame within which you must review the need for continued storage of the data. However, as long as the data remain necessary, they can be kept indefinitely.
Is it true that under the GDPR, legally, you're not allowed to reuse your own data in your own lab to answer different questions than what it was originally collected for?
MC: The GDPR speaks about purpose limitation (“personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes”) and ‘specific’ consent (“‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes…”). So ideally, I think even ethically, your research participants should understand how you're going to use their personal data; but no, research is treated in a particular manner under the GDPR. Research is not considered to be incompatible with the original purpose for data collection (“further processing for ... scientific ... research purposes ... shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes”).
Furthermore, recital 33 of the GDPR clarifies “It is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research. Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose.” So, legally, you may be covered, even though the debate surrounding so-called ‘broad consent’ is not conclusive (cf. for example the Article 29 Working Party’s Guidelines on consent under Regulation 2016/679).
CP: In my opinion, the purpose “research” is not specific enough. But if you say the purpose is “memory”, that's too specific, because then you could not even use a T1w image to create a template. So we came up with a compromise. If you look at the Open Brain Consent GDPR edition, our solution is to say that the purpose of conducting the study is one thing, but also that the data may be used for future research projects in the field of medicine and cognitive neuroscience, which strikes a balance.
MC: Article 5 (1) (b) of the GDPR states that “personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for ... scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes”
This gives researchers quite a bit of flexibility. This is maybe one area where law and ethics overlap. The debate within research on genetic data that I have come across when dealing with biobanks, is that people speak of dynamic consent and they want to use dynamic consent to have more granular consent for specific projects. The thinking behind this is that certain people might object morally to particular research. So of course, you're being more respectful to the data subject if you don't use the data in ways that they would not approve of. Specific, granular consent is in line with the spirit of GDPR, but I don't think that the GDPR excludes such broader consent for scientific purposes.
The GDPR requires data minimisation. How do you guarantee that we don't collect data that are unnecessary?
CP: This is something that we also struggled with. On one hand, you would want to be able to collect participants' data and typically, in my lab, we go through a bunch of health questionnaires, handedness, medical history, language, etc., because, of course, you can then reuse these data in a larger dataset. You've got 100 different studies, but for each participant, you have the common six questions, and you can do a nice big analysis. You could possibly connect these studies and perform richer analyses. What is the balance? We know that this may be the only way to aggregate enough data from multiple studies to then have a study that is powerful enough to look at the effect of some type of medication.
MC: Unfortunately, I think that this is an outstanding difficulty or problem because as a researcher you may not know exactly what you're looking for; for example, what analysing the patterns may reveal. It is a known tension in the GDPR that may also go against the purpose specification principle. So I think it's a tension that is real. I would however always emphasise in such contexts that the sole purpose for processing these data is in fact scientific research, that there may be uncertainty associated with research, but that there is also an important public good to be gained from such research that affects the balance to be achieved between the different competing interests, including the privacy and data protection rights of the data subjects.
What are the next steps?
CP: I think the next steps are twofold. One is for neuroimagers to engage with their own institutional repositories. We need to work with them and with data protection officers to come up with solutions for sharing data. You need to explain what systems need to be in place and how to implement them. We do have power because we do receive money from funders who often actively ask us to share the data. And it is the university’s job to provide us with the tools to be successful in funding applications and to comply with funders.
The other aspect is more ambitious. There are systems that work under the architecture of any repository to index them, so that for instance, every university in Europe could very well have their information connected. But this would necessitate that all universities cooperate with each other. It's more like a dream.
I am also very keen on making sure that everyone reading this interview knows about all the efforts of the Open Brain Consent project. I would like to highlight all of the hard work put in by many, in particular Chris Gorgolewski and Yaroslav Halchenko who started the project, Stephan Heunis and Peer Herholz who organized work on this during the Organization for Human Brain Mapping (OHBM) hackathon, and all the people who helped by sharing their consent forms, experience, and proposed translations (now available in 12 languages) thanks to the COST association support (GliMR2). Note that we are keen on having more people involved, in particular having and sharing more information about how these issues are dealt with in countries from the Global South that are currently under-represented.
You can find more details on the Open Brain Consent website.
Now is the time to submit your nominations for 2021 OHBM Awards. To inspire you, we are highlighting some of the outstanding winners from this year’s meeting.
This year’s annual meeting was unique in many ways. Uncertainty about whether the meeting would happen was followed by a remarkably fast reorganization in order to hold the meeting online with a complex time schedule. One event that was not missing in the program was the traditional award ceremony that recognized the work of individuals who have changed the scientific landscape of human brain mapping.
Inspired by their nomination letters, we honor OHBM 2020 award winners and their achievements:
2020 Early Career Investigator Award Winner: Danielle Bassett
Danielle Bassett received her PhD in physics in 2009 and after only 10 years, she is now a full professor at the University of Pennsylvania and has published over 240 peer-reviewed articles. Her top-cited paper on small-world brain networks has received over 1,800 citations. In addition to the OHBM Early Career Investigator Award, she has also received the Erdős–Rényi Prize in Network Science, a National Science Foundation CAREER award and a MacArthur “genius” fellowship, amongst others.
Danielle Bassett’s laboratory, the Complex Systems Group at the University of Pennsylvania, combines theory and tools from bioengineering, physics, electrical engineering, neurology, psychiatry and philosophy. Her team’s translational, interdisciplinary research has enabled them to explore the human thought process through investigations of how we learn and how this is underpinned by the flexibility of brain network dynamics. Her interdisciplinary approach applies new physically-informed metrics and null models for spatially embedded systems to look at networks at different scales (from cellular to systems) in order to inform clinical medicine and societal interventions. Danielle and her lab also contribute to software packaging and science outreach events.
Danielle gave the opening lecture at OHBM2019 and participated in the mentoring symposium organized by the OHBM student/postdoc special interest group.
From musical notes to neural nodes, you can learn more about Danielle Bassett’s career and aspirations at ScienceMag.
2020 Education in Neuroimaging Award Winner: Robert Savoy
Educational programs are a key part of the success of OHBM. Before the OHBM meeting in 1996 in Boston, Dr. Robert L. Savoy organized an educational workshop on fMRI attended by 600 of the 900 meeting attendees. The success of this inaugural course showed the high demand for educational programs. These have continued annually with the still highly-attended workshops alongside each OHBM meeting. Robert’s very first course was offered at the MGH NMR Center in October of 1994, and it was envisioned that the market for an introductory fMRI course would soon be exhausted. In contrast, the continual advances in fMRI and the general excitement associated with the technology meant that it reached an ever-expanding range of disciplines, increasing the pool of interested students. As the field has grown, so too have Robert’s educational offerings. Since 2007, Robert has organized an annual two-week Multi-Modality course; this has in turn generated another short course on connectivity. Robert is a rare scientist who devotes almost all of his efforts to education. His courses have had a profound impact on the career trajectory of many of our colleagues, including many active and leading members of the functional imaging community around the globe.
A large fraction of the leaders in the field have attended his course – receiving their first instruction on fMRI and neuroimaging there. Peter Bandettini, Ph.D., Director of Functional Magnetic Resonance Imaging Core Facility (FMRIF) collected the following quotes:
2020 Mentor Award Winner: F. Xavier Castellanos
In her nomination letter, Lucina Q. Uddin describes Francisco Xavier Castellanos as “a winner with great mentoring values, guiding his lab members to become independent thinkers and scientists. He is a tireless mentor and teacher. He proposes clear goals with defined timelines and expectations along the way, and he predicts correctly. He shows a clear vision of a career path and of the best opportunities, without imposing a path on new lab members. He is able to teach the art of ‘grantsmanship’, one that every scientist must master. Xavier is always there for his trainees, current and past. Trainees can always count on Xavier to submit a letter of recommendation at a moment’s notice, a great aid when applying for fellowships, grants, and positions as opportunities arise. He is always happy to comply with letter requests, no matter how frequent. He also remains, at every career transition, a sounding board, providing clear-headed, rational and thoughtful advice.”
Lucina mentioned one particular anecdote that represents her experience of being supervised by F. Xavier Castellanos: “One particularly salient example of Xavier’s unconditional and enthusiastic support for his trainees comes to mind. One day, in a conversation with Bharat Biswal, we were tossing around the idea of trying to collect neuroimaging data from a split-brain patient in order to test a theory we had about functional connectivity. Without a fuss, Xavier funded the trip for me and a colleague to fly across the country, collect data from this unique patient, and spend months analyzing the data (though this project was unrelated to any of his grants at that time). This spontaneous trip led to a number of interesting case studies (Nomi et al. 2019, Uddin et al. 2008), and remains one of my favorite Xavier memories. The fact that he has always been enthusiastically supportive of whimsical projects has made science fun over the years.”
What particularly distinguishes Xavier from other senior successful scientists is his generosity, intellectuality and personality. He clearly has had a positive impact on a number of young scientists. Indeed, it is worth noting that three of his previous mentees (Lucina Uddin, Mike Milham, and Daniel Margulies) have received the OHBM Early Career Award. Another example of the way in which Xavier exemplifies the values of collegiality and building community through acknowledgment and recognition is in his authorship practices. He never hesitated to add junior scientists as co-authors on manuscripts, and readily gave up senior authorship positions to his trainees, as he always practices the maxim of giving credit where credit is due.
Xavier has been a proponent of open science since before open science was a thing. His lab was one of the earliest to get involved in grass-roots data sharing initiatives such as the Autism Brain Imaging Data Exchange, the ADHD-200 International Neuroimaging Data-sharing Initiative, and the Enhanced Nathan Kline Institute – Rockland Sample. He acknowledges that so much data are collected, and so many people are needed to analyze them, that he favors giving others the opportunity to use their expertise without worrying about authorship, credit or restrictions. This kind of radical data sharing has inspired countless researchers worldwide, who are beginning to follow a similar philosophy. Xavier’s lab and his pioneering radical data sharing initiatives set the stage for the current climate of open science and collaboration that permeates the field today.
2020 Replication Award Winner: Andre Altmann
Andre received the Replication Award for his paper ‘A comprehensive analysis of methods for assessing polygenic burden on Alzheimer’s disease pathology and risk beyond APOE’, Altmann et al., Brain Communications (16th December).
In this paper, Andre Altmann and colleagues attempted to replicate results from “Polygenic hazard score, amyloid deposition and Alzheimer’s neurodegeneration”, published in early 2019 by Tan et al. The original paper proposed a link between a polygenic hazard score (PHS) and amyloid deposition (from amyloid PET) beyond APOE.
Andre Altmann and colleagues proposed to account for APOE4 status (carrier or not) instead of APOE4 burden (number of copies). Beyond this difference in analysis, Altmann et al. went further and showed that their analysis accounted for APOE4 better than the initial study did. While using subjects from the same database (ADNI), Altmann et al. were not able to replicate the results of Tan et al. (2019).
APOE4 is the strongest common genetic risk factor for sporadic late-onset Alzheimer's disease and is known to be associated with amyloid deposition in the brain. It is therefore important to disentangle the effect of APOE4 from the polygenic hazard score, to avoid correlations of no interest in the results: the APOE4 effect would explain part of the previously observed strong link between the proposed polygenic hazard score (PHS) and amyloid deposition. This reanalysis also calls into question the conclusion of Tan et al. (2019) that PHS influences longitudinal cognitive decline.
Altmann et al. adjusted the linear mixed-effects model, and the replication study shows that small differences in modeling decisions can have a dramatic impact on the results.
This study also rectifies a result that could have had a large impact on the field, as PHS could otherwise have been used for follow-up studies in the Alzheimer's disease community without proper initial support.
2020 Open Science Award Winner: Michael P. Milham
Mike P. Milham’s efforts in open data, open resources and collaborations are numerous. They impact both clinical and basic science neuroimaging communities. In just a decade, starting with the aggregation and publication of the 1000 Functional Connectomes Project (FCP-1000), Mike has:
The above initiatives have had a substantial impact on the neuroscientific community, both in terms of immediate/direct impact (e.g., publications) and sustained/indirect impact (e.g., cultural change).
Mike has been the driving and inspirational force of a host of important open science initiatives that have helped change the landscape of human and non-human primate neuroimaging.
Once again, we congratulate all the OHBM 2020 winners and nominees. We wish them a great year of science.
We hope we have inspired you to look around you and consider which of your mentors, colleagues, trainees, friends and neuroimaging heroes might be an appropriate candidate for one of the 2021 OHBM Awards. The OHBM website has all the details regarding eligibility and the required information for each of the award categories; just select the award by name and there you will find the link to the submission webform. The nominating process is reasonably easy, all online and waiting for your submission. Remember, our ability to inclusively honor members of our diverse community depends directly on you submitting deserving candidates!
Written by: Claude Bajada, Fakhereh Movahedian Attar, Ilona Lipp
Expert reviewers: Adina Wagner, Cyril Pernet
Newbie editors: Yana Dimech, Renzo Torrecuso
This post is about good neuroimaging practices. ‘Practices’ relates to all aspects of conducting research. By ‘good’, we mean beneficial to the field and neuroimaging community - but you’ll see that most of these practices also benefit the individual researcher. Here, we collected a number of tools, tips and tricks to do neuroimaging in the ‘best’ way possible. We aim to provide an overview and answer some questions you may have asked yourself about reproducibility and good neuroimaging practices. As usual, we refer to OHBM On-Demand videos from the educational sessions of previous annual meetings. OHBM has both a special interest group (SIG) for Open Science as well as a Best Practices Committee, where leading brain mappers promote and help implement Open Science and good practices in data analysis and sharing. Both the Open Science SIG and the Best Practices Committee regularly create invaluable resources, such as the annual Hackathon workshops, and the COBIDAS Best Practices in MRI and M/EEG data analysis papers.
Isn’t the main issue in our field reproducibility? Or the lack of it? Should I care about my science being reproducible?
Those are loaded questions. We think we just might not answer them, because you are luring us into a trap that begins with seemingly innocent questions and then drops us down an unending rabbit burrow. There are so many terms to wade through that the novice neuroscientist can easily get lost in this bog!
In his video, Cyril Pernet clarifies the often-used terms ‘repeatability’ and ‘reproducibility’ (from min. 1:07). First, ‘repeatability’ means “simply” that redoing the same analysis with the same data should produce a result identical to the original analysis, which is not as trivial as it seems: the software version and the operating system are variables that can affect the output of your imaging analysis. That, however, is only step one. In his video, David Kennedy (from min. 3:54) highlights that ‘reproducibility’ is really a spectrum. We could use the exact same data with a nominally similar analysis. Or, we may have nominally similar data with the exact same analysis. Or, we may have nominally similar data with a nominally similar analysis. This way we can test the sensitivity and stability of our experiment.
Cyril explaining the different levels of reproducibility.
But this leads back to your question. Scientific findings should generalise. They should first be valid (repeatable) but should also be robust to various permutations of the data and analyses used. There is a great video by Kirstie Whitaker on YouTube that tackles these issues.
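Repeatability in practice starts with recording exactly which software produced a result. A minimal, stdlib-only sketch of such a provenance record follows; the output file name is just an example, not a convention from any of the talks above.

```python
import json
import platform
import sys
from importlib import metadata

def environment_snapshot():
    """Record the interpreter, OS, and installed package versions that
    produced an analysis, so a rerun can be checked against the original."""
    return {
        "python": sys.version,
        "os": platform.platform(),
        "packages": {d.metadata["Name"]: d.version
                     for d in metadata.distributions()
                     if d.metadata["Name"]},
    }

# Write alongside the analysis outputs (file name is hypothetical).
snapshot = environment_snapshot()
with open("provenance.json", "w") as f:
    json.dump(snapshot, f, indent=2, sort_keys=True)
print(sorted(snapshot))  # → ['os', 'packages', 'python']
```

If a rerun on another machine disagrees with the original result, diffing two such snapshots is often the fastest way to find out why.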
The reproducibility crisis is often associated with the field of psychology, is there anything different in the field of human brain mapping?
Ok, so here we are generally talking about the more general “reproducibility”, not just about being robust to permutations. We will assume that researchers have already ensured that their analysis is re-executable.
In 2005, John Ioannidis published a landmark article with the eye-watering title “Why Most Published Research Findings Are False.” If you are interested in understanding why many scientific articles are not reproducible, we strongly recommend reading this article; it is an easy and insightful read. Notice that this article does not even specifically refer to psychology or to neuroimaging; the problem is general to, at least, the wider “medically-related” field.
The article points out that effect sizes in these fields tend to be small and that sample sizes are frequently lower than what would be needed to detect such small effects. In neuroimaging, many steps, much expertise (and often money) are involved in acquiring good data. As a result, our sample sizes tend to be small. Indeed, it was not too long ago that most neuroimaging articles were published on samples of approximately 20 participants. In 2020, studies with several hundred, up to a couple of thousand, participants are becoming more common, but these require a massive investment in resources and tight collaboration between sites.
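The link between effect size and required sample size can be made concrete with a standard back-of-envelope power calculation. This sketch uses the normal approximation for a two-sample comparison (the textbook formula, not anything from the Ioannidis paper itself), with the conventional alpha of 0.05 and power of 0.80.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison with standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Large (0.8), medium (0.5) and small (0.2) effects:
# prints 25, 63 and 393 subjects per group respectively.
for d in (0.8, 0.5, 0.2):
    print(d, n_per_group(d))
```

A "small" effect thus needs roughly 16 times the sample of a "large" one, which is exactly why 20-participant studies can only reliably detect large effects.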
In his video, Cyril provides an overview of cognitive biases that can contribute to the limited reproducibility of neuroscientific research (from min. 7:18). He also explains how the analytical flexibility in neuroimaging research (such as in fMRI analyses) adds an additional level of complexity (from min. 15:59). While papers with hot stories and “positive results” find it much easier to get a home in very high impact journals, the drawbacks of this trend are slowly starting to be recognized. Neuroimaging scientific societies are becoming aware of the importance of reproducible research and are incentivising such work. OHBM has a yearly Replication Award, won by Andre Altmann this year. Also, initiatives such as DORA, the Declaration on Research Assessment, aim to find ways of evaluating research and researchers that go beyond journal impact factors.
Pia Rotshtein discussing the conflict of interest between good science and researcher’s careers.
So what can we do to make neuroimaging research more reproducible?
Well, some things are harder to deal with than others. Running neuroimaging studies is time-consuming and expensive, and there is very little that can be done about that, at least in the short to medium term. One thing we can do is to work towards using robust and valid measures from neuroimaging data. In his video, Xi-Nian Zuo explains how the validity of our measures depends on reliability (from min. 5:40). He introduces reliability indices (the intraclass correlation coefficient) and gives an example of how they can inform the extent to which inter-subject variability (which is often what we are interested in, e.g. when investigating different groups of people or brain-behaviour correlations) exceeds intra-subject variability (which in these cases is unwanted variability in repeated measurements, often caused by measurement noise). He reminds us of this paper pointing out that brain-behaviour correlations are “puzzlingly high”, given the reliability of our cognitive measures and of our imaging measures. From min. 16:20 he goes through a variety of imaging measures and their reliability, and introduces CoRR (min. 21:30), the Consortium for Reliability and Reproducibility. The prerequisite for reliable imaging measures is, of course, sufficient data quality.
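The intraclass correlation coefficient mentioned above can be computed in a few lines. This sketch implements the simplest variant, the one-way random-effects ICC(1,1), from a one-way ANOVA decomposition; the toy data are made up for illustration.

```python
from statistics import mean

def icc_oneway(data):
    """One-way random-effects ICC(1,1): the fraction of total variance that
    is between-subject. data[i][j] = measurement of subject i in session j."""
    n, k = len(data), len(data[0])
    grand = mean(x for row in data for x in row)
    subj_means = [mean(row) for row in data]
    # Between-subject and within-subject mean squares from a one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, subj_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two sessions per subject: stable between-subject differences, tiny
# session-to-session noise, so nearly all variance is between subjects.
reliable = [[10.0, 10.1], [12.0, 11.9], [14.2, 14.0], [16.1, 16.3]]
print(round(icc_oneway(reliable), 3))  # → 0.998
```

An ICC near 1 means inter-subject differences dominate measurement noise, which is the prerequisite for meaningful brain-behaviour correlations; an ICC near 0 means repeated measurements of the same person disagree as much as measurements of different people.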
How do I ensure that my data exhibits sufficient quality?
Quality assurance (QA) and quality control (QC) procedures are put forward to ensure and verify the quality of neuroimaging data, respectively. Although somewhat intertwined, QA and QC are slightly different. QA is process-oriented and aims to boost our confidence in the data via routine system checks, whereas QC is product-oriented and deals with verifying the quality of the final product in the pipeline. In his video, Pradeep Raamana briefly introduces QA and QC and outlines the different QC steps involved in the acquisition of neuroimaging data (from min. 3:47). Visualising and checking your neuroimaging data at all processing stages is absolutely essential. The most important yet basic tool you need is therefore an image viewer that allows simultaneous visualization of the three image planes, and of course, you as the observer! For more specialized QC, Pradeep presents a list of some of the available neuroimaging QC tools per neuroimaging modality here, where he also presents use-cases of some of the tools.
To conduct QC successfully, one needs to be familiar with the common types and sources of artifacts encountered in neuroimaging data. Importantly, we need to keep in mind that QA and QC must be tailored separately to each neuroimaging modality.
In the videos of the ‘Taking Control of Your Neuroimaging Data’ session, some of these procedures are presented. Pradeep introduces common sources of artifacts in anatomical MRI (min. 8:14) and presents some tips and tricks for detecting artifacts in T1-weighted images (min. 19:08). Then, Martina Callaghan presents key metrics to perform scanner QA for functional MRI, emphasising the need to look for subtleties (min. 3:53). Here, the key is to establish whether the system fluctuations inherent in the acquisition procedure and hardware are sufficiently low to allow detection of BOLD-related signal changes in task-based and resting-state functional MRI. Martina Callaghan then presents some of the online (i.e. real-time) QC procedures for functional MRI (min. 17:17).
Esther Kuehn then takes over and introduces artifacts in high-resolution functional MRI acquired at high field strength, with particular emphasis on cortical layer imaging applications, and presents some available means of artifact correction (from the beginning). In her video, Joset Etzel introduces a different aspect of QC for neuroimaging data - dataset QC - and talks about the importance of checklists and standard operating procedures (SOPs).
Dataset QC aims to verify whether a valid dataset (i.e. one that has already passed the various data QC steps) is also usable by different people at different times in different places; intuitive data organisation alone is not sufficient for this. Finally, in his video, Alexander Leemans introduces common artifacts in diffusion MRI, presents strategies for checking data quality, points out common errors in this checking, and shows how to correct artifacts.
I’ve got so much data, how do I organise it?
Lots of neuroimaging data are acquired all over the world, and the resulting datasets are organized in different ways according to the personal preferences of the users or the labs. With Open Data (that is, publicly accessible data) picking up momentum, there is a growing need for standardization of neuroimaging datasets so that they can readily be used across a wide community of neuroscientists. The brain imaging data structure (BIDS) initiative aims to standardize neuroimaging data structures in order to make them interoperable under the FAIR data principles. In this tutorial, the BIDS data structure is introduced as a practical means for achieving FAIR data. Here, a number of BIDS resources and repositories and simple BIDS specifications are also given to get you started (min. 27:27). Later, a hands-on session on how to create and validate a basic BIDS dataset is also introduced (min. 34:57). Also check out the TrainTrack session on BIDS of this year’s virtual meeting by Sam Nastase!
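To make this concrete, here is a minimal sketch of what a single-subject BIDS dataset looks like on disk, built with plain Python. The subject and task labels, the metadata values, and the use of a temporary directory are illustrative choices, not taken from the tutorial:

```python
from pathlib import Path
import json
import tempfile

# Hypothetical single-subject layout; file names follow the BIDS
# key-value naming scheme (sub-<label>_task-<label>_<suffix>.<ext>).
root = Path(tempfile.mkdtemp()) / "my_bids_dataset"
anat = root / "sub-01" / "anat"
func = root / "sub-01" / "func"
anat.mkdir(parents=True)
func.mkdir(parents=True)

# Two key top-level files: dataset description and participant list.
(root / "dataset_description.json").write_text(json.dumps(
    {"Name": "Example dataset", "BIDSVersion": "1.4.0"}, indent=2))
(root / "participants.tsv").write_text("participant_id\tage\nsub-01\t25\n")

# Imaging files sit alongside their JSON sidecars with the same stem.
(anat / "sub-01_T1w.nii.gz").touch()
(func / "sub-01_task-rest_bold.nii.gz").touch()
(func / "sub-01_task-rest_bold.json").write_text(json.dumps(
    {"RepetitionTime": 2.0, "TaskName": "rest"}, indent=2))

for p in sorted(root.rglob("*")):
    print(p.relative_to(root))
```

A real dataset would of course contain actual NIfTI images rather than empty placeholder files, and you would run the bids-validator over the tree before sharing it.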
Jeffrey going through the benefits of the brain imaging data structure (BIDS).
Once you have nicely organised your data, they are also easier for other people to use. To make neuroimaging more reproducible overall, something else that can be done is to ensure that data do not get lost and forgotten - in short, that our data are Findable, Accessible, Interoperable and Reusable (or FAIR; see the educational course on FAIR data from min. 1:52 by Maryann Martone and Jeffrey Grethe).
This way, your science will be more robust, transparent and verifiable.
The problem is that making research FAIR as an afterthought is really tough. Indeed, generating or curating good quality data that abides by FAIR principles requires some forethought (FAIR workshop min. 12:36). Not only do a lot of steps and expertise go into acquiring good quality data, but your data need to be in a format and in a place that makes those data easy to use for your present self, your future self and for someone who is not yourself!
Two platforms for sharing statistical maps from your study are NeuroVault and Neurosynth. In his video, Chris Gorgolewski goes through the advantages that uploading your map has for you, such as options for fancy visualisations of your maps (min. 4:37), cognitive decoding of your maps (min. 5:25), searching for similar maps in papers (min. 6:25), and gene decoding (min. 7:04).
How can I make sure that my analysis workflow can be reproduced by others?
If you want all aspects of your study to be documented and reproducible, then this of course also includes your analysis. The BIDS structure can help with setting up a reproducible workflow, but it is not sufficient. It also needs to be clear which processing steps have happened, which analyses were done, with which software and which parameters, etc. There are a lot of tools out there to help you and the Center for Reproducible Neuroimaging Computation initiative (ReproNim) has held an extensive course at the 2018 annual meeting about this (and a whole Webinar series on best practices for neuroimaging, if you are interested).
Starting with the “computational basis”, Yaroslav Halchenko gives an introduction to the Linux shell, including the importance of environment variables (from min. 12:50) to ensure you are running the right version of software, how to use the shell history (from min. 23:40) to check whether you indeed ran the right commands, and how to write shell scripts (min. 29:30). He also shows how NeuroDebian can be used to search for and download software (min. 41:21).
Most people have probably heard the name Git before. (Did you know the official definition is “stupid content tracker”?) Yaroslav explains the Git philosophy in 2 minutes (min. 58:01) and shows the most important commands (min. 52:50). While Git is useful to keep track of your scripts, get and provide code, a tool called DataLad (min. 1:03:17) can be used to do similar stuff with datasets. A hands-on session on this is provided in the Workflows for neuroimaging session from min. 47:20, and how this can be combined with specific statistical analyses is explained from min. 1:52:08.
Other tools to help you make sure you use consistent software within a study are containers and virtual machines. Dorota Jarecka gives a good overview of why these are very useful in research (from min. 7:39) and even guides you through some exercises (from min. 15:45). Jean-Baptiste Poline gives a short intro to Jupyter notebooks to demonstrate your code to others (from min. 2:43:51).
This year’s OHBM Hackathon also has a session on Git by Steffen & Saskia Bollmann, on good coding practices with Matlab by Agah Karakuzu, on DataLad by Adina Wagner and on Containers by Tom Shaw and Steffen Bollmann.
You said that replicability also refers to other people being able to get the same outcome as my study, but if they test different participants, this is out of my control, right?
This is a good point: it is somewhat out of your control, but there are some ways in which you can help. First, being very transparent about what you did to your data will allow others to apply methods as similar as possible to yours. As Celia Greenwood explains (from min. 2:24:01), the final statistical measure that one tries to replicate involves a lot more than just the statistical test; it includes all the steps before it - the processing, exclusion of outliers, etc. - which sometimes makes it hard to even work out what the null hypothesis is. She states that reproducibility in the statistical sense is about the final inference you make, so it is tied to the p-value. And this of course depends on your sample size and, to some extent, chance. In a demonstration (from min. 2:34:24) she shows that if you draw different samples from the same population, there is huge variability in the p-values and effect sizes that you get across samples (even with sample sizes of N > 100), which is purely a result of random sampling.
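A demonstration like Celia's is easy to reproduce in a few lines. The sketch below is a hypothetical re-creation in plain Python (the effect size of d = 0.2, the group size of 120, and the number of repetitions are my own illustrative choices, not hers): it draws repeated samples from the same population, with a small true effect, and shows how widely p-values scatter through random sampling alone.

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in group means.
    Uses a z-approximation to the t-test, reasonable for n > ~100 per group."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2.0 * (1.0 - normal_cdf(abs(z)))

random.seed(0)
# Population truth: a small real effect (Cohen's d = 0.2).
pvals = []
for _ in range(200):
    group_a = [random.gauss(0.2, 1.0) for _ in range(120)]
    group_b = [random.gauss(0.0, 1.0) for _ in range(120)]
    pvals.append(two_sample_p(group_a, group_b))

print(f"p-values range from {min(pvals):.4f} to {max(pvals):.4f}")
print(f"fraction 'significant' at 0.05: {sum(p < 0.05 for p in pvals) / len(pvals):.2f}")
```

Every iteration samples from the very same population, yet the p-values span nearly the whole range from highly significant to clearly non-significant - exactly the sampling variability the demonstration warns about.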
Celia illustrates the effect of random sampling on estimated effect sizes.
Is this why “most published research findings are false”?
Are you insisting on going back to things we have already discussed?! I suppose it is fair to say that there is more to it. A measure called “predictive value” is the probability of the alternative hypothesis being true given a significant test result. In his video, Jean-Baptiste (from min. 2:47:14) uses a Jupyter notebook to explain the Bayesian math behind this value and shows that it depends on the power of your study as well as on the prior odds of the alternative hypothesis being true relative to the null hypothesis. So the lower the power of your study, the less likely it is that the alternative hypothesis (usually what you are interested in) is true, even if you have a significant result. And most neuroscience studies do not have much power, as shown by Katherine Button.
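The arithmetic behind this predictive value is short enough to sketch directly. The snippet below follows the standard formula popularised by Ioannidis, PPV = (power × R) / (power × R + α), where R is the prior odds of the alternative hypothesis being true; the example numbers are illustrative, not taken from the video.

```python
def positive_predictive_value(power, alpha, prior_odds):
    """P(alternative hypothesis true | significant result).

    power      : probability of detecting a true effect (1 - beta)
    alpha      : false-positive rate of the test
    prior_odds : P(H1) / P(H0) before the study is run
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# Well-powered study testing a plausible hypothesis:
print(positive_predictive_value(power=0.8, alpha=0.05, prior_odds=0.5))   # ~0.89
# Underpowered study testing a long-shot hypothesis:
print(positive_predictive_value(power=0.2, alpha=0.05, prior_odds=0.1))   # ~0.29
```

The second case makes the point of the video: with low power and an unlikely hypothesis, a “significant” result is more likely to be false than true.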
Well, you may now say: how do I know what my power will be? And is there even a point in doing my experiment, or will it just produce another - false - research finding!?
Good question. Doing power analysis for neuroimaging studies is not straightforward, but luckily some packages, such as fmripower and neuropower, have been developed to at least get an educated guess of what your power might be. As Jeanette Mumford explains in her video (from min. 4:53), doing a power analysis has many benefits. She also gives some tips on how to assess other people’s power analyses (from min. 7:08) and what to consider when estimating effect sizes based on the literature (from min. 9:18). Jeanette also explains why power analysis becomes more difficult as the statistical model becomes more complex (from min. 11:59).
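Dedicated packages like fmripower and neuropower model the full fMRI analysis, so the sketch below is no substitute for them. But for intuition about what a power calculation does, here is a minimal version for a simple two-group comparison, using a normal approximation to the t-test in plain Python:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_ppf(q):
    """Inverse standard normal CDF by bisection (ample precision here)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    for standardized effect size d (Cohen's d), using a normal
    approximation to the t-test (fine for moderate-to-large n)."""
    z_crit = normal_ppf(1.0 - alpha / 2.0)
    ncp = d * math.sqrt(n_per_group / 2.0)  # expected value of the test statistic
    return (1.0 - normal_cdf(z_crit - ncp)) + normal_cdf(-z_crit - ncp)

# The classic benchmark: d = 0.5 needs roughly 64 participants per group
# for 80% power at alpha = .05.
print(f"power ≈ {power_two_sample(0.5, 64):.3f}")
```

Plugging in the small samples typical of many neuroimaging studies (say, 20 per group for a medium effect) immediately shows why underpowered findings are so common.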
Jeanette talking about the power of different statistical models.
What else can I do to ensure best practices in neuroimaging?
Thorough reporting of what you have been doing in your data acquisition and analysis is always a good idea. Guidelines have been created by the Committee on Best Practices in Data Analysis and Sharing (COBIDAS; also see Tonya White’s video for the idea behind COBIDAS) for MRI and MEEG.
Various tools are available for testing your code. Also, if you publish your code on sites such as GitHub, other researchers can try it out and help develop it further.
Preregistration and registered reports are becoming more and more popular for neuroimaging, meaning that more and more journals accept and encourage them. In her video, Pia Rotshtein explains the philosophy behind and principles of registered reports (from min. 11:06) and shows some examples (from min. 22:55).
Tonya telling us about the Committee on Best Practices in Data Analysis and Sharing.
If I get into all these things, will I still have time to do research?
That is why there are 36 hours in every day! Seriously though, this is all part of doing research! Often, however, efforts towards good practices in neuroimaging are not publishable by themselves and have not been well recognised. There are good reasons and incentives to follow Open Science practices as an individual researcher (for examples see this summary), and with the new OHBM initiative Aperture (see video and website), a new venue for unconventional research objects (such as software and documentation) is being created.
If this all still seems overwhelming and time-consuming, don’t worry. Most of the tools presented here have been developed to save you time and resources in the long run, while making your research more sustainable. Think about the time one would spend re-acquiring a dataset because of a previously unnoticed problem with the scanner, trying to make sense of unintuitively organised data, or hunting for a mistake in long, badly structured code. Put that way, these preventative measures do not seem like such a big investment anymore.
If you’re hooked, stay tuned. The numerous emerging Open Science initiatives keep coming up with new ideas and tools for making research as a whole more reproducible and trustworthy, and for helping us brain mappers conduct neuroimaging research in more robust and applicable ways.
Guest post by Hiromasa Takemura
International diversity is essential for organizations like OHBM. Through my experiences attending recent OHBM Annual Meetings, I have found myself asking why so few researchers from Japan have visible roles. To find out whether this was indeed the case, and possibly why, I worked with the OHBM Executive Staff, Diversity & Inclusivity Committee, and Communications Committee to analyse membership and attendance data from the annual meetings. By collecting and analysing this demographic data we can gain insight into why some countries (in this case Japan given my background, but the findings may extend to others) may be underrepresented at OHBM.
Japan is the 11th most populous country in the world, with an estimated population of 126 million (m) people in 2020. For comparison, Mexico has the most similar population with 128m people and Germany, Europe’s most populous country, has 83m. Japan has, over the years, substantially contributed to the OHBM community: for instance, the 2002 Annual Meeting was held at Sendai, Japan and Dr. Kang Cheng, a pioneer of high-resolution fMRI studies at a founding lab for RIKEN's Brain Science Institute, is heavily involved in organization of OHBM meetings.
To get a picture of recent involvement of researchers from Japan, we examined data summarizing attendance and presentations at the OHBM Annual Meeting between 2017-2019 (Table 1). We defined Japanese members as those affiliated with Japanese institutions. Using this definition we found that Japanese members comprised 3.6%, 5.4% and 3.9% of all attendees for 2017, 2018, and 2019 respectively, with the fluctuation reflecting the location of the annual meeting (OHBM 2018 was held in Singapore). We found a lower proportion of abstracts submitted by Japanese members: 2.6%, 3.6%, and 3.6% of the total number of abstracts for each of these years.
We then examined the proportion of Japanese members giving oral presentations. These numbers included both regular oral sessions and symposia. The proportion was 1.7%, 3.0%, and 0.9% for 2017, 2018, and 2019 respectively. The low number at the 2019 Annual Meeting was striking, given the proportion of attendees and abstract submissions.
To determine potential contributors to these statistics, we examined the number of Japanese members who selected “talk preferred” at abstract submission, but were not accepted for talk presentations. Surprisingly, these numbers were very small: 3, 1, and 2 for 2017, 2018, and 2019, respectively. A major reason for underrepresentation of Japanese members at the OHBM meeting appeared to be a reluctance to present data in the form of a talk. It is true that certain types of presentations work better as posters than talks, but we wanted to find out why so few researchers from Japan opted for oral presentations. We wanted to find out why the community would miss opportunities to highlight and benefit from their work.
Why do Japanese members hesitate to give talks at the OHBM annual meeting?
To find out, we surveyed 86 Japanese scientists working in human brain mapping (Figure 1). First, we asked whether they would choose oral or poster presentations at domestic conferences: 58% answered “oral”. Then we asked whether they prefer oral or poster presentations at international conferences. In this case, only 35% answered “oral”. The trend to favor posters in international conferences was common across both junior and senior scientists.
Next, we asked why they would opt for a poster presentation (Figure 2). For domestic conferences, researchers chose poster presentations when the topics were specialized, or the data wasn’t ready to present to a broad audience. For international conferences, 32.6% of respondents were dissuaded due to the challenge of presenting in English. Indeed, for Japanese researchers the most common deterrent for oral presentations at international conferences like OHBM was the language barrier.
Figure 2. Survey on the reasons for choosing a poster presentation at a domestic (left) and an international conference (right). Multiple choices were allowed for this question. While many reasons are common to domestic and international conferences, respondents raised the difficulty of presenting in English as a reason to prefer a poster presentation at international conferences.
The challenge of presenting in English is not unique to Japanese members of OHBM. Instead, this case study serves to demonstrate the extent to which language barriers can limit scientific communication. It is, therefore, worth considering ways to organize an international conference that help enable non-native English speakers.
There are several actions we can take as a community. First, we could promote and encourage junior Japanese members (and other non-native English speakers) to apply for oral presentations, symposia proposals and educational courses. My own experience speaking at the 2019 Annual Meeting greatly increased my enthusiasm and experience of the conference (see photo below).
Second, as an international community, we can promote a friendly, open-minded environment for scientific presentations and debates across members, irrespective of their English proficiency. I appreciate that OHBM has made a clear Code of Conduct prohibiting harassment based on the accent of speakers. Since I believe that OHBM members are mutually respectful, I hope that non-native English speakers feel able to discuss their scientific work and ask questions during annual meetings.
Third, we can devise conference formats that reduce language barriers. OHBM 2020 was a virtual event. This allowed members to communicate using live chat features that will be much less affected by spoken language proficiency. OHBM 2021 will now also be virtual, so we have time to consider further digital features to aid communication. Looking forward to the return of physical conferences, we can use features like mobile apps to ask questions, as we did at OHBM 2019. There may be no single solution, but we can benefit from technologies tested in virtual formats in future physical conferences to encourage broader active participation in OHBM meetings. We could ensure that new scientific advances are communicated widely, and not hindered by the lingua franca.
Finally, it is worth restating that these issues are likely not specific to Japanese members. We hope that by shining a light on the challenges faced by my local community, we can increase accessibility for OHBM members from a variety of non-English speaking countries around the world.
Addendum (from the Diversity & Inclusivity Committee)
To examine the breadth of underrepresentation, the Diversity & Inclusivity Committee examined the geographical distribution of speakers at OHBM 2020. We calculated the number of speakers (at regular oral sessions and symposia) as a proportion of current OHBM members (see figure below).
Our findings paint a complex picture: most Asian countries are certainly underrepresented but researchers from central European countries, including non-native English speakers, are well represented. However, the Romance or West Germanic languages of these latter countries share typology with English, and so are considered by the Foreign Service Institute to be easier for an English speaker to learn. In contrast, Japanese, Arabic, Cantonese and Mandarin are considered to be ‘exceptionally difficult’ for English speakers to learn, and vice versa.
Other factors likely influence whether researchers submit abstracts as oral presentations. For example, Spain and Mexico, despite their Romance language, had relatively few speakers. Historical ties to OHBM from individual labs and other economic, local, and macro-cultural factors are likely at play. By considering what causes barriers - language or otherwise - and exploring how we can break them down, we can promote a culture of greater diversity and inclusivity at OHBM.
By Elizabeth DuPre
The OHBM 2020 Annual Meeting brought many firsts. The move to an all-online event reflected the severity of the COVID-19 pandemic, with work, travel and schooling routines already in disarray for researchers across the globe. As many of us had been out-of-office or away from our university campuses for months before the Annual Meeting, the chance to connect with the broader human brain mapping community became especially important.
Traditionally, the Annual Meeting offers a chance to interact formally and informally with other researchers, making both scientific and interpersonal connections. Replicating these spontaneous conversations was perhaps the biggest challenge for this year’s meeting. First, there was the issue of timing. With OHBM members participating from their home countries, one member’s afternoon in North America would be the middle of the night for another member in Asia. The meeting was therefore set on a rotating schedule, with day-blocks favoring Asia-Pacific, European and African, or North and South American working hours.
Once the timing was set, the second hurdle was developing a virtual space for interactions. Large online platforms—like those necessary to run a conference for thousands of members—often lean towards structured, lecture-style environments rather than organic interactions and impromptu discussions. From the available infrastructure options, OHBM Council decided in April to adopt the 6connex platform. Council’s intention was to allow time for all presenters, committees, and special interest group (SIG) members to adapt their content; however, the time pressures of the COVID-19 pandemic meant that, come June, many were still unclear how this new platform would work in practice.
Expectations were thus high for the 6connex platform—possibly higher than could be reasonably met. The platform did well in delivering pre-recorded content, such as the excellent selection of keynote lectures, symposia and oral sessions, but the space for spontaneous interaction was woefully lacking. As one example, many members noted the challenges of using the chat feature, such as when 1000+ attendees simultaneously participated in a single-threaded chat room. This lack of functionality created particular frustration in poster presentations and interactions, where presenters and attendees were unclear how to contact one another or how to provide on-the-spot poster walk-throughs.
OHBM members enjoying one of the poster sessions on the GallOP platform.
Although the official platform did not provide an outlet for interaction, it did create a galvanizing effect for the community to create such a space. Attendees, such as Yaroslav Halchenko, Soichi Hayashi, and many others, came together to openly develop the OHBM 2020 GallOP (Gallery of Open Presentations) platform. GallOP provided an easy interface to search for poster authors, titles, or keywords, creating more chances for researchers to find relevant work. But perhaps most importantly, it created individual video conferencing rooms for each poster, allowing attendees and presenters to directly interact during presentation time slots or to leave one another notes outside of official meeting times. Although GallOP was only created after the first poster presentation time, the community response was enthusiastic, and it was quickly accepted and shared by the OHBM leadership and incorporated into the official platform.
Interactions in the Open Science Room (OSR) Gather.Town, a virtual space where OHBM members could gather throughout the conference.
This spirit of creativity and connection swept through OHBM 2020 and was perhaps the defining feature of the conference. Other important community-driven initiatives that arose included the BrainWeb poster viewer and the first-ever virtual OHBM Club Night, both of which created online spaces that mimicked many of the social features of an in-person meeting, albeit with fewer spilled drinks. All of these community-driven initiatives were linked together by emergent discussions in the Open Science Room (OSR); this central hub seemed to catalyse interaction across the conference. The OSR hosted emergent discussions on everything from software containerization, to correcting for confounding, to the structure of the virtual conference itself. In a year in which our idea of community has been redefined by political, social, and cultural reckonings, this space to converse with other brain mappers about the important issues of our science—both in terms of research topics and lived experience—proved a highlight of the conference for many attendees.
Alongside these experiences, the official OHBM program also provided attendees the chance to consider the direction of our field. As always, the OHBM Talairach, Glass Brain awardee, keynote, and symposium speakers provided an inspiring vision of the future of our society and the work we can do together. The OHBM 2020 Hall of Fame celebrated individuals who uphold many of the values important to the membership (e.g. education, replication, open science, mentoring), as well as this year’s award-winning abstracts. Uniquely, the community-driven efforts of this year’s event also provided a glimpse into just how important grassroots efforts are to the structure and functioning of our academic society. As a result of this work, the SIG chairs were invited to sit in on Council meetings to increase interaction between official and grassroots initiatives. This is an exciting next chapter for OHBM leadership, and it suggests that we will continue to see more innovation in the years to come.
Although the 2020 Annual Meeting was our first all-virtual event, it is clear that its lessons will shape the structure of OHBM moving forward. We now know that the OHBM 2021 annual meeting will also happen virtually; this decision was made in advance such that all community members have more time to prepare. These preparations include creating a dedicated ‘Technology Task Force’ to translate the lessons learned in the 2020 meeting into next year’s experience. Altogether, it’s clear from the 2020 meeting that the OHBM community is vibrant, responsive and collaborative. We look forward to seeing how these attributes can be further advanced in coming years, starting with the 2021 Annual Meeting!
By Tzipi Horowitz & Nils Muhlert
Institutions throughout the world have had to adapt to the Covid-19 pandemic. Many scanning centres shut their doors during lockdown, and have had to reopen gradually, and carefully. We surveyed several labs from around the world - to find out the challenges they’ve experienced and, in a few cases, the opportunities afforded.
UK - Matt Wall (Head of MRI applications, Invicro, Hammersmith Hospital, Imperial College London)
Challenges: Everything shut down rapidly at the start of lockdown. In March, two big commercial scanning projects had to stop immediately. One had been running for some time, the other had just started. We had a lot of clinical people working with us - some very good medics. They spent a lot of time developing risk assessments and procedures. So we ended up restarting in late June. I tweeted about it at the time, and was contacted by people in other universities, asking how we managed it - so we shared our findings from the risk assessment process.
Our actions were consistent with the government guidelines: 2m social distancing and everyone wearing masks where possible, full PPE (masks, visors, gloves and disposable apron) where closer contact was necessary, particularly when getting people in and out of the scanners. As part of our risk checklist we also asked all researchers and participants to confirm that neither they nor anyone in their household had experienced flu-like symptoms and we used a heatgun to check each individual’s temperature as they entered the building. We ended up being ‘open for business’ before many parts of the university were.
We’re not back to capacity yet - particularly for PET. There we develop and use unusual PET tracers - we have a cyclotron onsite to make carbon-11 tracers, which have a short half-life. For this you need a large team: people involved in the chemistry, QA, and PET technicians. Many people were on the furlough scheme, so it was slow to start up again, and is only now back up and running. Overall, we’re probably at about 60% of what we used to do.
Experience: There was a sense of anxiety to start with, but that resolved fairly quickly. Our collaborators, who are endocrinologists at Imperial College London, were working on Covid-positive wards. They assured us that we were using similar procedures to those used when working with people known to be Covid-positive. That gave us a lot of confidence that we were doing the right thing. I was actually glad to get back to scanning. My research assistant, who runs the scanning, now goes in most days, and I’m in probably 2 days a week. My dog at home, who people may be familiar with on Twitter, also calmed down during lockdown: no more chewed slippers.
Opportunities: We found in our studies that it was suddenly a lot easier to recruit people. Once lockdown was easing, there were still many people off work. For one study we needed to recruit a particular population and since people were essentially bored they were much more willing to participate. Also the scanner was almost completely free. We steamed through recruitment for that study in record time!
México - Eduardo Garza-Villarreal (Lab Leader of the Computational and Translational Neuropsychiatry Lab, National MRI Laboratory, Institute of Neurobiology, Universidad Nacional Autónoma de México, Querétaro)
Challenges: There cannot be more than 2 people in the lab at the same time. Hence students are struggling to finish experiments. I mainly work with animals at the moment, and again, having very few staff is problematic. We were importing a rat strain from the US and in the end they stopped exporting animals and we couldn't finish that project. One of my students has to perform animal surgeries in another lab; she couldn't access the lab for one month. Another problem is student training, which is now impossible because you cannot be directly in contact with the students. Effectively, I'll see a work setback of 1 year or so. With TMS we do have human subjects but we are closed until probably January 2021, which is a big setback.
Adaptations: We are working remotely which has helped, however we would not be able to change population or projects now. We have also started to use public datasets for new research.
Because I have children my time is now more divided than ever with less time to think about research than before.
Canada - Rick Hoge (MRI Program Director, McConnell Brain Imaging Centre, Montreal Neurological Institute) & Julien Cohen-Adad (Functional Neuroimaging Unit, Polytechnique Montreal)
Challenges: All scanning stopped in March, except for clinical trials where imaging was a critical endpoint to evaluate treatment efficacy. Phantom and animal imaging activities gradually resumed in May. Human imaging restarted gradually in August. Many groups are eager to resume their studies and start new studies this fall.
Adaptations: The core MRI staff and users have focused on remote work, including data analysis, hardware development, literature review, attending conferences, etc. Zoom and other platforms are used to maintain group interaction. For on-site experimental work, protocols previously used for infection control have been adapted and are now applied in animal neuroimaging research.
Gradually, we have adapted to the new normal, although scanning volumes are considerably lower than they were prior to Feb 2020. The first groups to return to scanning were mainly those doing methods development and neuroimaging in animal models (as noted above, clinical trials work continued throughout the different phases of the pandemic).
New ways of working: The pandemic forced us to become more efficient at working remotely, using videoconferencing tools, telephone, and mobile messaging. We were already using the Slack collaboration tool, but this became even more valuable once the pandemic began.
Remote monitoring of MRI cryogenics was a particular concern during the complete shutdown, and we implemented several layers of security to make sure we could respond immediately to events like a quench or chiller failure.
Germany - Sofie Valk (Research Group Leader, Cognitive Neurogenetics, Max Planck Institute, Leipzig)
Challenges: My lab officially started March 1st. So, I guess two major changes occurred at the same time, namely that I now have to run and support a lab of my own, and get used to working under COVID-19 conditions. Recruiting has been different. For me, given that I am interested in computational neuroanatomy, genes, and evolution, it is not strictly essential to acquire my own data at the moment to answer the questions that we have, and I also did not plan the timeline of my lab-start this way. My current research takes advantage of open data, so my lab and I use Human Connectome Project data, the enhanced NKI dataset, the Genomics Superstruct Project (GSP), ABIDE, as well as data from the Primate Neuroimaging Data-Exchange and UK Biobank. Now I am reading up on how start-ups manage remote and home-office working and how to build a team that can work that way.
Adaptations: Remote work was a bit challenging at first, as I have two kids under 5 and daycare was closed. My partner and I had a shift system of one person 7am-1pm and the other 1pm-7pm and then work in the evening again. This kind of worked, but also resulted in little time for me, for sleep, or for my relationship. Often, I worked on my phone during my times with the kids, which I felt bad about. You can hardly call it the best of both worlds. For setting up the lab, the Max Planck Society supported funding for a researcher while this person was still abroad, which was very helpful. As the lab is oriented towards using open data, this meant we could just ‘start’. In the long run, we do want to acquire a good dataset for my lab to test more specific hypotheses, but there is no rush at the moment.
I try to be flexible and make the best out of the situation, and support the people that want to work with me. At the same time, home office and lab Slack make it sometimes feel work is always ongoing, whether I am in the office, playing with the kids, changing diapers, or in the sandbox. It is a novel challenge to also have ‘off’ time and to learn that, even if communication is always possible, most things can wait if needed. I am very fortunate with my team, because everybody is independent and mindful of each other.
Now in Germany, the situation has relaxed. Daycare is open, and most days I work at the institute, even though meetings are online. What has changed in me is that I now try to think of positive ways in which my lab can adapt, such as better collaboration, open science, database management.
Opportunities: I hope that the remote set-up can lead to sustainable changes. For example, collaborations and knowledge transfer with other labs not in the same building or hallway are easier now with Zoom talks. Also, it is nice to be able to give a presentation far away, without going there. Although flying to and from Montreal and Cologne with an infant (I was still breastfeeding at the time, and this seemed the best solution) made me feel like an international DJ last year, it was of course very tiring and bad for the environment. Now, I could just Zoom in from the kitchen table. Also, the lack of sleep and some challenges with work-life balance in the home-office has made me reconsider how I plan my time, and be more disciplined to take some time off, go for a run, and prioritize sleep a bit more. In the end it is a marathon and not a sprint.
USA - Fumiko Hoeft (Prof of Psychological Sciences, Mathematics, Neuroscience and Psychiatry, Brain Center Director, University of Connecticut, Storrs Connecticut)
Challenges: Since mid-March, in-person research operations not related to COVID have been shut down in the US. Only COVID-related research and remote research were allowed (as at most other US institutions). Staff and students were asked to work remotely. Some behavioral-only research moved to remote platforms.
We conducted a survey beginning in March to get a better understanding of the effect of COVID on our US colleagues - we found that only 37% of imaging research centres remained open for a reduced ‘business as usual’.
Adaptations: Where possible, we moved to remote behavioral data collection but paused all imaging projects. Research, including human neuroimaging research, restarted in late May.
Behavioral testing remains remote whenever possible. For instance, we received NSF/Tremaine Foundation funding to start COVID-related research. This is a project in K-Gr2 students examining the effect of adaptive computerized reading instruction on children’s learning at home.
Israel - Yaniv Assaf (Prof of Neuroscience, Head of the Alfredo Federico Strauss Center, Department of Neurobiology, Faculty of Life Sciences and Sagol School of Neuroscience, Tel Aviv University)
Challenges: Israel was in a lockdown during March-April 2020, and studies in humans were shut down. At the end of lockdown (May), we were allowed to have only 5 individuals in the lab. As we have 12 team members in the group with four paid employees, only one student was allowed to attend the lab each day. Teaching remotely using Zoom or similar software has some disadvantages, including the lack of interaction with the students.
During lockdown, the imaging center was closed. Scanning eventually restarted in May with several restrictions dictated by the Ministry of Health: we had to sterilize the scanner and the suite and build in a gap of 30 minutes between each scan.
Adaptations: We shifted to one-on-one and lab meetings using remote meeting applications. On a personal note, I had more quiet time to myself, which allowed me to complete some of my ongoing work.
Opportunities: I started working with team-management software that assists with planning my team’s assignments. From a research perspective, we had an incredible opportunity to scan individuals who had been scanned prior to the pandemic (before February 2020) and again after the lockdown was lifted. Individuals were willing (and probably had the time and availability!) to come in and get scanned. We asked whether there were neurobiological correlates of changes in behavior related to social avoidance and interaction following COVID. Strikingly, we found that the post-lockdown group showed an increase in amygdala volume. The pandemic thus provided an opportunity to test the brain structural correlates of the behavioral stress that everyone was experiencing.
In summary, labs across the world have been forced to adapt, finding new ways of working remotely and safe ways of scanning during the COVID-19 pandemic. There is clear regional variation in how imaging labs in different countries have responded, reflecting local infection rates and government policies. Many have increased their use of open datasets, and there are certainly improved opportunities to present virtually at a range of locations. We’re clearly not out of the woods yet, as Europe and North America are currently seeing a resurgence of COVID-19 infections. But this time, we are at least somewhat prepared for whatever 2021 brings.
By Nabin Koirala
In advance of the scheduled launch of the upcoming journal “Aperture” from the Organization for Human Brain Mapping (OHBM), we wanted to get up close with the first Editor-in-Chief of the journal - Tonya White. Tonya is currently an Associate Professor in the Department of Radiology and Nuclear Medicine and the Department of Child and Adolescent Psychiatry at the Erasmus Medical Center in Rotterdam, Netherlands. We discussed her personal journey in science and her vision for the journal.
Nabin Koirala (NK): Thank you so much for making time for this interview. To start, could you please introduce yourself to general readers who may not be scientists?
Tonya White (TW): That's always an interesting question because I have a number of different hats. I could say that I'm a developmental neuroscientist, a child and adolescent psychiatrist, a pediatrician or an electrical engineer. But what I've been mostly doing currently is what's called pediatric population neuroscience, which is actually the intersection between epidemiology and developmental neuroscience. The neuroimaging program I came to set up in Rotterdam is a large population-based study of child development. We’re currently collecting and evaluating more than 8000 MRI scans from children at three time points. Through the so called “Generation R Study” nearly 10,000 mothers who were pregnant between 2002 and 2006 were approached to participate in the study and the imaging is nested into a multifaceted epidemiologic study looking at many different aspects of child development.
NK: Wow, that's a large sample. So, what do you actually aim to find out from this huge cohort of 8000 scans?
TW: One of the things that's exciting about population imaging studies is that they have a direct translation into public health. So you can look at things such as exposures during pregnancies in mothers who smoke or don't take enough folate or who use cannabis and what happens in the brain because of these different exposures. Another aspect is that most studies using clinical populations look at the extremes within a distribution. Say, those who have a clinical diagnosis versus an overly healthy control group. Whereas, with population-based studies, you can also study individuals with subclinical symptoms i.e. who don't meet an actual clinical diagnosis but have some symptoms. Then you can ask the question, does the underlying neurobiology follow a continuum parallel with the clinical symptoms? If you have more symptoms, do you have more differences in the brain? Or are these differences related to a dose effect?
Lately, I've been really quite interested in the role of stochastic processes in development. We talk a lot about nature and nurture, but we don't often talk about chance. If you go back to the early 20th century, you see a whole dispute between Niels Bohr and Albert Einstein on the role of deterministic processes. So, I think we should also start a conversation of stochastic processes within the field of neuroscience.
NK: Thinking of Nature and Nurture, how much genetic influence do you think there is in brain development?
TW: I believe it's largely driven by genes. I think genes have a huge effect in terms of brain development and the other factors like the environment and stochastic processes have more subtle but real and measurable effects. As an example, in several of our studies, what we've looked into is the polygenic risk for different psychiatric disorders; we’ve shown several genes contributing to psychopathology.
NK: This is fascinating, but I have to admit I am even more curious about your Journey in Science. How did you end up doing neuroscience from being an Electrical Engineer?
TW: Oh, that’s a long but I guess a good story. Anyway, I’ll try to shorten it as much as I can. My undergraduate degree was in electrical engineering with a focus in biomedical engineering. I started engineering because I was good at math and my father, who was an engineer, encouraged my older brother, myself, and my younger sister to go into engineering. During my bachelors, about half the people in my engineering class were looking into jobs related to defense. But I was more interested in medicine and the human side of things. So, during my graduate years, I started looking for options, and my interest at first was in bioengineering and working with prosthetic devices. This might also have to do with the fact that I really liked my summer job working on the Utah artificial arm during my undergraduate years. So, while continuing my graduate program in engineering at the University of Illinois, I began medical school. By my third year of medical school, I would have said that I was planning to go into pediatrics. However, during my fourth year of medical school, I had a chance to spend a month at Brown University, rotating through child psychiatry at Bradley Hospital. After that, I became very excited about child and adolescent psychiatry. I went back to my Dean and requested that he re-write his letter for me to apply to the so-called triple board program, which involved pediatrics, psychiatry and child psychiatry. It’s a natural step to become interested in the brain through psychiatry.
Because of my clinical experience during my residency, I became interested in autism. I went to the University of Iowa for a neuroimaging postdoctoral research fellowship with Nancy Andreasen on schizophrenia research with the goal of also being able to work with Joe Piven on neuroimaging in autism spectrum disorders. But shortly after I moved, Joe moved to North Carolina, so I then focused on schizophrenia and moved to the University of Minnesota to set up a youth psychosis program. I spent about four days a week at the Center for Magnetic Resonance Research at the University of Minnesota. Then in 2009 I had the unique opportunity to set up the neuroimaging program in Rotterdam.
NK: Wow! that’s quite a story. So, having three different degrees, how much do you think this vast intersection of knowledge has helped you in your academic career?
TW: I can say engineering has been extremely helpful, particularly for the intersection between neuroimaging and medicine. To give an example, there are lots of new methods that are proposed regularly. Most of them are cool, and sometimes neuroscientists apply them without actually thinking about the assumptions from a neuroscience perspective. So, the engineering degree has given me the ability to understand what's involved in the assumptions of these methods, to have a different overarching view to better apply them in addressing my research questions. Even though I do think that having in-depth knowledge in multiple domains is very helpful, I also like the kind of hybrid courses currently springing up in universities, like medical technology, computational neuroscience etc. I believe these courses create bridges between medicine and different aspects of technology.
NK: Maybe one last question regarding your academic journey so far. You were a faculty member in Minnesota for around eight years. What made you switch to this position in another continent?
TW: I would say a combination of things. A key factor was the Generation R Study, which I thought was fascinating. Being able to infuse neuroimaging into that study seemed like a great opportunity. Among other factors, there was a lack of NIH funding under George W Bush. NIMH funding rates during that period went down to as low as 3-5%. I also had a great experience living in Europe during five years of my childhood with my parents in Germany. So I think it was more the opportunity in Rotterdam that had a pull, as I did very much enjoy being at the University of Minnesota and being close to family.
NK: So let’s focus on Aperture, could you tell us a bit more about its formation: the concept behind it and what this new journal brings in the sea of others? What’s different about it?
TW: I was the secretary on the OHBM Council when it was first introduced as a possibility by, I believe, Martin Lindquist. He said “wouldn't it be nice if we had a mechanism to publish these educational articles (the COBIDAS document) that are good for the OHBM community and the wider neuroimaging scientific community?” JB Poline then picked up on that and, together with Peter Bandettini and a group that now form the Aperture Oversight Committee, became the driving force behind Aperture.
As to what sets Aperture apart, there's a couple of different aspects. One is the promotion of open science. The journal itself will be open, but it’s not just the research articles that could be open access but also what we call research objects like scripts, data sets, educational materials etc. Even educational material that you would normally see at one of the educational courses in OHBM could be embedded in something like Jupyter notebooks etc. Second is the low publication cost. Given the very high ‘open-access’ publication costs in many journals, these costs can be prohibitive for researchers in low- and middle-income countries. Hence, it is getting more and more difficult to publish research from those countries. I think we're all together in this whole research mission to understand the brain in the hope that people's lives will be improved, so an initiative in that direction is definitely one of the primary goals of Aperture.
NK: Great. Coming from one of those low-income countries, I can totally relate to what you are saying. Based on an OHBM survey on Aperture, a large proportion of participants said that they would be most interested in publishing code or tutorials. How do you plan to review and manage that, given the updates needed for those articles?
TW: Science is dynamic, it will change. So, we are more focused on the initial review process rather than the eventual updates. Major updates could be submitted as updated research objects to Aperture and given a new DOI. We've discussed several ideas on how to review something like a code or a script. Even though there is not yet a concrete plan, one of the things I've been thinking is that perhaps it would be good to have teams who would get together to check the scripts and run the code and see if it does what it says it can do and what issues may arise while testing the code. We will start with some test cases to try some of these ideas from our editorial team and see how it goes. But the goal is eventually these types of research objects will be published and receive a DOI.
NK: That sounds fantastic. Among one of the active debates that's going on in science these days is about diversity and inclusion. Do you have some plans to enhance that in Aperture?
TW: Definitely. Step one, I will be looking at diversity within our editorial board, making sure that we have a diverse group covering different domains. And then it's important to consider diversity not only in terms of gender, race, ethnicity, sexual orientation, gender identity, disability, language, etc. but also the diversity in methodologies and types of approaches to science. Diversity is perhaps one of our greatest gifts that we currently have in academics, hence I would definitely try to promote that from every aspect possible.
NK: As an editor-in-chief have you envisioned a certain direction for the Journal in the next five years or so?
TW: One of the advantages of being under the umbrella of OHBM is that we can pursue only those research objects of high quality. It might not be straightforward at the beginning, given we don’t yet have an impact factor or indexing in PubMed. We're working together to provide a venue for people to submit types of research objects that may not fit in other journals. The general goal is to establish Aperture as a quality journal that is not for profit and becomes a platform for diverse articles relevant to the field of neuroimaging.
NK: Have you thought about a particular model of review process? Would the reviewers be open after acceptance like in Frontiers? Would the process be double-blinded?
TW: We've discussed this and there are many advantages and disadvantages of both processes. For example, someone junior in the field who is reviewing a research object from someone more senior in the field may not feel comfortable publishing their name. So, for now the idea is to make it optional for the reviewer, having the ability to opt out of being open. Regarding the double-blinded process, we have not yet decided upon that.
NK: Before we conclude, I would like to ask some personal questions which I am pretty sure are interesting for many of us. Let’s start with your hobbies?
TW: Actually, this afternoon I am planning to go climbing at a climbing hall. But apart from that probably the thing I most like to do (in the winter) is to ski. I'm a member of a cross country ski group. Here we go roller skiing in the Netherlands. I also enjoy biking and most things outdoors like canoeing, hiking etc., which I do a lot with my family. I’d rather listen to lots of audio books than read them. Because I read so much for my work, sitting and reading books feels more like work than free time, so I just listen to audiobooks while doing other things. Actually, listening to books on tape in Dutch has really helped to improve my Dutch language skills.
NK: Are you already a fluent speaker?
TW: Much of my interaction with my Dutch colleagues is in Dutch, but I am nowhere near perfect. Still a lot to learn.
NK: Having a chance of working both in the US and Europe, do you feel that there are some differences in terms of research culture across the continents?
TW: Most of the important things are very much the same, but there are things that are quite different. For example, most of the research in the US is done by the PIs (Principal Investigators) and research assistants, at least on the medical school side of things. But here in Europe most of the research is driven by PhD students. Also, in Europe during summer the population of the research unit decreases by 75%, as most people go on vacation for extended periods of time and they're expected to do so. You are interviewing me in the summer [laughs]. I like the quietness of the summer to get work done and then I go on ski vacations in the winter.
NK: Being an expert in the Brain, I am very curious to know: if you were to be a brain region, which one would it be?
TW: What a great question! I think probably the hippocampus because it integrates multimodal information. And that's kind of sitting between multiple disciplines. That's kind of what I do. I actually believe more in connected regions rather than one specific brain region, but I guess I can't call connectivity a region, can I? [Laughs]. Probably another favorite would be an action potential. Potential for action related to Aperture. [Laughs]
NK: Three key ideas for early career researchers?
TW: 1. Think outside the box. Because you should always remember that if we, as senior researchers, had already figured it out, you wouldn't have a position. So if we continue doing things the same way that we're doing, it's possible that we miss something. So make sure that you think outside the box.
2. Persistence. As an example I would say, as a researcher you need to get used to rejection. When your first grant gets rejected and another grant after that, the ones who persist are the ones who are going to be successful.
3. Play well in the sandbox. We're all in this together to hopefully make life and this society better for all of us. So we should all remind ourselves of the real reason why we do science.
NK: Great suggestions. Thanks again for taking time and making it such a great interview.
Learn more about Aperture here.
John Mazziotta is Professor of Neurology, CEO of UCLA Health, and Vice Chancellor of UCLA Health Sciences. He was also a founding member of the OHBM. He co-authored the first book on whole-body, cross-sectional anatomy using CT. He was involved in the first PET studies of normal subjects and of patients with epilepsy and Huntington’s disease. He was the principal investigator of the ICBM brain atlas, a key tool for brain normalisation. We interviewed him as part of our OHBM Oral History series, to find out about the early days of PET, (f)MRI and the inception of OHBM.
Nils Muhlert (NM): Thank you very much, Professor Mazziotta, for joining us today. I'd like to start by asking you about your background: Why and how did you become interested in neuroimaging?
John Mazziotta (JM): Well, I wanted to be an architect. That didn't work out because I spent a lot of time in Manhattan with architects when I was an undergraduate, and they didn't seem very happy. I liked science and went into a lab where I was doing early molecular biology, and that was interesting but very isolating. I thought, “Well, I'll go to medical school.” I hated medical school, memorizing bones and things of this sort. Ultimately, I met a neuroscientist in the medical school. The school also had a very active biophysics department, which was building the first CT scanner that could image outside of the head. This was the mid-1970s. I got involved in that project and we physically built that machine, soldering wires. We had a functional scanner that worked anywhere in the body.
I decided I would become a neurologist, moved to Los Angeles and UCLA and immediately met the group that had moved from Washington University in St. Louis. They had been involved in the development of PET and all worlds connected, so I got involved in research with PET and then MRI.
NM: Should I ask who the neuroscientist was, that you met during your med school years?
JM: The neuroscientist in my medical school years was a woman named Betty Hamilton. And ironically, Peter Fox and I were in the same medical school class, and there were other neuroimagers in medical school with us. It was an interesting coincidence. The seeds were set there.
NM: And what was it about neuroscience and neuroimaging that really attracted you?
JM: Clinically, it was the approach to the patient, the gathering of the data, the deductive logic of coming to a diagnosis and then having confirmatory tools like imaging that could show you structural and/or functional abnormalities, confirming whether that diagnosis was accurate or not.
NM: And what do you see happening with neuroimaging in the US nowadays?
JM: Obviously there's been a big shift to MR-based strategies rather than PET. But now we're starting to see a resurgence of PET to study patients with neurodegenerative and psychiatric diseases. In the mid-1980s I was confident that psychiatric disease would be completely understood by PET because they were chemical disorders. You could give somebody chemicals and they would become psychotic or delusional or hallucinate. And you could take a patient who had mental health issues and give them medications and they would get better: it was a chemical issue, no structural changes. But after 40 years of scientists, very diligently, pounding away and making ligands and imaging patients, we still have not provided, to my mind, actionable insights in mental illness through imaging.
NM: And do you think there will be breakthroughs over the next 20 to 30 years? Is there anything you'd peg your hopes on?
JM: I’m just as confident now as I was then [laughs]. It's a tough problem and a very expensive problem. When developing a new ligand it might be useful if, as a community, we encouraged key places to be the factories of ligand development. It's so expensive, and like CERN or other high energy physics projects, talented scientists could go there, do the work and then return to their home institutions. Once proven, the recipes for the ligands could be distributed widely.
NM: That's an excellent idea, to really pool the resources and expertise. Going back to your own work, what research or other contributions are you most proud of?
JM: Always a humbling question. My research with PET began with studying normal individuals. We studied the visual system. Our first papers were on visual responses to different types of stimuli, sensory deprivation, auditory stimulation, and a variety of states in normal subjects. That was very exciting. Every time we did a scan, it was a new day. You never knew what you were going to see. It was an exciting time.
On the clinical side, I was involved in the first PET studies of patients with epilepsy, the first studies of patients during seizures, Huntington's disease and depression. Combining genetics with imaging, in the Huntington study, was a good example of developing probabilistic approaches to individuals who are at risk, then testing them genetically and seeing the outcome and looking at the scan. Those were also exciting times, in our collaborations with what was then the Hammersmith group in London; ultimately with Karl Friston, Richard Frackowiak and the others, Terry Jones in that group, and our colleagues at UCLA with Mike Phelps and Henry Huang, myself and some of the fellows that I had, Roger Woods and Scott Grafton. We carried out a lot of the early work on blood flow measurements with that combined group.
NM: And many neuroimagers will know of the ICBM brain atlas. You had a pivotal role in the development of that. How did it come about?
JM: That was a painful part of history. We were all struggling with how to normalize data, in our own labs, among individuals within a modality and then across modalities, and then ultimately, to pool data from multiple different laboratories. And it was clear that this was an enormous problem, and it was unlikely that one laboratory on its own would solve it.
I invited to Los Angeles, where I was then working with Arthur Toga, a group of individuals from around the country and around the world who were doing this kind of work and were frustrated by how difficult the problem was. We all worked together for a couple of days: Peter Fox, Alan Evans, the people I mentioned, some individuals from Europe and Asia. We were all natural born enemies. We were all vying for the same funding dollars, all doing a lot of similar work. There was a lot of posturing and opening remarks. In the end, we emerged from those two days saying the only way to really solve this problem is to do it together. And if we did it together, we would actually get it done and would emerge with something that makes sense.
Later, we teamed up with other groups, particularly the group from Jülich, who did all the amazing work on the histology. It was a big program. Arthur Toga perfected the sectioning of human heads; that data went to Germany, to Karl Zilles and Katrin Amunts and their talented teams, and went on from there. It's still going on today with the BigBrain project and the collaborations between McGill and Jülich. So that was another satisfying addition to the contributions by the group. That group continues to meet, always in Hawaii, always in the first week of November; this year will be the first exception in something like 26 years.
NM: It's a great example of fruitful collaboration, not just between national institutions, but across the globe. And you were also involved in the creation of OHBM. What was that like? And what did you imagine OHBM would be like?
JM: It was less about what it could be, but what it would eliminate for us. We were at a meeting in San Antonio that Peter Fox hosted. I was there, Arthur Toga, Alan Evans, Bernard Mazoyer. We were bemoaning the fact that we were all going to all these meetings every year, the cerebral blood flow and metabolism meeting, this meeting, that meeting, meetings about MRI, meetings about PET, meetings about everything. It didn't make any sense because most of those meetings had nothing to do with what we were interested in, which was trying to map the human brain.
Sitting around in a little conference room, the idea emerged: "Well, why don't we just have a meeting about what we're interested in and not have to go to all these other meetings?" So everybody said, "Yeah, that'd be great. But it's going to be such a pain to do, and who's going to front the money?" But people were compelled to do it just because they were so frustrated with the current situation. And when we thought about how much money was wasted sending fellows and students and everybody else to all these other meetings that were low yield, we decided we'd roll the dice. Bernard decided he would really roll the dice and put a deposit on a center in Paris. And the rest is history.
NM: And how did you think it would evolve over the years?
JM: Well, we didn't know if it would work at all! So initially, it was a matter of trying to stabilize the finances to the point where we could at least be confident that if we advertised the next year, there would be enough funding for it to happen. The more grandiose among us envisioned something more like what the reality is now: that there would be books and journals and a subdiscipline of neuroscience that was basically doing this. We also felt that it would be important to have educational components: that a graduate student in psychology might not necessarily be exposed to the physics of the machines that were generating the raw data, or that a mathematician who was doing modeling wouldn't necessarily understand neurophysiology and neuroanatomy. And so once it was clear that we could sustain the meeting, then the next part was to make it something that had value not only as an information exchange but also as an opportunity to provide training to the field.
NM: And thinking of your own involvement with OHBM. What have you found most rewarding about that?
JM: Walking into the rooms with the posters or the lectures and seeing all those people, the energy of the students and the fellows, people who are seeing their careers and their professional motivation take shape, joined by like-minded colleagues in the same place. That's very satisfying to see. From five people sitting in a room to thousands of people who are all thinking about these problems and trying to make progress.
NM: Definitely! Are there any experiences you've had attending OHBM that really stand out?
JM: One of the things that we did for a while was that Richard Frackowiak and I would summarize the whole meeting at the last session in 30 minutes. So we had to somehow boil down 1000 presentations into 30 minutes. That was always challenging and fun. And I have to say it does focus your attention on the content of the meeting, rather than just cruising around and talking to people and reading a few posters. For the first 10 years, we had an “L & L party” with people from the Los Angeles and London labs. We would jointly fund that party, held on one night of the meeting, and we'd bring the entire lab. We had them on boats, in nightclubs, in bars - they were all over the place. That's evolved into the social events that are on the different nights of the meeting.
NM: So that's been there since the very start?
JM: The first one.
NM: Brilliant. And have you seen any changes in how the meetings run or different angles coming into it that perhaps weren't there at the start?
JM: Well, when things are small, they're easy, everybody's in one room. And as things grow, they become more specialized and subcategorized. Then you have to pick and choose which things you want to attend or not. That's a natural evolution of any process like this. But I appreciate the fact that the named lectures and the other components have been maintained as unifying parts of the meeting. Attendees really look forward to those sessions and everybody's there.
NM: And a final question then. Not an easy one to end with! But, what do you see as the future for neuroimaging?
JM: [long pause] Notice the long pause [laughs].
The future is bright. But the pressure will be there to deliver on the clinical side: truly valid biomarkers from imaging. We don't have any of those yet. If you really look hard and ask the question - if we have this imaging result, then this is the diagnosis - such cases are few and far between in functional imaging. That will be the question that will be put out there: can you do that, and if not, then why should we fund it? So that's one.
The second issue is the one I mentioned earlier: insights into purely functional and chemical illnesses like mental illness. That's a big lift, and an important one. And a similar one will be neurodegenerative diseases.
Then the most profound and the most interesting question is: how does the brain work? I would envision, in the future, that through techniques we don't know about today, or in some cases extrapolations of the physics of MRI, we'll get to the point of actually being able to image neural conduction and synaptic activity. It wouldn't be microscopic, but in large ensembles. With improved temporal resolution, we'll be able to understand the choreography of signaling in the brain. Once that level is achieved and massive data can be managed in four dimensions, then the insights will come more rapidly.
NM: So there's bridging of scales.
JM: Yes, both spatially and temporally.
NM: Fantastic. Professor Mazziotta, thank you so much for taking the time to speak to us. It's been a pleasure hearing about your history with OHBM. Thank you very much.
JM: Thank you.
Authors: Katie Williams, Ilona Lipp, Mark Mikkelsen
Infographic: Roselyne Chauvin
Expert editors: James Kolasinski, Paul Mullins
Newbie editors: Curtiss Chapman, Yana Dimech
The noninvasive imaging tools that we Human Brain Mappers apply are most often used to study brain structure and function. Neurotransmitter systems are something that we are aware of and try to take into account when coming up with hypotheses or interpreting our findings, but we rarely make them the direct subject of our investigation. Most of us have probably heard of GABA (gamma-aminobutyric acid) as the principal inhibitory neurotransmitter used by many interneurons. That we can also measure GABA in vivo with MR spectroscopy (MRS) is maybe less widely known. While this biomedical imaging tool opens many doors for neuroscience, measurement of GABA using MRS is not broadly used yet, possibly because special sequences and analysis methods are needed. At the OHBM Annual Meeting in 2019, for the first time, an educational session on GABA MRS was held. This post summarizes the most important things you need to know if you’re considering GABA MRS for your research.
Why should we care about GABA?
As GABA is an inhibitory neurotransmitter, an intuitive way to think about it is that it can regulate neuronal firing, allowing the establishment of complex neural circuits and ensuring that the brain does not become “overactive”. This intuition is in line with the fact that drugs that act on the GABAergic system are traditionally used to treat anxiety and pain. In her video, Caroline Rae (from the beginning) emphasizes that when considering GABA, one should also consider glutamate, the excitatory neurotransmitter. She explains how GABA and glutamate are actively coupled at the synapse (min. 10:55). The ability of GABA to regulate glutamatergic firing makes it a neurotransmitter that is likely involved in many biological processes, one of them being brain plasticity, or the brain’s ability to structurally react to new situations (as explained by Charlotte Stagg from min. 1:55).
Charlotte explaining the role of GABA in brain plasticity
How can we measure metabolite concentrations with MRS?
To understand the physical principles that give rise to MRS, it is helpful to take a few steps back to the basics. Robin de Graaf succinctly reviews (from min. 2:30) how nuclear magnetic resonance (NMR) in its essence is all about the separation and detection of frequencies. In conventional MRI, we create a signal by interacting with the resonance frequency of a nucleus of interest – most typically hydrogen (the proton), which resonates at around 127 MHz at 3T and 298 MHz at 7T. MRS differs from typical MRI because it relies on something called the chemical shift effect. What does chemical shift mean? Depending on the chemical composition of a given molecule, the constituent protons experience different electronic shielding effects, resulting in slight differences in their resonance frequencies, which translates into many peaks appearing in an MR spectrum rather than a single clean signal peak at the free proton’s resonance frequency. This is called chemical shift, because the biochemistry and environment of the molecule lead to a slight shift of its signal in the frequency domain. So, if one were to effectively “zoom in” on the hydrogen proton signal at 298 MHz, for example, we would see that the signal is actually composed of many smaller peaks in the range of a few hundred Hz surrounding this frequency in the MR spectrum. This signal reflects all MR-visible hydrogen-containing molecules in the sample. Since the water signal (coming from the hydrogen protons in the water molecule) is so much stronger in intensity than these other peaks, frequency-selective water suppression pulses are integrated into MRS pulse sequences to help reveal the less intense peaks that we are interested in. After showing us a zoomed-in shot of several peaks (screenshot below), Robin explains (from min. 4:00) how electronic shielding and chemical shift lead to consistent, exact locations of the peaks of different chemicals – or metabolites – in the MR spectrum.
As resonance frequency depends on the field strength, Robin goes on to describe how moving away from frequency-based units to a parts-per-million (ppm) scale allows metabolite measurements to be more easily compared across field strengths (from min. 6:52).
Robin showing us resonance frequency spectra of different nuclei
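To make the ppm logic concrete, here is a minimal sketch (our own illustration, not from the talk; the function name and the example frequencies are assumptions) of converting a frequency offset in Hz into the field-independent ppm scale:

```python
def hz_to_ppm(offset_hz, spectrometer_freq_mhz):
    """Convert a frequency offset (Hz) from the reference into ppm.

    One ppm is one millionth of the operating frequency, so dividing the
    offset in Hz by the frequency in MHz gives the shift in ppm directly.
    """
    return offset_hz / spectrometer_freq_mhz

# The same peak lands at the same ppm value at any field strength:
# an offset of 383.4 Hz at 3T (~127.8 MHz) and 894 Hz at 7T (~298 MHz)
# both correspond to ~3.0 ppm (illustrative numbers).
print(hz_to_ppm(383.4, 127.8))  # ~3.0
print(hz_to_ppm(894.0, 298.0))  # ~3.0
```

Because the Hz offset scales with the operating frequency while the ppm value does not, this is exactly why ppm is the natural unit for comparing spectra across field strengths, as Robin explains.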
Ok, what now?
When combined with a pulse sequence with spatial localization, such as MEGA-PRESS or MEGA-(s)LASER (which Robin describes later, min. 17:56), a metabolite spectrum can be acquired from a volume of interest in the brain. The chemical shift effect holds true for all MR-detectable nuclei and, as such, for a long list of metabolites composed of those nuclei, including GABA. For this reason, many challenges that we face in measuring GABA concentrations apply universally in MRS. Clever use of relaxation properties and nuclear coupling effects give us a few solutions, however.
Why is it challenging to measure GABA concentration with MRS?
If specific metabolites like GABA can be measured with MRS, why are we not using it in every neuroimaging study? To be completely forthcoming, there are a number of challenges in conducting successful MRS measurements. Luckily, there are options to deal with each of them. In spectroscopy, the signals we detect are very weak, so we have to run several hundred repeated acquisitions to obtain an acceptable averaged spectrum for quantification. Another way to boost SNR is to acquire spectra from larger voxels. Choosing an extra-large voxel size (by MRI standards) for higher SNR, however, is not an ideal solution, because tissue composition within a voxel is heterogeneous and GABA concentration varies across tissue types. Ashley Harris explains that it is important to correct your measure for its tissue composition (from min. 8:53), because of known differences in GABA concentration between gray and white matter.
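As a rough illustration of the tissue-composition correction Ashley describes, here is a simplified sketch (our own toy version, not her exact method; it assumes negligible GABA signal from CSF and white-matter GABA at a fraction alpha of the gray-matter level, with alpha = 0.5 as an illustrative default):

```python
def tissue_correct_gaba(gaba_measured, f_gm, f_wm, f_csf, alpha=0.5):
    """Simplified tissue correction for a measured GABA estimate.

    Assumptions (hedged!): CSF contributes no GABA signal, and white-matter
    GABA is alpha * the gray-matter level. The fractions f_gm, f_wm, f_csf
    are the voxel's tissue volume fractions and should sum to 1.
    """
    assert abs(f_gm + f_wm + f_csf - 1.0) < 1e-6
    return gaba_measured / (f_gm + alpha * f_wm)

# A voxel that is 60% gray matter, 30% white matter, 10% CSF:
print(tissue_correct_gaba(1.0, 0.6, 0.3, 0.1))  # 1.0 / 0.75 ≈ 1.33
```

Two voxels with the same measured GABA but different gray/white proportions would yield different corrected values, which is why reporting uncorrected values can confound group comparisons.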
Given the low SNR of metabolite signals, single-voxel MRS acquisitions have long been the norm. This is the reason that spectroscopy is not always categorized together with conventional MRI as a true brain imaging technique. However, using specialized pulse sequences, it is possible to acquire data from more than one region of interest, using dual-voxel MRS, for example, which Muhammad Saleh describes in his video (min. 20:40). It is worth mentioning here that MRS imaging (MRSI, i.e., spatially resolved MRS) approaches do exist, in which multiple voxels are acquired from a volume; for example, 3D MRSI can reach whole-brain coverage with a 14 × 14 × 12 voxel matrix and a 200 × 200 × 170 mm field of view (2.89 mL nominal voxel resolution). Technological advances to improve these methods are continually occurring.
What is the problem with spectral overlap?
What might be considered the biggest challenge for accurate metabolite measurements is spectral overlap. Given that so many biologically relevant molecules contain hydrogen protons, many with similar hydrogen structures, their signals will overlap, making it hard to get an accurate quantification of individual peaks that we care about, as Robin describes (from min. 9:30) in his video. If we cannot isolate the GABA peaks, then we cannot quantify them easily!
In addition to the signals of identifiable metabolite peaks like creatine and glutamate that overlap with the GABA peaks in the spectrum, an underlying assortment of signals of broad peaks originating from macromolecules is present. (Here, macromolecules refer to a host of large molecules, including proteins, that differ from the smaller molecular structures such as GABA). The macromolecule (MM) signal is a biologically generic signal detected by in vivo MRS that usually consists of about ten peaks spread across the acquired spectrum. The MM signal can be attenuated using several acquisition solutions, which we describe below. However, it is important to note that the MM signals cannot be 100% removed, and their contribution is always present, to some extent, in a GABA measurement. There are several different options to approach spectral overlap, including moving to a higher field, like 7T, which improves the spectral resolution, meaning that the peaks are more spread out, and reduces the amount of overlap that occurs between them (an expensive solution, Robin notes, min. 10:50). Another possibility is to take advantage of T1 and T2 relaxation differences of different metabolites and use inversion recovery and spin-echo sequences in your experiments (as Robin describes, min. 11:46).
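The relaxation-based trick Robin mentions relies on the fact that, after an inversion pulse, longitudinal magnetization recovers as Mz(TI) = M0(1 - 2e^(-TI/T1)) and crosses zero at TI = T1·ln 2, so a signal with a given T1 can be nulled at the right inversion time. A small sketch of that null-point arithmetic (the T1 values below are purely illustrative, not measured ones):

```python
import math

def null_inversion_time(t1_ms):
    """Inversion time (ms) at which a signal with this T1 crosses zero.

    From Mz(TI) = M0 * (1 - 2 * exp(-TI / T1)) = 0, assuming full
    relaxation between shots, the null point is TI = T1 * ln(2).
    """
    return t1_ms * math.log(2)

# Macromolecules have much shorter T1s than small metabolites, so an
# inversion time near the MM null point attenuates the MM signal while
# most of the metabolite signal survives (illustrative T1 values):
print(null_inversion_time(400))   # MM-like T1: null near ~277 ms
print(null_inversion_time(1300))  # metabolite-like T1: null near ~901 ms
```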
What can we do about this spectral overlap problem?
By far the most popular method for dealing with spectral overlap, and the most discussed technique for GABA quantification in the educational session, is the spectral editing approach. The same physical principles of nuclear interactions that make tiny changes to local magnetic environments and allow us to resolve chemical shifts (i.e., to obtain spectra) also offer a solution to spectral overlap. Nuclei that are chemically bonded within the same molecule, and thus generate multiple peaks for that molecule, are scalar-coupled, which, in quantum mechanics terms, means that manipulating one signal of a molecule also modulates the other signals of the same molecule. This phenomenon can be used to selectively manipulate overlapping signals and acquire the signal of interest. The figure above shows that GABA is composed of three major signals that are scalar-coupled to each other, and that glutamate and creatine have peaks overlapping in some locations. From min. 13:11 in his talk, Robin explains scalar coupling and how frequency-selective inversion pulses can be used during acquisition to modulate the signal of scalar-coupled molecules, but not the uncoupled ones. This is known as “editing” an MR spectrum. Using this technique, one can perform paired experiments, one with and one without the frequency-selective editing pulses, to recover the signal of the metabolite of interest. This technique, known as J-difference editing, is a powerful MRS method for measuring GABA in the brain. It should be noted that while scalar coupling helps us to more specifically acquire our signal of interest, co-editing always occurs, and attention should be paid to which molecules are being inverted. Robin describes a simple pulse sequence for a full J-difference editing experiment, using GABA as an example (from min. 17:56), while Muhammad Saleh speaks extensively in his video about special GABA editing sequences and ways to speed up editing experiments to increase the information extracted from the data acquired.
And that brings us back to the topic of challenges in measuring GABA: applying solutions to acquire good spectra significantly increases scan duration, giving rise to more temporal instabilities in the signal, specifically frequency offsets. Frequency offsets are shifts in the main magnetic field that most often occur either because of heating/cooling of the gradient hardware elements in the scanner or because of bulk participant head motion. In her talk, around 24 minutes into the video, Ashley discusses this problem and how it can sometimes be fixed retrospectively through frequency alignment.
Robin explaining the J-difference strategy of measuring GABA
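The J-difference idea can be illustrated with a toy difference calculation on synthetic peaks (our own sketch, not real MRS data; the sign flip is a crude stand-in for how the editing pulse modulates the coupled GABA signal):

```python
import numpy as np

# Toy illustration of J-difference editing on a synthetic ppm axis
ppm = np.linspace(4.0, 1.0, 1024)

def peak(center, amp, width):
    """A Gaussian stand-in for a spectral peak."""
    return amp * np.exp(-((ppm - center) ** 2) / (2 * width ** 2))

creatine = peak(3.0, 10.0, 0.05)  # large uncoupled peak at 3 ppm
gaba = peak(3.0, 1.0, 0.08)       # small coupled GABA signal underneath it

# OFF: no editing pulse, so GABA hides under creatine
off = creatine + gaba
# ON: the frequency-selective pulse modulates the coupled GABA signal
# (crudely modeled here as a sign flip); uncoupled creatine is untouched
on = creatine - gaba

# Subtracting removes creatine and leaves (twice) the edited GABA signal
diff = off - on
print(np.allclose(diff, 2 * gaba))  # True
```

In real data the edited GABA signal is not simply sign-flipped, but the core logic holds: whatever the editing pulse leaves untouched cancels in the subtraction, and whatever it modulates survives.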
What do I need to consider when setting up a GABA-edited MRS acquisition?
There are some essential questions to answer when setting up a GABA-edited MRS experiment. Of course, the first is where in the brain you want to measure GABA. For hypothesis-driven studies, this will be determined either by the functional neuroanatomy of the aspect of brain function being studied or by the regions implicated in the neuropathology/pathogenesis of a particular brain disorder/disease. It is worth reiterating that, given the low SNR of the GABA signal, the size of the volume of interest will be on the order of cubic centimeters. Thus, one will need to be aware of the limitations on the specificity of where in the brain GABA will be measured. For GABA editing, voxels tend to be around 27 mL (e.g., 3 × 3 × 3 cm) in volume to attain reasonable SNR. In the MRS literature, the voxel size is often reported as a volume, as this is the relevant factor for SNR.
Oh, such large voxels!? Can I not just go to 7T and get a better spatial resolution?
If it is possible (and desired) to perform MRS experiments at ultra-high field (>3T), then the benefits of a higher field strength can alleviate some of the challenges of MRS acquisitions. Aside from the associated increase in inherent SNR of metabolite signals, and the already mentioned increase in spectral resolution (the separation of peaks in the spectrum), high-field measurements allow improved selectivity of editing pulses. These advantages of ultra-high field MRS make it more feasible to detect GABA without using editing. Nevertheless, editing at 3T remains the most commonly used approach for measuring GABA that you will encounter in the literature.
Another consideration for increasing SNR is scan duration. In edited MRS, each acquisition is repeated (usually several hundred times) in order to perform signal averaging to improve the SNR of the detected metabolite signals. As Ashley Harris explains in her presentation (from min. 19:43), the question of how many averages are needed (i.e., how many times to repeat the measurement in one scan acquisition, and thus how long to scan) will depend on voxel size, the scientific question being asked, and the region in which you are scanning. Some regions, like the occipital lobe, provide good SNR and therefore allow you to scan for shorter periods. In contrast, other regions, like the temporal lobe, are more challenging to acquire high-quality data in and necessitate collecting relatively more averages.
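Because averaging improves SNR only with the square root of the number of repeats, the time cost of a low-SNR region grows quickly. A back-of-the-envelope sketch (all numbers illustrative, and the repetition time is an assumption):

```python
import math

def averages_needed(target_snr, single_shot_snr):
    """Averages needed to reach a target SNR, since SNR grows as sqrt(N)."""
    return math.ceil((target_snr / single_shot_snr) ** 2)

def scan_minutes(n_averages, tr_s=2.0):
    """Approximate scan duration for n_averages at repetition time tr_s."""
    return n_averages * tr_s / 60.0

# A region with half the per-shot SNR needs four times the averages,
# and therefore four times the scan time:
print(averages_needed(20, 1.0))      # 400
print(averages_needed(20, 0.5))      # 1600
print(scan_minutes(320, tr_s=2.0))   # ~10.7 minutes
```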
Anything else I need to think about?
It is also worth considering the order in which you run your different MRI/S acquisitions in a given scan protocol. When conducting a study, it is quite likely you will be acquiring a variety of scans, such as fMRI, diffusion MRI, and MRS. Sequences that involve rapid switching of gradients (e.g., EPI and DWI) will lead to heating and subsequent cooling of the scanner’s hardware elements. This causes shifts (or drift) in the B0 field (and thus its frequency) that can have a considerably detrimental effect on edited MRS acquisitions, which require frequency stability to ensure the narrowband frequency-selective editing pulses perform as intended. Performing MRS acquisitions before any scans that make use of high gradient duty cycles can help lessen the impact of frequency drift on acquisition performance. Also, the use of prospective and retrospective frequency alignment methods can mitigate the detrimental effects of frequency drift on spectra. Ashley also talks about this in her presentation (from min. 22:30).
How do I know whether the quality of my spectra is good enough?
Several signal artifacts can lead to poor quality of MRS data. An excellent place to start is by reading this paper, which describes in detail the kind of artifacts one would see in corrupted MRS data. A full description of artifacts is beyond the scope of this blog post. Still, one thing in particular that can significantly degrade the quality of your spectra is participant motion. The comparatively longer scan times of edited MRS acquisitions, unfortunately, provide more opportunities for a participant to move and worsen spectral quality. Some simple steps that can be taken to prevent motion artifacts include emphasizing to participants the importance of remaining as still as they reasonably can when they hear the scans running and acquiring structural/fast localizer images and MRS data consecutively so that voxels are placed as accurately as possible given participants’ current head position.
The act (art) of rating the quality of MR spectra can be challenging to those new to MRS. Since MRS is methodologically distinct in several important ways from MRI, quality analysis may be less intuitive to new users who are more familiar with the latter technique. Typically, a good approach to quality analysis (when possible) is to consult a colleague (internally or externally) who has experience with MRS. A review of 2–3 pilot datasets can go a long way to establishing the predicted quality of MRS data for a proposed study. When an investigation is underway, it is highly beneficial to review data as they are collected. Continual reviews of data can prevent situations where a series of datasets have been acquired with significant artifacts that would lead to their removal from further analysis, which could potentially seriously undermine the success of a study.
Ok, so now I have a spectrum, but how do I quantify GABA?
Once you have acquired some GABA-edited MRS data, you can quantify GABA from it. There are several software analysis packages available that can quantify GABA from edited MRS data. These include Gannet, jMRUI, LCModel, TARQUIN and, most recently, also FSL. Each has its own strengths, weaknesses, and particular learning curve, but each will allow you to derive a quantified measurement of GABA from your MRS data. The GABA signal is quantified either in the time or (more commonly) the frequency domain, where either the amplitude or the area of the GABA signal is used to determine the concentration (as concentration is proportional to signal amplitude or area). While a description of each package is beyond the scope of this blog, readers are advised to read the following papers for further information:
Edden RAE, Puts NAJ, Harris AD, Barker PB, Evans CJ. Gannet: A batch-processing tool for the quantitative analysis of gamma-aminobutyric acid-edited MR spectroscopy spectra. J Magn Reson Imaging. 2014;40(6):1445-1452
Provencher SW. Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magn Reson Med. 1993;30(6):672-679
Stefan D, Cesare F Di, Andrasescu A, et al. Quantitation of magnetic resonance spectroscopy signals: the jMRUI software package. Meas Sci Technol. 2009;20(10):104035
Wilson M, Reynolds G, Kauppinen RA, Arvanitis TN, Peet AC. A constrained least-squares approach to the automated quantitation of in vivo 1H magnetic resonance spectroscopy data. Magn Reson Med. 2011;65(1):1-12
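To illustrate the area-based quantification idea common to these packages (a toy sketch of our own, not any package's actual fitting routine), one can integrate the edited GABA+ peak and express it relative to a reference peak, such as creatine from the unedited spectrum:

```python
import numpy as np

# Synthetic spectra on a ppm axis (illustrative, not real data)
ppm = np.linspace(4.0, 1.0, 2048)

def gaussian(center, amp, width):
    return amp * np.exp(-((ppm - center) ** 2) / (2 * width ** 2))

diff_spectrum = gaussian(3.0, 1.0, 0.08)  # edited GABA+ peak at 3 ppm
off_spectrum = gaussian(3.0, 10.0, 0.05)  # creatine reference at 3 ppm

def peak_area(spectrum, lo, hi):
    """Riemann-sum area of the spectrum over a ppm window."""
    mask = (ppm >= lo) & (ppm <= hi)
    dppm = abs(ppm[1] - ppm[0])
    return spectrum[mask].sum() * dppm

# Concentration is proportional to peak area, so a common output is the
# ratio of the GABA+ area to the reference area:
ratio = peak_area(diff_spectrum, 2.8, 3.2) / peak_area(off_spectrum, 2.8, 3.2)
print(ratio)  # close to the underlying amplitude-times-width ratio of 0.16
```

Real packages fit model lineshapes rather than integrating raw windows, and layer relaxation and tissue corrections on top of this basic ratio, but the area-to-reference principle is the same.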
Ok, so now I have a GABA concentration measure. But what does this measure mean?
There are two things to consider. First is the specificity of the GABA signal we are detecting. As Ashley describes (from min. 3:25), the GABA signal is contaminated by a co-edited MM signal that leads to the observed GABA signal at 3 ppm being ~50% MM. For this reason, it is conventional in the field to define edited GABA measurements acquired by standard editing as “GABA+” measurements, to acknowledge the MM contribution. The majority of studies in the literature applying GABA-edited MRS will have acquired measurements of this sort. It is, therefore, important to recognize this limitation when setting up an experiment (it may impact the interpretation of your findings). Alternatively, one may implement an MM suppression technique that removes the MM signal underlying the GABA+ signal so that measurements are a “purer” measure of GABA. However, this comes at the cost of reduced SNR (from min. 5:00).
Ashley warning us that we are not only measuring GABA
The second thing to consider is that MRS measurements of GABA are not direct measurements of neuronal inhibition. In her video, Caroline describes that there are actually four types of inhibition (from min. 8:43), several metabolic pathways for GABA, and multiple GABA receptor types (from min. 11:49). Based on the intuitive way of thinking about GABA as an inhibitory neurotransmitter, one may expect to find negative relationships between GABA levels and brain activity, such as that measured with fMRI (also see this paper for guidance on how to form and test hypotheses about the relationship between neurochemistry and activity). However, when it comes to the energy expenditure and metabolism that underlie functional imaging measures, such as BOLD signal changes, things are not so simple (see this paper showing that a relationship between GABA and BOLD is not easy to find). Caroline explains how excitatory and inhibitory activity together can either increase or decrease energy metabolism, depending on the context (min. 16:36); moreover, GABA can directly modulate blood flow (from min. 22:50). Therefore, the interpretation of GABA levels measured with MRS is far from straightforward. Caroline (from min. 23:20) points out that the measures reflect neurotransmitter and metabolic pools, that they are dependent on brain energy and activity, and that they could reflect tonic inhibition. As head motion, different types of medication, and the tissue composition of your voxel can all have an impact on the outcome measure, Nicolaas (from min. 27:03) recommends considering these confounding factors in your analysis and data interpretation. Additionally, menstrual cycle and time of day have been found to be potential influences on MRS-measured GABA concentration.
Does the uncertainty of interpretation not mean that it is pointless to do GABA MRS?
We understand that the difficulty in interpretation may be off-putting. But at this point, we want to remind you that most of the imaging measures we look at are indirect - think of BOLD as a measure of neural activity, for example. These indirect measures are still useful for inferring something about a clinical condition and, in combination with other methods, for developing a more holistic picture of what is going on. In his video, Nicolaas gives a number of examples of how GABA MRS has been used in clinical research, such as in neurodevelopmental disorders (from min. 7:46), depression (min. 9:44), personality disorders (min. 11:34) and schizophrenia (min. 13:40). Due to its role in learning and plasticity, GABA MRS has also been used in healthy populations. In her video, Charlotte Stagg provides some examples of how GABA, measured by MRS, changes in perceptual learning (from min. 10:00), overlearning (from min. 16:43), and learning how to juggle, as an example of long-term learning (from min. 20:06).
Ok, so if I do want to start using GABA MRS in my research, how can I learn more?
A good place to start is to read these overview/consensus papers:
Bogner W, Hangel G, Esmaeili M, Andronesi OC. 1D-spectral editing and 2D multispectral in vivo 1H-MRS and 1H-MRSI - Methods and applications. Anal Biochem. 2017;529:48-64
Harris AD, Saleh MG, Edden RAE. Edited 1H magnetic resonance spectroscopy in vivo: Methods and metabolites. Magn Reson Med. 2017;77(4):1377-1389
Mullins PG, McGonigle DJ, O’Gorman RL, et al. Current practice in the use of MEGA-PRESS spectroscopy for the detection of GABA. Neuroimage. 2014;86:43-52
Another excellent resource is Robin’s book (which can be downloaded if your institution has access): In Vivo NMR Spectroscopy: Principles and Techniques
Finally, the MRS community has recently begun assembling a curated collection of resources for data acquisition and analysis in the form of MRSHub. The forum is a great place to pose questions that can be answered by experts.
by Nikola Stikov
As we are getting ready to announce the 2020 OHBM Replication Award winner, here is a brief flashback to 2019 and our interview with Richard Dinga from the Department of Psychiatry at the Amsterdam University Medical Centers in the Netherlands. Richard led the effort to replicate a study published in Nature Medicine in 2017 about the relationship between resting state connectivity and the neurophysiological subtypes of depression.
In the lead up to the OHBM Annual Meeting, I had the pleasure of speaking to one of the keynote speakers, Dr. Biyu He, an Assistant Professor at New York University. Dr. He has made many valuable contributions to the field of neuroscience, combining diverse imaging methods and analytical techniques to tackle big questions relating to perceptual processing, spontaneous activity and consciousness in the human brain.
Rachael Stickland (RS): Thanks again for joining me. It's nice to - virtually - meet you.
Biyu He (BH): Pleasure to meet you as well.
RS: I'm getting used to having many video calls every day now. I'm sure you are as well. How have recent months been for you, adapting to working remotely and only connecting to most people virtually?
BH: It's been okay. I miss the face to face interactions with people. But I think we've been very adaptive in my lab. As you know, in human brain imaging, we do a lot of data analysis. So we have been working on reading, writing and data analysis. And I think we've been able to weather the strange situation we live in pretty well.
RS: You're currently based at New York University (NYU) as an Assistant Professor in the Departments of Neurology, Neuroscience & Physiology and Radiology. Do you mind telling me about your research path and your route into science?
BH: Sure. I was a biology major in college, and really liked maths and physics when I was young. I wasn't sure what I was going to do in college initially but once I found neuroscience I was immediately hooked. It is just so absolutely fascinating. I felt like I couldn't ever be bored again. And it's also one of the most interdisciplinary fields in science. It's challenging and fascinating and very, very intellectually engaging. I did my PhD in neuroscience at Washington University in St. Louis. From there, I was looking for postdoc positions at the end of my PhD and unexpectedly got offered two positions to set up my own lab. One at the National Institutes of Health (NIH) and one at the University of Konstanz in Germany. I decided to go to the NIH and spent about five and a half years there. It was a wonderful time — I learnt new techniques, made new friends, found new mentors, and started a new line of research, which is what I'll be talking about in my [OHBM] keynote talk. Then, I moved to NYU a few years ago.
RS: You mentioned how neuroscience is very interdisciplinary. That might be why it’s hard to explain what we do! If a non-scientist asked you what your research is about, and also why it's important, what would you say?
BH: Broadly speaking, I’m trying to understand how the human brain generates conscious awareness and conscious experiences. And how neural mechanisms underlying conscious awareness differ from, and interact with, unconscious processing. From decades of research in psychology, we know that sensory input impinging on the brain can be processed by the brain consciously, giving rise to all the experiences that we enjoy, but also unconsciously. So things that you don't consciously perceive can nevertheless influence your behavior. We don't really know what neural mechanism gives rise to conscious experience and how that differs from unconscious processing. Understanding the neural underpinnings of these processes and their differences is very important for a lot of clinically and societally important questions. For example, we'll be able to better treat disorders of consciousness, including minimally conscious states and vegetative states, as well as many clinical conditions with disordered perceptual awareness, such as hallucinations in schizophrenia, tunnel vision in autism. These are cases where you have disturbed conscious perception. In addition to applications in the clinical and societal domains, addressing this question also satisfies a fundamental human curiosity that is ‘Who are we? Why are we sentient beings? How are we different from robots?’
RS: That’s fascinating. I think scientists and nonscientists alike find the topic of consciousness very interesting. So do you think that fMRI has a key role in helping us understand consciousness?
BH: Absolutely. It's the best method for non-invasively measuring whole brain activity and finding out where in the brain some type of information is. In my mind, it is especially powerful when we combine fMRI with other techniques with higher temporal resolution, like MEG, ECoG or EEG. In human brain imaging, we have a lot of complementary techniques that are very powerful and can give us a view of whole brain activity or large-scale brain network activity, which you could say some of the more traditional animal research techniques haven't been able to get at. But, obviously, there's a lot of push to do large-scale simultaneous recording of many neurons and across many brain areas in animal models now as well.
RS: So your own research combines many of these techniques you just mentioned - invasive and non-invasive methods of studying the brain, including many different human neuroimaging methods. What are the main challenges with integrating such diverse methods, in terms of the experiments themselves but also in the interpretation of findings?
BH: Probably the main challenge is to grasp a lot of literature that's grounded in different techniques. When I was a PhD student, I realized that for the same question there are parallel literatures, depending on whether you use fMRI or EEG/MEG, and the insights are different. The questions and the debates people care about are also different. Each technique is like a window into the brain with its own vantage point. So if you only look through that one window, your field of view is somewhat limited. Combining the knowledge and the insights from multiple techniques to understand the same biological question can provide a much broader view, and you can get at the mechanisms better. Ultimately, we want to understand the mechanisms of how something works in a computational sense: how do neural circuits do the information transformation that allows certain perception and cognition to happen? And for that reason, simply mapping where or when would not be sufficient. We need to combine the insights from these different angles to build a full answer that addresses the mechanisms.
RS: Yeah, that makes sense. So, non-neuroscientists may be surprised just how much our prior knowledge and experience can shape how we perceive something in the present moment, and your research has advanced the scientific understanding on this topic. Related to that, what scientific finding have you found most surprising in your career? Has there been something that particularly surprised you about the brain?
BH: What you just mentioned was a finding that was actually very surprising to me. When my lab and I made the discovery, we literally scratched our heads for several months before things started to make sense. You're absolutely right that past experiences and prior knowledge have a profound impact on perception. And it's very interesting because there are certain clinical disorders, including schizophrenia, autism, and PTSD, where we know that this process is abnormal. There has been a lot of behavioral and neuroscience research done on this topic. What was really surprising in our findings was the spatial extent of the prior knowledge's impact on perceptually relevant processing across the brain. It used to be thought that visual perception, for example, is basically solved by visual regions. But what we found was that even the really higher-order regions in the brain, including the so-called default network (which is the most remote from sensory input and sits at the apex of the cortical hierarchy), are involved in this process of prior knowledge guiding visual perception. It's not just that their activation magnitude changes, but their activation pattern changes as well. The voxel-wise activity pattern in those regions reflected the content of prior knowledge and the content of perception. So, that was very surprising. I think, in retrospect, it made sense because this process of prior knowledge guiding perception really requires many different brain networks to work together, from those processing sensory input to those mediating memories. We are still working on the exact mechanisms involved in this. But in the broader picture, it suggests that in real-world vision, real-world perception, where past experiences continually guide our perception, much more of the brain might be involved than we initially thought.
RS: Your research has brought new insights into the best ways to measure, categorize and model brain activity. Moving forward, what do you think are the most important questions that need addressing, or the most important technological advances, in order to progress understanding in your field?
BH: I have two thoughts here — one is broader than the other. The first one is that we need to integrate resting state approaches and task-evoked approaches. There's a huge amount of insight that has been learned, and to be learned, from both approaches. But each approach alone obviously won't be able to resolve how the brain works. I think we have made a lot of progress with both of those approaches, but exactly how we integrate the insights and their analysis methods, that is something that has a lot of room to be developed in the coming years. For example, related to my research topic, conscious perception: I don't think a system without spontaneous activity will have conscious perception; I think it will solve perceptual tasks, but it will not have perceptual awareness. Currently, we have a wonderful, beautiful field of knowledge based on resting-state studies but there is a gap between these insights and what we know about the neural mechanisms underlying perception and cognition. I think at the junction between those two fields, there is a lot of progress to be made.
And the second is something that I alluded to earlier (I think this is where the field is already going), which is to go beyond the mapping of where and when to get at the computational mechanisms. And there are many different ways of getting at the mechanisms — it probably requires leveraging multi-faceted analysis techniques to understand exactly the computational mechanisms as embodied in neural circuits and networks that underlie perception and cognition.
RS: What was the best piece of scientific or career advice you've received? What has helped you to get to the position you are in, carrying out brilliant research?
BH: Thank you. Something that comes to mind is when I was doing my PhD, my PhD advisor, Marcus Raichle, often told us that “Science must be done for its own sake, for any other harvest is uncertain.” It is important to enjoy the science you do. If not, you probably should do something different. That advice has propelled us to pursue questions we are passionate about.
RS: Your OHBM keynote talk is titled “From Resting State to Conscious Perception”. Can you give us a teaser or sneak preview of some of the interesting topics you will cover?
BH: It’s kind of a personal journey of how my scientific career has evolved, and how my work continues to make connections between these two areas. As you can see, from what I alluded to earlier, I think understanding the neural basis of conscious perception requires us to take into account the role of spontaneous brain activity and past experiences that persist through the resting brain. I've been to OHBM almost every year since I was a student, so it's very gratifying for me to be able to tell this personal journey through the different scientific questions I've investigated.
RS: Well, that's great. I look forward to tuning in and hearing it online.
Lee Jollans and the OHBM Diversity and Inclusivity Committee.
Edited by AmanPreet Badhwar
At the 2020 virtual meeting, OHBM will, for the second time, host a Diversity Round Table. This year the round table will feature discussions on the intersection between neuroscience and the Lesbian, Gay, Bisexual, Transgender, and Queer (LGBTQ+) community. The four speakers will outline the specific challenges LGBTQ+ individuals face working in STEM (Jon Freeman), insights into the possible developmental bases of sexuality and gender (Doug VanderLaan), the current body of research into transgender identity and its limitations (Grace Huckins), and the challenges and considerations that are crucial for carrying out good sex and gender research (Jonathan Vanhoecke).
Jon Freeman, New York University (top left), Doug VanderLaan, University of Toronto (top right), Grace Huckins, Stanford University (bottom left), and Jonathan Vanhoecke, Humboldt University (bottom right)
While studies suggest that the percentage of students interested in pursuing a doctorate is significantly higher among LGBTQ+ students (Greathouse et al., 2018), LGBTQ+ individuals have been shown in numerous studies to face unique challenges in STEM. Although specific data about Neuroscience and related fields is lacking (which is part of the problem), LGBTQ+ people are less represented in STEM fields than statistically expected, more frequently encounter non-supportive environments, and leave STEM fields at a high rate (Freeman, 2018). Moreover, one study suggests that more than 40% of LGBTQ+ people in STEM are not open about their LGBTQ+ identity with colleagues (Yoder & Mattheis, 2016). In his talk “LGBTQ Challenges in STEM: The Need for Data and Policy change”, Jon Freeman will outline how bias, harmful stereotypes, and unwelcoming environments can result in LGBTQ+ scientists leaving STEM, and will propose steps and policy changes we can implement to counteract these effects.
With a disproportionately low percentage of LGBTQ+ researchers, and rigid and outdated norms used to assess sex, gender, and sexuality, research about LGBTQ+ individuals has historically suffered from flawed data collection, and oversimplified, inaccurate, or outright harmful framing of research findings. In her talk “Trans Neuroscience: Stuck in 1995”, Grace Huckins will explain how studies examining the brains of transgender individuals are stuck in an outdated paradigm and why it is so crucial that this paradigm change.
Gender and sexuality are complex and interconnected, and attempting to examine them in isolation ignores the lived experiences of LGBTQ+ individuals. Cultural perceptions of masculinity and femininity, and social visibility and acceptance, affect not only how LGBTQ+ people are treated and perceived, but also how research is conducted in different cultural contexts. Doug VanderLaan will describe findings from a neuroimaging study of LGBTQ+ individuals in Thailand, highlighting clues as to the relationship between early brain development, gender and sexuality in his talk “Sexual Orientation and Gender Identity Development: Insights from Thai gay men and sao praphet song”.
Research about marginalized groups by necessity always has a societal dimension – not only regarding the different experience of the world which marginalized individuals encounter, but also regarding the implications that findings might have for policy, stereotypes, and lived experience for the entire society. How can we disentangle ‘otherness’ from sociobiological variety? How do we distinguish brain effects from effects of sociological background? In their talk “The gap between neuroimaging of gender and gender studies of the brain: New perspectives from transgender research”, Jonathan Vanhoecke will outline how brain research in the transgender community provokes sociological questions about sex and gender in other neuroscience fields.
We hope you’ll join us for this topical and thought-provoking roundtable, and we look forward to an interesting discussion!
The Diversity and Inclusivity Committee focuses on a different topic for their symposium each year. Topic and speaker suggestions for upcoming meetings are welcome.
In preparation for OHBM 2020, we talked to Dr Tomas Paus, who will be giving a keynote lecture on Friday, June 26th. Dr. Paus is Director of the Population Neuroscience & Developmental Neuroimaging Program at the Holland Bloorview Kids Rehabilitation Hospital, and Professor of Psychology and Psychiatry at the University of Toronto.
Roselyne Chauvin (RC): Thank you for taking the time to chat with us. In your talk you will be speaking about “population neuroscience and the growing brain.” There are a few ongoing longitudinal big data initiatives, such as ABCD or Generation R. Those projects are now starting to think about the current pandemic situation. On one side, the situation is affecting everyone without discrimination; on the other, government responses create different experiences (from full to partial lockdown, to no restrictions), and of course, individuals show different stress responses. How do you think this might affect longitudinal datasets? And what are the questions that will need to be investigated out of this situation with regard to psychiatry and genetics?
Tomas Paus (TP): You can look at COVID as a natural disaster. There are studies where natural disasters have been used in the past as pseudo-experimental designs, i.e., to study the effects of a perturbation, because in most of our observational studies, we can really only look at associations between x and y and so cannot infer causality. In most cases, we don't know anything about the directionality of those relationships. But natural disasters provide an opportunity to study before and after and try to attribute the observed changes to those events.
A key component in the context of brain development and psychiatric disorders is social distancing and what has happened with social relationships. For children in particular, there are two elements that I think really stand out. One is homeschooling, which, depending on the country, may last for several months. I don't know how it is in the Netherlands, but in Canada, it will last for at least three more months, if not more. And then the other element is the family, so it depends on what's happening at home. Unfortunately, in some cases, that means greater exposure to adversity, adversity as bad as family violence. So the stressor may be huge for some children.
Studies that have acquired detailed phenotypes, whether behavioural or brain phenotypes, on children before the event are in a unique position to go back when it becomes possible and study the change in behaviour or in the brain. Generation R is certainly one such cohort; ABCD is another. There are others. Even birth cohorts whose participants may not be at the most relevant age from the perspective of child development will be able to study the relationship between exposure to COVID-19, events related to the disease, and health in general. Of course, UK Biobank is the biggest one of all, right?
Now, one more thing in terms of children. Unfortunately, we do know that the most vulnerable segment of the population in terms of mortality are older people. And so there will be an increase in the number of grandparents dying. That is again, of course, a highly stressful life event and that will, one way or another, affect those children. Finally, we know already that at the level of mortality, COVID-19 is more frequent in disadvantaged populations, mostly in the context of socio-economic position. So there may also be an interaction between the pre-COVID conditions of those children and COVID-related stress.
RC: You’ve been involved in many different types of big data projects, from the acquisition and study of local communities like the SYS (Saguenay Youth Study, ~1000 adolescents and their parents, from the genetic founder population of the Saguenay Lac St Jean region of Quebec, Canada) to the ENIGMA consortium (Enhancing NeuroImaging Genetics through Meta-Analysis, a worldwide collaboration with more than 40 countries involved). How have you found carrying out these projects, and what advice would you give for those wanting to carry out these big data projects?
TP: It's a very good question and makes me reflect on my own path from the Saguenay to now. Over time I have increasingly become involved with collaborative work in the context of ENIGMA, and CHARGE, the other consortium that we work very closely with. I started this Saguenay study with my wife, Dr. Pausova, and others almost 20 years ago. That gives our team a lot of hands-on experience in carrying out big data projects. We learned what it takes to set up a cohort, to set up the protocol, to carry out quality assurance. All those different steps, on a relatively small scale. Even though 20 years ago, 1000 individuals was a fairly large scale for us. But I think that hands-on experience with a cohort is very, very important once you enter collaborations with others, and also once you start using data that had been produced by others. Of course, in a consortium, you share that experience and that's a currency.
In the CHARGE Consortium we have weekly conference calls. It's amazing how much you learn during one hour given there are between 20 and 40 people on the call. In one hour, we pick a topic, usually a study that is being carried out, and it's discussed from the beginning to the end. You benefit, of course, from the expertise of people who have done many of those studies before. And you benefit from informal expertise that is very hard to get from reading a paper. In the same way that I can share my 20 years of experience with the Saguenay study with this group, every member of the CHARGE group shares her or his experience back. So that's a huge plus.
In these consortia, it's not only about accessing data, you're really sharing knowledge; not only expertise in designing studies and acquiring data, but you’re also learning about the latest in genetics, epidemiology and statistics. So you’re keeping up-to-date with developments across many different fields. That's a huge benefit of working within a consortium.
The last point is about the diversity of the group. The group is diverse not only in terms of the disciplines, but also cultural backgrounds: it includes researchers from different countries, different educational systems. So for us, it means that there is a diversity of perspectives and I think that that's what you want. If you want to create new knowledge, you don't want everyone to have exactly the same background; you want to see things in many different ways and from many different perspectives.
RC: That also makes me think of sharing experience and trying to find the best way to maintain high quality. I mean, there are many initiatives to standardise scientific practices, for example using the BIDS format to organise data - that type of knowledge came from a consortium. Do you think we could extract some guidelines to help big database initiatives?
TP: I'm not sure about that. I mean, there is a whole science of data harmonization; there are experts who work on that. I'm somewhat sceptical about coming up with guidelines or toolboxes to be imposed on investigators when they are starting a new study. I think that there is a danger there. Yes, it would get easier to harmonize across cohorts, but there is a danger that it would stifle innovation and new discoveries. If everyone is doing everything the same way, then where is the novelty? Where is the potential for new knowledge?
What I've seen is that, basically, it's a democracy of the scientists and the trainees, voting indirectly by adopting certain tools more often than others. And then all of a sudden that tool emerges as the most commonly used tool. FreeSurfer is an example of that, right? There are different ways to extract information about cortical thickness and surface area, but I must say that in the majority of studies FreeSurfer became the main tool that everyone uses. So harmonisation has emerged in a natural way.
RC: In a similar vein, neuroimaging has faced a reproducibility crisis, just like genetics did before. There is increasing recognition that studies need to use larger sample sizes to produce more representative and reproducible findings. OHBM sessions have reflected these improvements in ways of working: creating best practices for methods, and promoting transparency via open publications, code, and data. The OHBM Open Science Room grows every year, and now there is the announcement of Aperture, its publication platform. What has been your experience, and how have you changed your practice? What advice do you give your lab members or early career researchers to improve the quality of their science?
TP: Well, that's a difficult one. I think that the starting point is critical thinking, and that's what I'm trying to convey to my students. We need to question conclusions and to question reliability. That's maybe one of the reasons that, even though we do use functional imaging, I put more emphasis on multimodal imaging of brain structure, because we know that structural imaging has higher reliability. Even though I started with PET blood flow activation studies, I moved into that field from my interest in brain-behaviour relationships, in a way. The relatively low test-retest reliability of functional measures, and of behaviour in general, made me shift my focus to features that are easier to measure, such as the structural properties of the brain. That's probably one of the reasons why I changed my way of doing science towards those large numbers. Test-retest reliability becomes crucial if you are interested in a trait: if you are doing genetics, or running epidemiological studies where you are interested in influences of the environment, you need a measure with the quality of a trait. That is, if I measure a trait today, and I measure it again two weeks later, I get more or less the same number. That's really crucial. I started by saying that one has to be critical, and I think that that's the simplest advice.
Another key for the quality of science is replication. Take functional imaging studies: split the sample, analyse the data in one half, and then see whether you find the same thing in the other half. Don't trust p-values. That would be my other advice. P-values will not guarantee reproducibility; replication would.
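As a rough illustration of the split-half logic Dr. Paus describes, here is a minimal sketch on simulated data. Every number, variable name, and the synthetic "effect map" are invented for the example, not taken from any real study; the point is that agreement between independently analysed halves, rather than a p-value in the full sample, is the evidence of replication.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one effect-size map (per voxel) for each subject.
n_subjects, n_voxels = 40, 1000
data = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
data[:, :50] += 0.8  # a simulated true effect in the first 50 voxels

# Split the sample into two independent halves.
order = rng.permutation(n_subjects)
half_a, half_b = data[order[:20]], data[order[20:]]

# Analyse each half separately (here: a simple group-mean map).
map_a = half_a.mean(axis=0)
map_b = half_b.mean(axis=0)

# Check whether the finding replicates: spatial correlation between halves.
r = np.corrcoef(map_a, map_b)[0, 1]
print(f"split-half spatial correlation: r = {r:.2f}")
```

In real fMRI work the per-half analysis would be a full statistical model rather than a mean, but the principle is the same: whatever survives in one half should reappear in the other.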
RC: So you said you started with PET and then moved more towards structural MRI. Now that we are on the advice side, what do you think would be the next big topic in neuroimaging? Would you advise a young neuroscientist to follow the trend or look for their own niche? If you had to start something new, what would you go for?
TP: I wasn't really thinking too deeply about what I wanted to do in five years. I went with the flow and was always driven by curiosity, by novelty, by something unexplored. Often I was critical of a finding that I didn't believe, and that triggered a line of thought: “I don't believe it's this way. Let's prove that it's the other way. What do I need to prove it?”
I do like to combine different levels of analysis. That's partly because of my initial educational background in medicine, human physiology, anatomy, etc., combined with a deep interest in behaviour and psychiatric disorders. So you have both the systems level and the molecular level, and you integrate across systems and across levels. I think that worked for me.
If I was going to do it again, I would probably again try to get a broad education that gives me at least some understanding of the different levels, rather than one very deep understanding of a particular approach, like the details of DNA structure. That just doesn't work for me, but it may work for someone else.
RC: Multidisciplinarity is at the core of cognitive science.
TP: It wasn't like that when I was starting! The fact that I got that broad education really prepared me for that interdisciplinarity and for working in large teams. When I was starting, the labs were small and there was little data sharing, even in genetics, and particularly in the genetics of Mendelian traits. There was fierce competition between people in terms of discovering disease genes, so people did not share. They competed with each other, and that is a dramatic change over the past 30 years, possibly the biggest change I've seen in science and the social aspects of science.
Now, even with the amount of sharing there is always competition. Competition is good, we need it. But the competition doesn't interfere, as it did in the past, with generating data, with access to data because open science puts everyone on an equal playing field. So now it's not about someone having access to these data and blocking us from having access. It's not the case any longer. You really have to share data in some form.
RC: Yes. The evolution of the field is towards being open, being collaborative and getting experience from those that know how to acquire data and those that have strong expertise in methods.
TP: Also, when you look at institutions that support this kind of approach, they are successful: institutions that support open science and develop platforms for data sharing, for example by enabling different bioinformatics databases to communicate with each other. An example is the Broad Institute of MIT and Harvard, in genetics.
RC: What are the findings that you are most proud of?
TP: There are two different types of things that I am proud of. I told you that I like innovation. I like doing things in a new way. In that context, I'm proud of two innovations. One is when we put together brain imaging and brain stimulation, our combined studies with transcranial magnetic stimulation and PET. Technically it was quite a challenge and I think we did it the right way. That approach eventually did not take off on a large scale. But I think in the mid 90s, when I worked on it, it was really exciting to be putting together TMS and PET in the way that we did. I'm definitely proud of that aspect.
Then, I think about what I'm doing now in terms of the combination of epidemiology, genetics, and neuroscience. I'm glad that I was able to put it together into that framework and I wrote a little book about it. I'm happy about it.
In terms of findings, I think two, for me, stand out. One goes back to the late 80s, to my PhD when I noticed some very interesting deficits associated with lesions of the anterior cingulate and then I followed up those findings with my first PET studies in Montreal. I came up with some discoveries about the function of the anterior cingulate cortex and its role in the interface between intention and motor control. Those early studies I still like.
The second finding is more recent and relates to what we have done in teenagers: the observation that testosterone has something to do with the radial growth of the axon, so, basically, the thickness of the axon, in particular in male adolescents, and how this may relate to axonal transport. That is a slight shift away from myelin and toward the axon, and I think it's important. We are pursuing that finding. I think that it's the axonal transport element that becomes very important for function. I personally believe that the link between axonal diameter and axonal transport will inform new studies of individuals, and also of mental illness. So that's the second finding that made a difference in my research.
RC: Are you going to talk about that during your OHBM lecture? Can you give us a sneak peek?
TP: I will talk mostly about big data and some findings from our work in the context of the ENIGMA and CHARGE consortia, relating to the developing brain. This will illustrate the power of big data. But I will start with a bit of history on how we got to where we are now and on how important observations are, going back to my mentor Brenda Milner.
RC: Thank you for your time and for chatting with me!
TP: Thank you, it was really enjoyable.
RC: I am really looking forward to your lecture. This year is going to be a different format, as OHBM is happening online. So I hope this teaser will attract a lot of digital attendees and that everyone will enjoy your lecture and the meeting safely from home.
By Nils Muhlert
Professor Michael Fox is a neurologist at Harvard Medical School and director of the Lab for Brain Network Imaging and Modulation. His research into brain network imaging to define targets for brain stimulation holds considerable promise for new and improved treatments for a wide range of neurological and neuropsychiatric conditions. Here we found out how his academic career started through a chance meeting with Marcus Raichle, about his plans for clinical translation of network neuroimaging, and his advice for early career researchers:
Nils Muhlert (NM): Thanks for meeting with us. We'll start by finding out about your background. How did you become interested in neuroimaging?
Michael Fox (MF): Good question. I didn't start off life as a neuroimager. I was an electrical engineer as an undergrad and then went to Washington University in St. Louis for a combined MD and PhD. I wanted to do something at the intersection of engineering and medicine. My interest in neuroimaging came when I was walking through the neuroimaging facility at Washington University in St. Louis, on the way to a meeting. I saw a poster hanging there in the hall by Marcus Raichle looking at brain imaging and the default mode network. I stopped, and I read the poster and thought, wow, that's fascinating. I had no idea who Marcus Raichle was, but I subsequently knocked on his door and said, “Hey, I'm Mike - I just read a poster out here that I think is really interesting.” And that's how I got interested in neuroimaging.
NM: And how have you found the challenges of balancing your clinical work with your academic work?
MF: It's a challenge! There are always time constraints. On the side of getting out papers and getting grants, clinician-scientists have to compete with full-time scientists. And with the challenge of taking care of patients, our clinical care has to be up to the same standard as that of full-time clinicians. It's like you're doing two jobs at once, and you have to be really good at both of them.
But with that challenge comes enormous opportunity. I wouldn't be doing both clinical and research if I didn't feel that it was valuable, and that one inherently informed the other. I don't feel like I'd know what the relevant research questions are to ask or to go after if I'm not seeing patients. Similarly, I won't know how to take care of my patients as best as I could, if I am not up to date on what the research is telling us about how to think about the brain.
NM: A lot of your work uses network neuroscience to understand how lesions in different locations in the brain can lead to similar symptoms. Can you tell us about this lesion-network mapping, how it works and how it can translate into the clinic?
MF: You asked me earlier: "how does research inform clinical care and clinical care inform research?". Well, this entire field came from a patient. Aaron Boes, who was a fellow of mine at the time, saw a patient who walked into the clinic with acute onset visual hallucinations. Radiology acquired a brain scan of that patient and found a focal lesion in the medial thalamus. Aaron Boes was fascinated by this patient. Why is it that a lesion in this particular location could result in these very impressive, rapid-onset, severe visual hallucinations?
Aaron did what any good neurologist would do: he went through the literature and found other similar cases of patients with brain lesions that caused acute onset visual hallucinations. He mapped out where all of these lesion locations were, and then was left scratching his head.
All these different cases that caused symptoms very similar to what his patient had were in different locations across the brain. That's when he had his critical insight: when you try to understand this patient's symptoms and you map out all the locations of brain damage, they don't line up. They don't intersect a single brain region.
Aaron literally came and knocked on my door and said, "Mike, I hear you do some kind of brain connectivity thing; could that brain connectivity stuff help us understand how all these lesions in different locations are causing the same symptom?"
Aaron's insight, which was in retrospect really brilliant, was that you can take a map of brain connectivity, overlay the lesions on a brain network and test the hypothesis that lesions causing the same symptom map to a single connected brain network rather than a single brain region. He was right for visual hallucinations. And subsequently, I think he's been right for every other neurological or psychiatric symptom that we've tried to investigate.
It's not really a new idea. Neurologists have known for a long time that symptoms probably map to brain networks or brain circuits. But before we had a wiring diagram, it was very hard to test that hypothesis or to figure out what the network or circuit was in a data-driven manner.
NM: How does it work in practice?
MF: In practice, you derive the network for each lesion location. So when you have a lesion that causes a certain symptom, you map it onto a brain atlas. You then turn to a connectome database and say, "Okay, I know where the lesion location is, but what I think is relevant for symptoms is everything that lesion location is connected to." So you turn the lesion into a lesion network, and you do that for every single lesion that you're interested in. Now, every lesion is going to be connected to hundreds of different brain areas, right? But if you take 40 lesions that all cause the same symptom, each one of those 40 lesions maps to a very different brain network, a different set of connections. The one thing that those 40 lesions should share, though, is the connections that are relevant to the shared symptom. And that's how you're able to pull out the circuit relevant for the symptom shared by those 40 lesions.
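[NM: The procedure Dr Fox describes can be illustrated with a toy sketch. This is not his lab's pipeline; the arrays, threshold, and function name below are hypothetical simplifications, with lesions and the connectome reduced to small numpy matrices.]

```python
import numpy as np

def lesion_network_map(lesion_masks, connectome, r_thresh=0.5):
    """Toy sketch of lesion network mapping (hypothetical simplification).

    lesion_masks : (n_lesions, n_voxels) boolean array, one mask per lesion
    connectome   : (n_voxels, n_voxels) voxel-wise functional connectivity
    Returns, for each voxel, the fraction of lesion networks that include it.
    """
    overlap = np.zeros(connectome.shape[0])
    for mask in lesion_masks:
        # Seed the connectome with the lesion: average connectivity
        # from every lesioned voxel to the rest of the brain.
        network = connectome[mask].mean(axis=0)
        # Threshold to keep strongly connected voxels: the "lesion network".
        overlap += np.abs(network) >= r_thresh
    # Voxels shared by most lesion networks are candidates for the
    # circuit underlying the shared symptom.
    return overlap / len(lesion_masks)
```

With, say, three single-voxel lesions that are each strongly connected to one common voxel, that common voxel comes out with an overlap of 1.0 even though the lesions themselves never intersect, which is the key point of the method.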
NM: That's great. So this is a great example of how open science, through the Human Connectome Project, has the potential to influence clinical practice...
MF: Very, very much so! I often feel a lot of gratitude for the field of neuroimaging as a whole and all the people out there who work so hard to build these connectome databases. If we didn't have things like Randy Buckner's Brain Genomics Superstruct Project, which is the connectome that I use for most of my work, if we didn't have the Wash U connectome, if we didn't have the MGH DTI connectome, then we wouldn't have the wiring diagram that allows me to do all the work that I do. So I'm very grateful to neuroimaging and grateful to these large-scale projects that gave us these wiring diagrams. I'm just a user of this amazing resource that other people built.
NM: That's great to hear. Right now it's tricky to carry out clinical research projects, so I imagine these large open databases are being well used. One topic that people have debated, particularly over the last couple of years, is clinical applications of fMRI. Your work seems to allow that: using functional brain networks to identify targets for deep brain stimulation. How did you find the process of convincing people of the suitability of that approach?
MF: You're getting really to the heart of it. My PhD was focused on neuroimaging, and so when I moved into the clinic and, in my residency, focused on trying to help people with brain problems, there was a disconnect. The field of functional neuroimaging does not have a lot of success stories. The idea was: "Hey, if we can see the brain at work and identify areas that light up, if we can see the brain's connectivity, if we can look at the anatomical connectivity based on things like diffusion mapping, then all this will lead to better clinical care, better diagnosis, better outcomes, better treatments." We don't have a lot of successes to hang our hat on. Even preoperative mapping with functional MRI is only used by a handful of centers, and there's still debate as to how valuable it actually is. And that's probably our number one success story of clinical translation of functional neuroimaging.
So I've spent a lot of time thinking through why that is. One reason might be that we're on the right path but we need larger cohort sizes, better scanners, the next great imaging technique to show us something in the brain that we couldn't see before.
The other possibility is that we're approaching how we use neuroimaging to improve clinical care in the wrong way. I don't know the answer, but there are a couple of shifts that I've made in how I use neuroimaging and how I think about it. One big shift has been away from correlational imaging toward causal mapping of human brain function. What I mean by this is that if you want to understand where a symptom lives in the human brain, neuroimagers have typically approached that by taking a bunch of patients with that symptom and identifying neuroimaging correlates of that symptom, which might be atrophy, PET metabolic patterns, resting-state connectivity changes, and so on. But the problem is that in the end, that's just a correlate, not a therapeutic target. It doesn't tell you whether that neuroimaging correlate is causing the problem, compensating for the problem, or just a risk factor for the problem. We've started focusing on brain lesions and brain stimulation sites as a way to get at this causality. The idea is that causal mapping of symptoms and brain function might be a more direct path to a treatment target.
The other big shift that I've made is a move away from focusing on single-subject neuroimaging data toward group neuroimaging data like the connectome. It's almost like I'm going in the opposite direction of where a lot of brilliant people are going: they're focusing on the individual and getting massive amounts of data on each single subject. That research is very valuable and might get us where we need to go with the clinical utility of single-subject imaging data. In the meantime, as they improve the methods and technologies for single-subject imaging, what I've found is that the group connectome is ready to be applied clinically right now. It's robust and reproducible, and the wiring diagram is the wiring diagram of the average human brain.
NM: So we have very high hopes for your work targeting stimulation sites to reduce symptoms in patients.
MF: Well, I don't want to overstate the success of my approach either. What we have right now is a lot of retrospective observations. So when we administer transcranial magnetic stimulation, for example, to try to reduce people's depression, what we see reproducibly is that the people who are stimulated at a certain brain circuit, or at a site that's connected to a certain circuit, are the people who get better. That is a reproducible, retrospective observation that explains why some people get better and some people do not. What we haven't done is take the next step, where we change our clinical practice and directly target that circuit to improve clinical outcomes. We're just now reaching that precipice, the point where we're convinced that the retrospective observation is real and reproducible. Now we've got to actually apply it prospectively and find out whether we can improve clinical outcomes.
NM: So what would you say are the most exciting things that your lab is working on now?
MF: I'd say it's twofold. One, I'm very, very excited that we're reaching the point where we can take some of these retrospective observations and actually test them prospectively in the clinic. Now, those are bigger grants and take a lot more money, but I believe those resources are going to be coming. So I'm very excited to find out whether we can prospectively confirm our results and make treatment better.
The other is focusing on symptoms that are in huge need of better treatment. We recently submitted a paper, for example, on lesions that get rid of addiction (for a similar paper see here [NM]). What brain circuit do those lesions map to? Does that identify a therapeutic target for addiction that could help constrain ongoing trials trying to make addiction treatment better?
In the field of depression, we've worked on brain lesions associated with depression, TMS sites associated with depression relief, and deep brain stimulation data where stimulation can either relieve or cause depression. What happens when you link up all three sources of causal information? Does it all converge on a single circuit target for depression across these different modalities?
On the science side, we're even working on lesions that manipulate measures of spirituality or religiosity. Is there a human brain circuit that we can link to spirituality in a causal way? And is that a therapeutic target?
We're having a lot of fun these days, looking at very interesting questions from both the scientific and the social side of things, while also going after the greatest therapeutic need, and then toward clinical validation of all these observations that we're coming up with.
NM: Finally, what is your advice for early career researchers and those who are interested in network neuroscience? What would you say is a good training pathway for them?
MF: One piece of advice is to follow your passion. If you're passionate about a particular brain problem, symptom, imaging technique, or brain circuit, follow that passion, because your work is going to be better if you're pursuing something you're passionate about, not just what your advisor is passionate about.
Two, look at where the herd is going, and then intentionally go in a different direction. If everybody believes that the next big advancement is a particular imaging technique or application, go the direction they're not going, because there are plenty of people already doing what the herd is doing. That's why the herd is going there: it's an obvious need, and a lot of smart people will fill it. Go the opposite direction, find a way that people are not thinking about it. That's where you feel like you add value to science, above and beyond what the community can generate. Think about it differently.
The last piece of advice is one I always tell my students. In my particular lab, we're focused on clinical translation and clinical application. So whenever my students come to me with brilliant ideas (and they do come to me with brilliant ideas), I try to play it out. I say, "Okay, let's say you're right, let's say that the experiment works out or that you're able to map it. Where does that go? What do we do with that information?" Oftentimes, what you realize when you play that out is that the experiment, although it might be interesting, has no pathway toward clinical translation. There's no way that you can turn that information into a better treatment or a therapeutic target. Now, not everybody's interested in clinical translation or identifying therapeutic targets, but for my lab, thinking ahead three steps, we want to know: "Where does your research go? What do you do with that result? And how does that result translate into something important and meaningful, in my case for taking care of patients?" Again, it's a different way of approaching things than maybe in other neuroimaging labs.
NM: That's great advice. Professor Fox, thank you very much for your time today. We really look forward to your talk.
MF: Thank you so much for your interest.